Lots of people write articles on what they think are “best practices” in performance testing, but I think that it is also good to spend some time thinking about what projects do wrong.

This is a list of problems I have seen again and again on different projects over the years. They are mostly management-related problems, rather than technical problems, so I would consider this a guide for Test Managers and Project Managers, rather than performance testers (maybe they can print it out and pin it to their cubicle wall though).

Here are some of the most common things projects do that make performance testing difficult or less productive…


  • Forget to include Performance Testing in the project plan (or budget)
    It must be a horrible feeling to be a Project Manager and realise, towards the end of the project, that you need to do performance testing but have not allowed any time in the project plan, and have not included it in the project budget. When weighing up the consequences of the system falling in a screaming heap on day one, remember that the very worst kind of performance testing a project can do is no testing at all.
  • Treat Performance Testing as a checkbox
    Project Managers who have been burnt by a poorly performing or unstable application in the past take performance testing very seriously. Some project managers who haven’t had that experience treat it as another checkbox they need to tick, and are not concerned whether the performance testing is done well or not.
  • Wear your rose-coloured glasses
    There’s a saying “hope for the best, plan for the worst”, but so many people never get past the first half of the saying. Whether it is misplaced optimism, or simply lack of imagination, planning a performance test phase with no time or resources to fix any non-trivial problems that are found is a recipe for schedule slippage. The power of positive thinking can’t make your application work better.
  • Don’t bother to define pass/fail criteria
    Even projects that explicitly plan a performance testing phase frequently forget to define their non-functional requirements. Plans with fuzzy objectives like tuning “until it is fast enough” or testing with “the most load the system can support” are not helpful. If a project doesn’t know what its requirements are at the start of testing, it is unlikely to suddenly have a crystal clear idea of what they are when performance test results are presented.
  • Stick to the requirements, even when they don’t make sense
    You should know by now that projects aren’t very good at defining their non-functional requirements, so a certain amount of common sense is needed when applying them to the performance test results. As an example, imagine that a response time requirement specified that average response times for login should be less than 5 seconds. During performance testing, it is found that 90% of login transactions take 1 second, but 10% take 40 seconds. On average, the response time for the login transaction is 4.9 seconds, so someone who interprets the requirement very strictly would consider the response time acceptable, but anyone with good critical thinking skills would see that something is wrong and ask for the intent behind the requirement to be clarified (there is a worked example of this average-versus-percentile trap just after this list).
  • Use the wrong people to do your Performance Testing
    A very common mistake is to assume that someone who does functional test automation is necessarily suited to performance testing because they know how to use another tool made by the same company. Another mistake is to assume that just because the project has purchased the very best tool money can buy, and it is very easy to use, this will compensate for testers who don’t know anything about performance testing (“fools with tools”). Performance testing is a highly technical activity, and is not a suitable job for anyone who cannot write code or who does not understand how the system under test fits together.
  • Don’t provide enough business support during test planning
    A performance tester needs support from business experts (SMEs), especially during the planning phase. One of the key pieces of information a performance tester needs is the expected transaction volumes for the new system. This is very much a case of garbage in, garbage out. If the volumes are wrong, the testing will be unrealistic. This means that it might not be a great idea to test with the fanciful transaction volumes that were used for the business justification of the new system, but it is also not good to test with artificially low volumes devised to maximise the chances of the new system passing the performance test phase with no changes.
  • Don’t provide enough technical support to investigate and fix problems
    A good way to ensure that it takes a long time to fix defects is to fail to provide someone who is capable of fixing the problem, or to provide someone who is too busy to work on the problem. Load and performance-related defects are difficult problems, which are not suitable to assign to a junior developer. It is best to make code-related performance problems the responsibility of a single senior developer, so that they have a chance to focus, and are not distracted by all the other (much easier to fix) problems in the buglist.
  • Don’t let performance testers do any investigation themselves
    Having a rigidly enforced line between testers (who find problems), and a technical team (who determine the root cause of a problem, and fix it) doesn’t work so well with performance testing. Performance testers find problems that are impossible for other teams to reproduce themselves (and it’s pretty hard to fix a problem you can’t reproduce). This means that performance testers and technical teams need to work together to determine the root cause of problems. Performance testers can do a lot of this by themselves if they have access to the right information. This means setting up infrastructure monitoring, and providing logons to servers in the test environment and access to any application logs.
  • Poor change control/tracking
    The results of a performance test show the performance characteristics of a specific version of software, with specific configuration settings, running on a particular piece of hardware. If any of these factors change, then the results of the test may be different. This is why it is really important to know exactly what is being tested, and how this compares to the final system in Production. I always expect a large number of configuration changes during a performance test cycle, so it is critical to keep track of them. Don’t make the mistake of thinking you can fix this problem with a slow and painful change control process. A performance tuning cycle requires changes to be made quickly, and cannot be bogged down in red tape.
  • Wishful extrapolation
    Imagine that the test system is two 2-CPU servers, and performance testing shows that it can handle 250 orders per hour. The Production system is two 8-CPU servers, so it should be able to handle 1000 orders per hour, right? Well, not necessarily; this assumes that the system scales linearly and that CPU is the only bottleneck, and both are bad assumptions. It is best to test in a Production-like environment, or to have a very solid (experimentally proven) knowledge of how your system scales (see the scaling sketch just after this list).
  • Ignore high-severity defects for as long as possible
    A long time ago I worked on an SAP order entry system that was to be used by several hundred workers in a call centre. Due to a client-side resource leak, the user interface would hang approximately every 15 minutes when entering orders, forcing the users to restart the application and lose whatever data they had already keyed. This critical problem was not fixed for several months, which prevented me from finding any other load-related problems during that time (effectively reducing the time available for performance testing). The great computer scientist Donald Knuth’s philosophy was “get it working first, and then get it working fast”, but he didn’t mean “get it working perfectly”. Defects should be addressed according to their severity, rather than the test phase they were discovered in. A severity 2 defect discovered during performance testing should be fixed before a severity 3 defect discovered during functional testing. Fortunately, any non-trivial project has multiple developers, who are all able to work on different problems in parallel, so there is no good reason to ignore defects found during performance testing.
  • Hide problems
    One of the main reasons for software testing is so that the Business stakeholders can make an informed decision about whether a new system is ready to “go live”. Often performance testers are put under pressure to downplay the severity or likelihood of any problems in their Test Summary Report. This is usually due to a conflict of interest; perhaps performance testing is the responsibility of the vendor (who is keen to hit a payment milestone), or maybe the project manager is rewarded for hitting a go-live date more than for deploying a stable system. In either case, it is a bad outcome for the Business.
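
To put the login example above in concrete terms, here is a minimal sketch (in Python, which is simply my choice of language here) showing how reporting percentiles alongside the average exposes a problem that the average alone hides. The response times are the ones from the example; the percentile calculation is a simple illustration, not anything prescribed by a particular tool.

    # Hypothetical sample of 100 login transactions: 90% take 1 second, 10% take 40 seconds.
    response_times = [1.0] * 90 + [40.0] * 10

    average = sum(response_times) / len(response_times)
    sorted_times = sorted(response_times)
    p90 = sorted_times[int(0.90 * len(sorted_times)) - 1]  # 90th percentile
    p95 = sorted_times[int(0.95 * len(sorted_times)) - 1]  # 95th percentile

    print(f"average:         {average:.1f}s")  # 4.9s - "passes" a 5-second average target
    print(f"90th percentile: {p90:.1f}s")      # 1.0s
    print(f"95th percentile: {p95:.1f}s")      # 40.0s - one user in ten waits 40 seconds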
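
And to put rough numbers on the wishful extrapolation item, the sketch below applies Gunther’s Universal Scalability Law with assumed coefficients (they are illustrative, not measurements from any real system) to show how contention and coherency costs erode the hoped-for four-fold increase in throughput.

    # Universal Scalability Law: C(n) = lam * n / (1 + sigma*(n-1) + kappa*n*(n-1))
    # lam = throughput of a single CPU, sigma = contention cost, kappa = coherency cost.
    # The coefficient values are assumptions, chosen so that 4 CPUs give ~250 orders/hour.
    def usl_throughput(n, lam=72.6, sigma=0.05, kappa=0.001):
        return lam * n / (1 + sigma * (n - 1) + kappa * n * (n - 1))

    for cpus in (4, 16):
        print(f"{cpus:2d} CPUs -> {usl_throughput(cpus):.0f} orders per hour")
    #  4 CPUs -> 250 orders per hour (the test result)
    # 16 CPUs -> 584 orders per hour, well short of the 1000 that linear scaling predicts

Fitting the contention and coherency coefficients to a handful of measured data points is one way of getting the “experimentally proven” knowledge of scaling mentioned above.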

As usual, if you think of anything that should be added to this list, please leave a comment.

6 comments on “Worst Practices in Performance Testing”

  1. Test managers always want to re-write my reports. The last one wanted me to remove every occurrence of the word “defect” from the report, and replace it with “issue”. :-)

    Also, this Tech Tip reminded me of a quote from Douglas Adams…
    Human beings, who are almost unique in having the ability to learn from the experience of others, are also remarkable for their apparent disinclination to do so.

  2. Often a symptom of the first three items above is that performance testing is left to the very last week or two before the go-live date.
    It is a very good idea to performance test an application as early as possible, particularly if the application has been highly customized. Some integration tools even allow you to run a performance test during code build cycles.
    Defects raised during performance testing tend to be architectural in nature, which can result in fundamental code changes and therefore a lot of code or configuration rework. Effective projects tend to include a full performance review cycle within their solution design phase, which can be very beneficial in mitigating the risk of significant performance defects being raised.

  3. SMEs are also the people who commonly know how the business will perform the various transactions, so they can identify the transaction steps for the performance tester. Otherwise, in large systems, testers may invoke a business process that is not actually used by the business.

  4. Another example for “Stick to the requirements, even when they don’t make sense”

    When there is only one response time threshold for all applications/transactions, e.g. 3 or 5 seconds.

    Such a single threshold is often meaningless for transactions that handle large volumes of data (e.g. search, export, upload, and download transactions).

    I think appropriate response time thresholds should be defined separately for such cases.

  5. Excellent list, Stuart.

    I’ll add a couple more if I may:

    1. Insist on running tests when there are known issues with the environment, thereby compromising the validity of the results.

    2. Change the scope of testing to cater for (bad) results, rather than objectively analysing the results and considering what they mean.

  6. About sharing information between performance testers and developers: I think it is very important to have a central store for test results that both developers and testers have access to. It can also be good practice (in the case of a web application that can be accessed from the Internet) to monitor performance characteristics in Production. From this point of view, using a cloud testing service such as http://blazemeter.com can be very useful, because:
    – you can execute performance tests from time to time against the application in Production;
    – you can have one account for managers, testers and developers, so everyone can see the latest results.
