How to write a Performance Test Plan

Writing a Detailed Test Plan (DTP) before you begin a formal performance test cycle is essential. Too often, people just take a functional testing DTP template and modify it, which practically guarantees that they will leave out important information (because functional testing is very different to performance testing).

Over the years, I have had to review a lot of testers' DTPs. Read on for my tips on what to include in your test plan...

Standard Introductory Guff

Every document in a corporate environment needs to have some semi-standard text at the beginning or the document template police will come and arrest you (or something).

  • Introduction - The introduction should include a few words to describe what the project aims to achieve, an indication of why system performance and stability under load is important for the project, a high-level description of the contents of the document, and a statement describing the intended audience.
  • Related Documents - List any related or supporting documents that someone reading your document may need to look up. Examples of useful documents would be: non-functional requirements, solution architecture design, infrastructure design, performance (or overall) test strategy, master test plan, business process descriptions, and capacity planning documents.
  • Change History - List any changes to the document along with the document version number and the name of the person who made the update.
  • Document Signoff - It is good to include a section for reviews and approvals, because most people won't read a document unless they are forced to sign it off. If a company is very strict about signoff, it is good to put sections that may change frequently (timeline, business process descriptions) into separate documents to save you the trouble of getting modified versions of the document signed off again.
  • Distribution List - A list of all the people the document will be sent to (who are not required to approve/sign off the document).


Non-Functional Requirements

The non-functional requirements presented to performance testers (when they have been defined at all) generally look like they were written on the back of a napkin over a boozy lunch. As they are usually so brief, I like to include a copy of the formally specified requirements in the DTP, even if they are also defined in a stand-alone non-functional requirements document.

This is a good opportunity to highlight any gaps, capture any ad-hoc requirements, and to state your assumptions about how the requirements should be interpreted.

Requirements can be broken down into the following categories:

  • Transaction Volumes - This requirement seems to be called something different at every company (e.g. Transaction Mix, Volumetrics, Application Simulation Model). A workload model for the application should have been developed during the capacity planning phase of the project, and transaction throughput requirements (for peak load) should be defined in the project's NFR document. It's always nice if this is based on real data, as people frequently attempt to reduce the peak transaction volumes used for testing when they find that the system does not work under the defined peak load.
  • Response Times - Response time targets should be defined for each key transaction, ideally as percentiles (e.g. 90th percentile under 5 seconds) rather than averages, as averages can hide a long tail of slow transactions.
  • Error Rates - Define the maximum acceptable rate of failed transactions under load, and how errors should be treated in special situations such as failover, high-availability testing, and interface timeouts.
  • Defect Severity - If you want to define the criteria you will use to assess the severity of any defects you find, this is a good place to put them. The test manager will generally have included something like this in the Master Test Plan or Test Strategy document, but you can quickly assess whether it is appropriate for use during performance testing by checking whether it gives an indication of severity for (1) a transaction that does not meet a response time target, and (2) a business process that fails intermittently under load - maybe 1 time in 1000.

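As a sanity check on the workload model, it is worth confirming that the transaction volumes, response times, and think times imply a sensible number of virtual users. Here is a minimal sketch using Little's Law; all figures below are hypothetical examples, not values from any real NFR document:

```python
# Hypothetical figures for illustration -- substitute values from your own
# workload model and NFR document.
peak_tph = 9000        # peak transactions per hour (from the workload model)
resp_time_s = 5.0      # time a user spends in the business process (seconds)
think_time_s = 25.0    # scripted think time per iteration (seconds)

tps = peak_tph / 3600.0
# Little's Law: concurrent users = arrival rate x time each user spends per iteration
vusers = tps * (resp_time_s + think_time_s)
print(round(vusers))   # -> 75
```

If the number of virtual users implied by the workload model is wildly different from the number that has been planned (or licensed), that is worth raising before scripting begins.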
Test Environment

The environment used for performance testing is so vital to the quality of the test results that it should be described in detail. Document how closely the test environment matches production (hardware, software versions, network, and data volumes), and call out any differences that may affect how the results should be interpreted.

  • Change Management - Describe how changes to the test environment (code deployments, configuration changes, data refreshes) will be controlled during the test cycle, so that results from different test runs remain comparable.

Test Scenarios

Because such a small number of test cases are typically executed during a performance test cycle, it is fine to list them all in the test plan.

Test cases should describe the purpose/objective of the test, any dependencies the test case has, how the test should be executed, and the expected result of the test.

Some typical test types that might be included in your plan:

  • Shakeout Test - Check that the test environment is functioning correctly, the load testing tool's scripts run, the test data is correct, and the load generation infrastructure and system monitoring are set up correctly.
  • Peak Load Test - Validate the performance characteristics of the system under expected peak transaction volumes.
  • Break Test - Find the amount of load that will cause system failure (however you have defined it), and determine how much headroom/safety margin there is between Peak load and the point of failure.
  • Soak Test - Run the system under load for an extended period (e.g. overnight or over a weekend) to expose problems that only appear over time, such as memory leaks or other resource exhaustion.
  • Failover Test - Deliberately fail a system component (e.g. a node in a cluster) while the system is under load, to verify that failover occurs as designed and to measure the performance impact on users.
  • Network Sociability Test - Verify that the system performs acceptably over a realistic network, and that its bandwidth consumption does not degrade the performance of other applications sharing that network.
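For a Break Test, it helps to define the stepped load profile in the test plan rather than improvising it on the day. A small sketch of how such a profile might be generated (the starting load, step size, and ceiling below are hypothetical, not recommendations):

```python
def step_load_profile(start_users, step_users, step_minutes, max_users):
    """Yield (minute, vusers) pairs for a stepped ramp-up to max_users."""
    users, minute = start_users, 0
    while users <= max_users:
        yield minute, users
        users += step_users
        minute += step_minutes

# Ramp from 50 vusers in steps of 25 every 10 minutes, up to a ceiling of 150.
profile = list(step_load_profile(50, 25, 10, 150))
print(profile)  # -> [(0, 50), (10, 75), (20, 100), (30, 125), (40, 150)]
```

Holding each step long enough for the system to reach a steady state makes it much easier to pinpoint the load level at which failure occurs.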

Test Schedule

  • Dependencies - Tasks or deliverables that must be complete before testing can start (e.g. code freeze, environment build, test data load).
  • Entry and Exit Criteria - The conditions that must be met before the test cycle can begin, and the conditions that determine when it is complete.
  • Key Dates - Milestones such as the start and end of scripting, each test execution window, and the date results will be reported.

Dependencies (including support from other groups)

List any dependencies on people outside the performance testing team, such as DBAs, network administrators, and environment support staff, and confirm that their time has been booked for the test period.


Risks and Caveats

Close with any known risks to the validity of the test results (e.g. a scaled-down test environment, stubbed interfaces, or unrepresentative test data) and any caveats that readers should keep in mind when interpreting the results.

Tech tips from JDS