Ben Simo's post on performance testing is an interesting read. His lessons learned more or less map to my limited experience of testing applications for performance, an aspect often overlooked by functional testers.
This subject is subcategorized into various areas such as performance, scalability, load, and volume testing. For simplicity I'll refer to these here under the general term performance testing.

Performance is the last thing that appears in many of the requirements documents I've come across. One reason could be that requirements are written by business analysts or product managers with limited exposure to the software development process. For them, performance is not a requirement; it's "implied that the system works to the satisfaction of the end users". For the end user, though, a non-performing system is a defective system. When developing a "custom-made solution" for "a specific client", also known as a consulting implementation, performance requirements can be defined specifically and made part of the service level agreements: the number of users, data volumes, and hardware specifications are known before the system is developed. It's often difficult to specify performance requirements for a "generic solution" that can be configured in complex ways on multiple operating platforms and implemented at clients ranging from big corporates to small and medium enterprises.
I've seen mixed reactions from developers on performance requirements. Developers fall into several categories based on how they interpret the "implicit" requirement. Some ignore it and assume their design takes care of performance. Some make implicit assumptions about it: "My assumption is that on a typical setup about 100,000 records get uploaded to the transaction_history table by the nightly process", or "I feel there may be about 10 users requesting a customer record search". A few ask the business analyst or product manager about the performance expectations; they may or may not get the answers they are looking for, but with the inputs they do get, they design the system for the worst-case scenario. Others realize their design may not scale to the performance expectations, but time pressure makes them attend to more important tasks. There are, of course, developers who go to great lengths to ensure their code meets any performance goal.
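Implicit assumptions like these are cheap to check long before the nightly process runs against production volumes. A minimal sketch of checking the "100,000 records" assumption, using a stand-in SQLite table (the table name comes from the quote above; the schema and in-memory database are invented for illustration):

```python
import sqlite3
import time

# Stand-in for the real transaction_history table; schema is invented.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE transaction_history (id INTEGER, amount REAL)")

# Simulate the assumed nightly upload of 100,000 records and time it.
rows = [(i, i * 1.5) for i in range(100_000)]
start = time.perf_counter()
conn.executemany("INSERT INTO transaction_history VALUES (?, ?)", rows)
conn.commit()
elapsed = time.perf_counter() - start

count = conn.execute("SELECT COUNT(*) FROM transaction_history").fetchone()[0]
print(f"Inserted {count} rows in {elapsed:.2f}s")
```

Even a crude timing like this turns a vague "I assume about 100,000 records" into a number the team can argue about.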
Testers have their own view of performance. Unless there is a mandate for performance to be tested, they keep it low on their priority list. The testers asked to test for performance are usually functional testers with limited experience in it. In iterative development cycles, performance can be tested only after the last iteration is delivered for testing, which leaves very little time. Creating sample data for performance testing is also a difficult proposition: functional testers create their own data for specific scenarios, or use a "limited" set of client data in which functional data-integrity issues are easier to detect.
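One low-cost way around the sample-data problem is to generate synthetic volume data up front. A minimal sketch, with invented customer fields and record counts chosen purely for illustration:

```python
import csv
import random
import string

# Hypothetical record shape; field names are illustrative, not from any real system.
def random_customer(i):
    name = "".join(random.choices(string.ascii_uppercase, k=8))
    return {
        "customer_id": i,
        "name": name,
        "balance": round(random.uniform(0, 10_000), 2),
    }

# Write 50,000 synthetic rows to a CSV for bulk loading into the test system.
with open("customers.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["customer_id", "name", "balance"])
    writer.writeheader()
    for i in range(50_000):
        writer.writerow(random_customer(i))
```

Synthetic data won't match the skew of real client data, but it gets a volume test running without waiting for a production extract.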
Performance testing needs to be treated as a specialized area of testing, not mixed in with functional testing. This again depends on the budget of the project. "Low end" performance tests can be done by testers huddled in a single room, banging away on the system with the intent that something may go wrong. This type of testing, though very labor intensive, may detect many concurrency issues. Unless there is development support for this kind of effort, however, the defects it uncovers tend to be marked as non-reproducible.
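Many of those "non-reproducible" concurrency defects can be scripted into a repeatable test. A minimal sketch of a lost-update race (the account balance and withdraw function are illustrative, not from any real system):

```python
import threading
import time

balance = 100  # shared state, updated without any locking

def withdraw(amount):
    global balance
    current = balance           # read the shared value
    time.sleep(0.1)             # widen the race window so the bug reproduces reliably
    balance = current - amount  # write back, clobbering concurrent withdrawals

# Five concurrent withdrawals of 10 should leave a balance of 50.
threads = [threading.Thread(target=withdraw, args=(10,)) for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Lost updates leave the balance well above the expected 50.
print(f"balance={balance}, expected 50")
```

Where a room full of testers hits such a bug once and can't repeat it, a scripted race like this fails on every run, which makes the defect much harder to dismiss.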
A cost-conscious project can choose from the wide range of open source tools available for performance testing.
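Even before adopting a full tool, a bare-bones load harness can be put together with nothing but the standard library. The sketch below is illustrative only: the in-process server exists just to make the example self-contained, and the worker and request counts are arbitrary; in practice the URL would point at the system under test.

```python
import statistics
import threading
import time
from concurrent.futures import ThreadPoolExecutor
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# Stand-in server so the sketch runs on its own; replace with the real system's URL.
class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):
        pass  # silence per-request logging

server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_port}/"

def timed_request(_):
    start = time.perf_counter()
    urlopen(url).read()
    return time.perf_counter() - start

# 10 concurrent "users" issuing 200 requests in total.
with ThreadPoolExecutor(max_workers=10) as pool:
    latencies = sorted(pool.map(timed_request, range(200)))

p95 = latencies[int(len(latencies) * 0.95)]
print(f"median={statistics.median(latencies)*1000:.1f}ms p95={p95*1000:.1f}ms")
server.shutdown()
```

A harness like this won't replace a dedicated tool, but it is often enough to expose gross latency problems under modest concurrency.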