Develop a system for testing and logging Rice performance in a controlled environment on a per-commit basis. Identify long-term performance trends and alert interested parties when established thresholds are exceeded.
We currently run the following performance tests against env14:
The data from these tests is not as meaningful as it should be, due to the unpredictable nature of a shared environment. Therefore, we will create a dedicated performance testing environment, and adapt the existing tests to run against it.
Once this foundation is in place, we will continue to expand our suite of automated performance tests and monitor these tests for long- and short-term degradation.
(To be selected from these configurations: http://aws.amazon.com/ec2/instance-types/#selecting-instance-types.)
- Rice KRAD Sample App
- Rice Sample App
- DB: Oracle or MySQL
We intend to use the following tools in our performance testing efforts:
We are presently using all of these tools, though not at the scale or with the precision we need.
A dedicated performance test environment with enough memory to run Tomcat with 2 gigabytes allocated (e.g., env14).
Our first task will be to develop baseline performance standards on a dedicated performance testing environment. With our current shared setup it is impossible to consistently determine whether performance aberrations are due to changes in the app, or to interference from other activities.
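One way to make "baseline performance standards" operational is to flag a run as aberrant when it falls more than a couple of standard deviations above the baseline mean. The sketch below assumes this approach; the function name and the two-sigma cutoff are our illustrative choices, not an existing Rice utility.

```python
import statistics

def exceeds_baseline(baseline_runs, current_run, sigmas=2.0):
    """Flag a regression when the current run's mean response time
    exceeds the baseline mean by more than `sigmas` standard deviations.

    baseline_runs: mean response times (ms) from prior dedicated-env runs
    current_run:   mean response time (ms) of the run under test
    """
    mean = statistics.mean(baseline_runs)
    stdev = statistics.stdev(baseline_runs)
    return current_run > mean + sigmas * stdev
```

On a dedicated environment the baseline variance should be small, so even modest regressions clear the cutoff; on a shared environment the same test produces a wide baseline that masks real changes, which is the problem described above.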
Using the tools listed above, we will run the existing CI JMeter jobs against the dedicated environment, both per commit and nightly. The system will notify interested individuals when performance thresholds are exceeded.
We currently have these systems in place and running against env14, but we require a controlled environment in order to make the performance data meaningful. Once a dedicated environment is in place, we will further develop our suite of performance tests, increasing coverage and precision.
We will continue to follow the process outlined here to create Automated Functional Tests (AFTs).
We will capture and deliver performance test results in the form of JMeter graph and trend reports (example). In addition, interested individuals will receive email alerts when performance thresholds are exceeded.
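The alert email itself could be assembled along these lines, using Python's standard library; the addresses and commit-identifier format are placeholders, and delivery details (SMTP host, recipient list) would be supplied by the CI system.

```python
from email.message import EmailMessage

def build_alert(breaches, commit_id):
    """Build a threshold-breach notification email.

    breaches:  {label: average elapsed ms} pairs that exceeded thresholds
    commit_id: identifier of the commit under test (placeholder format)
    """
    msg = EmailMessage()
    msg["Subject"] = f"Rice performance threshold exceeded at {commit_id}"
    msg["From"] = "perf-ci@example.org"          # placeholder address
    msg["To"] = "rice-perf-alerts@example.org"   # placeholder list
    lines = [f"{label}: {avg:.0f} ms average"
             for label, avg in sorted(breaches.items())]
    msg.set_content(
        "The following samplers exceeded their thresholds:\n" + "\n".join(lines)
    )
    return msg

# Sending is then a matter of smtplib.SMTP(host).send_message(msg).
```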
We may also make use of AppDynamics to generate detailed reports on performance bottlenecks.
Rice performance will be measured against expectations as documented here.
Rice performance results will be documented here: Rice Performance Historical Results.
Tasks and Delivery Estimates
|Task|Estimate|
|Create dedicated performance testing environment.||
|Point existing AFTs to dedicated performance testing environment.|1 hour|
|Create Transaction performance test.|4 hours|
|Create Action List performance test.|4 hours|
|Institute method to bounce Tomcat without requiring redeployment.||
|Institute process for gathering performance test logs.||