Overview

Develop a system for testing and logging Rice performance in a controlled environment on a per-commit basis.  Identify long-term performance trends and alert interested parties when established thresholds are exceeded.

Scope

We currently run the following performance tests against env14:

  • labs_Lab-PerformanceMedium_Lab-Performance-Page1.jmx
  • labs_Lab-PerformanceMedium_Lab-Performance-Page2.jmx
  • uicomponents_UifCompView_UifCompView-Page1.jmx
  • uicomponents_UifCompView_UifCompView-Page10.jmx
  • uicomponents_UifCompView_UifCompView-Page11.jmx
  • uicomponents_UifCompView_UifCompView-Page2.jmx
  • uicomponents_UifCompView_UifCompView-Page3.jmx
  • uicomponents_UifCompView_UifCompView-Page4.jmx
  • uicomponents_UifCompView_UifCompView-Page5.jmx
  • uicomponents_UifCompView_UifCompView-Page6.jmx
  • uicomponents_UifCompView_UifCompView-Page7.jmx
  • uicomponents_UifCompView_UifCompView-Page8.jmx
  • uicomponents_UifCompView_UifCompView-Page9.jmx

The data from these tests is not as meaningful as it should be, due to the unpredictable nature of a shared environment.  Therefore, we will create a dedicated performance testing environment, and adapt the existing tests to run against it. 

Once this foundation is in place, we will continue to expand our suite of automated performance tests and monitor these tests for long- and short-term degradation.

Requirements

Hardware

To be selected from the AWS EC2 instance types listed at http://aws.amazon.com/ec2/instance-types/#selecting-instance-types.

Software

  • Rice KRAD Sample App
  • Rice Sample App
  • DB: Oracle or MySQL

Automation Tools

We intend to use the following tools in our performance testing efforts:

  • JMeter
  • Selenium
  • Saucelabs
  • AppDynamics
  • Jenkins

We are presently using all of these tools, though not at the scale or with the precision we need. 

Environment Setup

A dedicated performance test environment with enough memory to run Tomcat with 2 GB allocated to it (e.g., env14).

Approach

Our first task will be to develop baseline performance standards on a dedicated performance testing environment.  With our current shared setup, it is impossible to determine consistently whether performance aberrations are due to changes in the application or to interference from other activity in the environment.

Using the tools listed above, we will run the existing CI JMeter jobs on the dedicated environment, both per-commit and nightly. The system will notify interested individuals when performance thresholds are exceeded.

We currently have these systems in place and running against env14, but we require a controlled environment in order to make the performance data meaningful.  Once a dedicated environment is in place, we will further develop our suite of performance tests, increasing coverage and precision.
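
As a rough illustration of the per-commit flow, the sketch below drives JMeter in non-GUI mode against the dedicated environment and passes the target host in as a JMeter property. The host name, test plan, and result file shown here are placeholders, not the actual CI configuration.

    #!/usr/bin/env python
    """Sketch of a per-commit JMeter run against the dedicated environment.
    Assumes the jmeter binary is on the PATH of the Jenkins agent."""
    import subprocess
    import sys

    PERF_HOST = "perf-env.example.edu"   # placeholder for the dedicated environment host
    TEST_PLAN = "labs_Lab-PerformanceMedium_Lab-Performance-Page1.jmx"
    RESULTS = "results.jtl"

    def run_jmeter():
        # Non-GUI mode: -n (no GUI), -t (test plan), -l (results file);
        # -Jhost=... supplies the target host as a JMeter property so the
        # same plan can be pointed at any environment.
        cmd = ["jmeter", "-n", "-t", TEST_PLAN, "-l", RESULTS,
               "-Jhost=" + PERF_HOST]
        return subprocess.call(cmd)

    if __name__ == "__main__":
        sys.exit(run_jmeter())

A Jenkins job triggered by commits to Rice could run this step and archive the .jtl file for the reporting described under Test Deliverables.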

Script Development

We will continue to follow the process outlined here to create Automated Functional Tests (AFTs).

Once AFTs are created, they are recorded as JMeter tests, and the JMeter tests are updated to use JMeter variables (example).  This process is described in greater detail here.
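
For example, one simple way to introduce the variables after recording is to replace the hard-coded host captured by the JMeter proxy with a ${host} reference. The sketch below does this with a plain text substitution; the recorded host value shown is only a placeholder.

    """Sketch: parameterize a recorded JMeter test plan by swapping the
    hard-coded host for a JMeter variable reference."""
    import sys

    RECORDED_HOST = "env14.rice.kuali.org"   # placeholder for the host captured during recording

    def parameterize(jmx_in, jmx_out):
        with open(jmx_in) as src:
            text = src.read()
        # Samplers will now reference ${host}; the plan can define a user
        # defined variable host = ${__P(host,<default>)} so the value is
        # supplied at run time with -Jhost=<target environment>.
        with open(jmx_out, "w") as dest:
            dest.write(text.replace(RECORDED_HOST, "${host}"))

    if __name__ == "__main__":
        parameterize(sys.argv[1], sys.argv[2])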

Test Deliverables

We will capture and deliver performance test results in the form of JMeter graph and trend reports (example). In addition, interested individuals will receive email alerts when performance thresholds are exceeded.
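
As a sketch of how the alert signal could be produced, the script below reads a JMeter results (.jtl) file saved in CSV format, averages the elapsed time per sampler label, and exits non-zero when a threshold is exceeded. The threshold value is a placeholder; in practice the per-test thresholds would come from the job configuration.

    """Sketch: threshold check over a JMeter CSV results (.jtl) file."""
    import csv
    import sys
    from collections import defaultdict

    THRESHOLD_MS = 2000   # placeholder ceiling for average response time

    def average_elapsed(jtl_path):
        totals = defaultdict(lambda: [0, 0])   # label -> [sum of elapsed ms, sample count]
        with open(jtl_path, newline="") as results:
            for row in csv.DictReader(results):
                # 'label' and 'elapsed' are standard columns when JMeter
                # saves results as CSV with a header row.
                totals[row["label"]][0] += int(row["elapsed"])
                totals[row["label"]][1] += 1
        return {label: total / count for label, (total, count) in totals.items()}

    def check(jtl_path):
        ok = True
        for label, avg in sorted(average_elapsed(jtl_path).items()):
            if avg > THRESHOLD_MS:
                print("THRESHOLD EXCEEDED: %s averaged %.0f ms" % (label, avg))
                ok = False
        return ok

    if __name__ == "__main__":
        sys.exit(0 if check(sys.argv[1]) else 1)

A non-zero exit status would let Jenkins mark the build as failed or unstable and send its normal notification email, so the script itself does not need to send mail.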

We may also make use of AppDynamics to generate detailed reports on performance bottlenecks.

Rice performance will be measured against expectations as documented here.

Rice performance results will be documented here: Rice Performance Historical Results.

Tasks and Delivery Estimates

Task | Delivery Estimate
Create dedicated performance testing environment. |
Point existing AFTs to dedicated performance testing environment. | 1 hour
Create Transaction performance test. | 4 hours
Create Action List performance test. | 4 hours
Institute method to bounce Tomcat without requiring redeployment. |
Institute process for gathering performance test logs. |

Supplemental
