Authored by Kara Manton
Pega recommends testing for performance throughout the application development lifecycle. Discovering and remediating performance problems close to go-live is disruptive and expensive. To avoid this, make performance testing part of your sprint deliverables and routinely consider memory, CPU and the complexity and frequency of SQL queries:
Memory
The memory footprint of a user is determined by the size of individual cases multiplied by the number of cases open during a working session. The memory footprint on the server is critical for sizing a system to accommodate the expected user load while leaving sufficient headroom for occasional traffic spikes. Out-of-memory errors cause nodes to crash and disrupt end users, who lose work in progress.
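To see how this multiplication drives sizing, here is a minimal sketch of the arithmetic. Every figure in it (average case size, open cases per session, concurrent users, headroom percentage) is an assumption for illustration only, not Pega guidance; substitute measurements from your own Performance tool readings.

```java
public class ClipboardSizingEstimate {
    public static void main(String[] args) {
        // All figures below are illustrative assumptions, not measured values.
        double caseSizeMb  = 2.0;  // assumed average clipboard size of one open case, in MB
        int    openCases   = 3;    // assumed cases open at once during a working session
        int    activeUsers = 400;  // assumed concurrent users on one node

        double perUserMb  = caseSizeMb * openCases;   // memory footprint of one user
        double userLoadMb = perUserMb * activeUsers;  // footprint of the whole user population
        double headroomMb = userLoadMb * 0.30;        // keep ~30% headroom for traffic spikes

        System.out.printf("Per-user footprint: %.1f MB%n", perUserMb);
        System.out.printf("Plan for roughly %.0f MB of heap for requestor clipboards%n",
                userLoadMb + headroomMb);
    }
}
```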
Use the “Performance” tool in Developer Studio to examine clipboard (memory) usage from screen to screen. Identify key scenarios and evaluate them step by step, checking memory usage at each point. Run each scenario twice and verify that memory utilization is consistent; growth between runs indicates a leak caused by pages accumulating in memory.
Steps
- Reset the performance data.
- Execute the first screen in your scenario.
- Use the “Add reading with clipboard size” button.
- Open the most recently added delta snapshot and examine the requestor clipboard size to determine whether it is within the threshold.
- Repeat this process for each screen in the scenario.
Metric: Requestor Clipboard Size
Threshold: Clipboard sizes greater than 5 MB are worthy of closer examination.
Remediation: Investigate large pages and their payload to determine the business value of the data. Data retrieval is expensive, and often more data is retrieved than is needed to fulfill a business requirement. Examine the report definitions used to retrieve data to ensure they use an optimized WHERE clause that returns only the data needed. Explicitly removing temporary pages (created, for example, to run reports or fetch external data) reduces the memory footprint.
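As an illustration of the kind of query narrowing described above, the sketch below contrasts an unfiltered query with one that selects only the columns and rows the screen needs. The table and column names (myco_claims, claim_id, status, assigned_operator) and the connection details are hypothetical, and in a Pega application the query would normally come from a report definition rather than hand-written JDBC; the point is only how much the WHERE clause and column list shrink the data pulled into memory.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class NarrowQueryExample {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection details for illustration only.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://dbhost:5432/appdb", "app_user", "secret")) {

            // Broad query: every column of every row is pulled into memory,
            // even though the screen only needs a handful of open items.
            // SELECT * FROM myco_claims

            // Narrow query: only the columns the screen displays, only the rows
            // the operator can act on. Less data retrieved means smaller pages in memory.
            String sql = "SELECT claim_id, status, claim_amount "
                       + "FROM myco_claims "
                       + "WHERE status = ? AND assigned_operator = ?";
            try (PreparedStatement ps = conn.prepareStatement(sql)) {
                ps.setString(1, "Open");
                ps.setString(2, "op_jsmith");
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        System.out.println(rs.getString("claim_id") + " " + rs.getString("status"));
                    }
                }
            }
        }
    }
}
```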
CPU
CPU utilization per user impacts the sizing of a system. Optimizing CPU usage results in faster response times for end users and allows more users to run on the same hardware.
The Performance tool can be used in the same fashion as the memory examination to review key metrics that reflect the amount of work being done. For each screen in the scenario, use “add reading” (without the clipboard size) and open the delta snapshot to examine the metric.
Metric: Total number of Rules executed
Threshold: An interaction that executes more than 15,000 rules can likely be optimized.
Remediation: Investigate screens that execute a large number of rules, as this is an indication of complex or unoptimized logic. Evaluating this logic may identify design improvements that reduce the cost of execution.
Often a large result set from a data query causes the system to execute a large number of rules to process each result. Reduce this processing by examining the business value of the data returned and trimming the result set to only the data required to complete the task.
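The sketch below is a rough illustration of what trimming the result set means in practice: capping the rows returned so that per-row processing (the analogue of the rules that would execute for each result) runs far fewer times. The table and column names and the row cap are assumptions for illustration, not part of any Pega API.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class BoundedResultSetExample {
    // Hypothetical cap: only as many rows as the task actually needs.
    private static final int MAX_ROWS = 50;

    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://dbhost:5432/appdb", "app_user", "secret")) {

            String sql = "SELECT claim_id, status FROM myco_claims "
                       + "WHERE status = ? ORDER BY update_time DESC";
            try (PreparedStatement ps = conn.prepareStatement(sql)) {
                ps.setMaxRows(MAX_ROWS);  // the driver stops returning rows past the cap
                ps.setString(1, "Open");
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        // Each iteration stands in for the per-result processing that
                        // would otherwise run for every row of an unbounded result set.
                        process(rs.getString("claim_id"), rs.getString("status"));
                    }
                }
            }
        }
    }

    private static void process(String claimId, String status) {
        System.out.println(claimId + " -> " + status);
    }
}
```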
Database Queries / Connectors
The volume and speed of database access are often key factors in the performance of an application. The complexity and frequency of SQL queries are two metrics that should be analyzed.
Complexity of queries (the default alert threshold is 500 ms; the Tracer can be used to see every query and its elapsed time.)
The performance tool can be used to examine the elapsed time spent in a database executing a report or requesting data from an external system through a connector.
Metric: RDB I/O Elapsed
Threshold: 200 ms for a single query
Remediation: A query against the database that takes over 200 ms can be considered complex. Use the Tracer to examine each query executed in an interaction and identify the queries over the threshold. To reduce complexity, narrow the filter criteria and improve the WHERE clause.
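For a rough sense of how the 200 ms threshold can be checked outside of the Tracer, the sketch below times a single query and logs when it exceeds the limit. The connection details, table, and query are hypothetical; only the timing pattern matters.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class QueryTimingExample {
    private static final long THRESHOLD_MS = 200; // single-query threshold from the guidance above

    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://dbhost:5432/appdb", "app_user", "secret");
             PreparedStatement ps = conn.prepareStatement(
                 "SELECT claim_id FROM myco_claims WHERE assigned_operator = ?")) {

            ps.setString(1, "op_jsmith");

            long start = System.nanoTime();
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    // Consume the result set so the full round trip is measured.
                }
            }
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;

            if (elapsedMs > THRESHOLD_MS) {
                System.out.printf("Query exceeded threshold: %d ms (limit %d ms)%n",
                        elapsedMs, THRESHOLD_MS);
            } else {
                System.out.printf("Query completed in %d ms%n", elapsedMs);
            }
        }
    }
}
```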
Metric: RDB I/O Count
Threshold: 20 queries
Remediation: Excessive queries to the database may be a sign of improper query logic or a poorly designed database schema. Examine the queries to ensure they provide the business value needed. Repeated queries to the same table can be reduced by improving the filter criteria of the query.
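To make the "repeated queries to the same table" remediation concrete, here is a minimal sketch that replaces a query-per-item loop with a single query using an IN list, cutting the database I/O count from N calls to one. The names and connection details are hypothetical; the same consolidation idea applies however the queries are issued.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.util.Collections;
import java.util.List;

public class ConsolidatedQueryExample {
    public static void main(String[] args) throws Exception {
        List<String> claimIds = List.of("C-1001", "C-1002", "C-1003", "C-1004");

        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://dbhost:5432/appdb", "app_user", "secret")) {

            // Anti-pattern: one query per item, so the database I/O count grows with the list size.
            // for (String id : claimIds) { SELECT status FROM myco_claims WHERE claim_id = ? }

            // Consolidated: a single query fetches the status of every item in the list.
            String placeholders = String.join(",", Collections.nCopies(claimIds.size(), "?"));
            String sql = "SELECT claim_id, status FROM myco_claims WHERE claim_id IN (" + placeholders + ")";
            try (PreparedStatement ps = conn.prepareStatement(sql)) {
                for (int i = 0; i < claimIds.size(); i++) {
                    ps.setString(i + 1, claimIds.get(i));
                }
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        System.out.println(rs.getString("claim_id") + " -> " + rs.getString("status"));
                    }
                }
            }
        }
    }
}
```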