
Performance engineering

Article snapshot taken from Wikipedia with creative commons attribution-sharealike license. Give it a read and then ask your questions in the chat. We can research this topic together.
Latest revision as of 16:28, 15 September 2022


Performance engineering encompasses the techniques applied during a systems development life cycle to ensure the non-functional requirements for performance (such as throughput, latency, or memory usage) will be met. It may be alternatively referred to as systems performance engineering within systems engineering, and software performance engineering or application performance engineering within software engineering.

As the connection between application success and business success continues to gain recognition, particularly in the mobile space, application performance engineering has taken on a preventive and perfective role within the software development life cycle. As such, the term is typically used to describe the processes, people and technologies required to effectively test non-functional requirements, ensure adherence to service levels and optimize application performance prior to deployment.

The term performance engineering encompasses more than just the software and supporting infrastructure, and as such it is preferable from a macro view. Adherence to the non-functional requirements is also validated post-deployment by monitoring the production systems. This is part of IT service management (see also ITIL).

Performance engineering has become a separate discipline at a number of large corporations, with tasking separate but parallel to systems engineering. It is pervasive, involving people from multiple organizational units, but predominantly within the information technology organization.

Performance engineering objectives

  • Increase business revenue by ensuring the system can process transactions within the requisite timeframe
  • Eliminate system failures that require scrapping and writing off the system development effort due to missed performance objectives
  • Eliminate late system deployment due to performance issues
  • Eliminate avoidable system rework due to performance issues
  • Eliminate avoidable system tuning efforts
  • Avoid additional and unnecessary hardware acquisition costs
  • Reduce increased software maintenance costs due to performance problems in production
  • Reduce increased software maintenance costs due to software impacted by ad hoc performance fixes
  • Reduce additional operational overhead for handling system issues due to performance problems
  • Identify future bottlenecks by simulation over a prototype
  • Increase server capability

Performance engineering approach

Because this discipline is applied within multiple methodologies, the following activities will occur within differently specified phases. However, if the phases of the Rational Unified Process (RUP) are used as a framework, the activities will occur as follows:

During the first, Conceptual phase of a program or project, the critical business processes are identified. Typically they are classified as critical based upon revenue value, cost savings, or other assigned business value. This classification is done by the business unit, not the IT organization. High-level risks that may impact system performance are identified and described at this time. An example might be known performance risks for a particular vendor system. Finally, performance activities, roles, and deliverables are identified for the Elaboration phase. Activities and resource loading are incorporated into the Elaboration phase project plans.

Elaboration

During this defining phase, the critical business processes are decomposed to critical use cases. Probe cases will be decomposed further, as needed, to single page (screen) transitions. These are the use cases that will be subjected to script-driven performance testing.

The requirements that relate to performance engineering are the non-functional requirements, or NFRs. While a functional requirement specifies which business operations are to be performed, a performance-related non-functional requirement specifies how fast that business operation performs under defined circumstances.
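The distinction can be made concrete with a small sketch. The `submit_order` operation and the 200 ms threshold below are hypothetical illustrations, not values from the article:

```python
# Illustrative sketch: a performance NFR expressed as a testable check.
# submit_order and the 200 ms threshold are hypothetical examples.
import time

def submit_order():
    """Stand-in for a business operation."""
    time.sleep(0.02)                  # simulate 20 ms of work
    return "ORDER-OK"

# Functional requirement: WHAT the operation must do.
assert submit_order() == "ORDER-OK"

# Non-functional requirement: HOW FAST it must do it under defined
# circumstances, e.g. "submit_order completes within 200 ms for one user".
start = time.perf_counter()
submit_order()
elapsed_ms = (time.perf_counter() - start) * 1000
assert elapsed_ms < 200, f"NFR violated: {elapsed_ms:.1f} ms"
print(f"functional check passed; response time {elapsed_ms:.1f} ms")
```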

Construction

Early in this phase a number of performance tool related activities are required. These include:

  • Identify key development team members as subject matter experts for the selected tools.
  • Specify a profiling tool for the development/component unit test environment.
  • Specify an automated unit (component) performance test tool for the development/component unit test environment; this is used when no GUI yet exists to drive the components under development.
  • Specify an automated tool for driving server-side unit (components) for the development/component unit test environment.
  • Specify an automated multi-user capable script-driven end-to-end tool for the development/component unit test environment; this is used to execute screen-driven use cases.
  • Identify a database test data load tool for the development/component unit test environment; this is required to ensure that the database optimizer chooses correct execution paths and to enable reinitializing and reloading the database as needed.
  • Deploy the performance tools for the development team.
  • Presentations and training must be given to development team members on the selected tools.
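As one concrete instance of the profiling-tool item above, Python's standard-library `cProfile` can be pointed at a component under development; the `hot_loop` workload here is a made-up stand-in for real component code:

```python
# Profiling a component with Python's standard-library cProfile;
# hot_loop is a made-up workload standing in for real component code.
import cProfile
import io
import pstats

def hot_loop(n):
    total = 0
    for i in range(n):
        total += i * i
    return total

profiler = cProfile.Profile()
profiler.enable()
hot_loop(200_000)
profiler.disable()

out = io.StringIO()
pstats.Stats(profiler, stream=out).sort_stats("cumulative").print_stats(5)
print(out.getvalue())          # per-function breakdown of where time went
```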

The performance test team normally does not execute performance tests in the development environment, but rather in a specialized pre-deployment environment that is configured to be as close as possible to the planned production environment. This team will execute performance testing against test cases, validating that the critical use cases conform to the specified non-functional requirements. The team will execute load testing against a normally expected (median) load as well as a peak load. They will often run stress tests that will identify the system bottlenecks. The data gathered, and the analysis, will be fed back to the group that does performance tuning. Where necessary, the system will be tuned to bring nonconforming tests into conformance with the non-functional requirements.
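The median-versus-peak idea can be sketched minimally; the transaction is simulated by a random delay, and the user counts and timings are arbitrary illustrative choices:

```python
# Minimal script-driven load-test sketch: run a simulated transaction at a
# "median" and a "peak" concurrency level and report latency statistics.
# The transaction, user counts, and timings are all illustrative.
import random
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

def critical_use_case():
    time.sleep(random.uniform(0.01, 0.05))   # stand-in for one transaction

def run_load(concurrent_users, transactions=100):
    latencies = []
    def one_transaction():
        t0 = time.perf_counter()
        critical_use_case()
        latencies.append(time.perf_counter() - t0)
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        for _ in range(transactions):
            pool.submit(one_transaction)
    return statistics.median(latencies), max(latencies)

for label, users in [("median load", 10), ("peak load", 50)]:
    med, worst = run_load(users)
    print(f"{label}: median {med * 1000:.0f} ms, worst {worst * 1000:.0f} ms")
```

A real performance test team would of course drive the deployed system over the network with a dedicated tool rather than an in-process stub; the structure (fixed concurrency, latency collection, median and worst-case reporting) is the point.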

If performance engineering has been properly applied at each iteration and phase of the project to this point, this should be sufficient for the system to receive performance certification. However, if for some reason (perhaps proper performance engineering working practices were not applied) there are tests that cannot be tuned into compliance, then it will be necessary to return portions of the system to development for refactoring. In some cases the problem can be resolved with additional hardware, but adding more hardware quickly leads to diminishing returns.
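The diminishing returns from added hardware can be quantified with Amdahl's law: if α is the fraction of a workload that is sequential, the maximum speedup on P processors is 1/(α + (1−α)/P). For example, if 70% of a module can be parallelized (α = 0.3):

```python
# Diminishing returns from added processors, per Amdahl's law:
# speedup = 1 / (alpha + (1 - alpha) / P), alpha = serial fraction.
def amdahl_speedup(alpha, processors):
    return 1.0 / (alpha + (1.0 - alpha) / processors)

# 70% of the module parallelizable => alpha = 0.3
print(round(amdahl_speedup(0.3, 4), 3))   # 2.105: 4x the CPUs, only ~2x faster
print(round(amdahl_speedup(0.3, 8), 3))   # 2.581: doubling again gains ~20%
```

So quadrupling the processing power only doubles throughput, and doubling it again from 4 to 8 processors yields roughly a further one-fifth improvement.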

Transition

During this final phase the system is deployed to the production environment. A number of preparatory steps are required. These include:

  • Configuring the operating systems, network, servers (application, web, database, load balancer, etc.), and any message queueing software according to the base checklists and the optimizations identified in the performance test environment
  • Ensuring all performance monitoring software is deployed and configured
  • Running statistics on the database after the production data load is completed
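The database-statistics step in the list above corresponds to commands such as ANALYZE (PostgreSQL, SQLite) or UPDATE STATISTICS (SQL Server); a sketch using SQLite's standard-library driver, with an illustrative table and index:

```python
# Refreshing optimizer statistics after a data load, sketched with SQLite's
# standard-library driver; the table and index names are illustrative.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT)")
con.execute("CREATE INDEX idx_status ON orders(status)")
con.executemany(
    "INSERT INTO orders (status) VALUES (?)",
    [("closed",) if i % 10 == 0 else ("open",) for i in range(1000)],
)
con.execute("ANALYZE")   # recompute table/index statistics for the optimizer
rows = con.execute("SELECT * FROM sqlite_stat1").fetchall()
print(rows)              # row-count and selectivity estimates now available
```

Without refreshed statistics after a bulk load, the optimizer may plan queries against stale row-count estimates and choose poor execution paths, which is the failure mode the checklist item guards against.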

Once the new system is deployed, ongoing operations pick up performance activities, including:

  • Validating that weekly and monthly performance reports indicate that critical use cases perform within the specified non-functional requirement criteria
  • Submitting defects where use cases fall outside of the NFR criteria
  • Identifying projected trends from monthly and quarterly reports and, on a quarterly basis, executing capacity planning management activities

Service management

In the operational domain (post-production deployment) performance engineering focuses primarily on three areas: service level management, capacity management, and problem management.

Service level management

In the service level management area, performance engineering is concerned with service level agreements and the associated systems monitoring that serves to validate service level compliance, detect problems, and identify trends. For example, when real user monitoring is deployed it is possible to ensure that user transactions are being executed in conformance with specified non-functional requirements. Transaction response time is logged in a database such that queries and reports can be run against the data. This permits trend analysis that can be useful for capacity management. When user transactions fall out of band, the events should generate alerts so that attention may be applied to the situation.
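The logging-and-reporting pattern described here might be sketched as follows; the table, sample timings, and the 500 ms service-level threshold are illustrative assumptions, not values from the article:

```python
# Sketch of logging transaction response times to a database and running a
# report against them; the table, sample data, and 500 ms threshold are
# illustrative assumptions.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE rum (use_case TEXT, response_ms REAL)")
con.executemany(
    "INSERT INTO rum VALUES (?, ?)",
    [("login", 120), ("login", 180), ("login", 950), ("search", 300)],
)

SLA_MS = 500   # hypothetical service-level threshold
report = con.execute(
    "SELECT use_case, AVG(response_ms), MAX(response_ms) "
    "FROM rum GROUP BY use_case ORDER BY use_case"
).fetchall()
for use_case, avg_ms, worst_ms in report:
    flag = "ALERT" if worst_ms > SLA_MS else "ok"
    print(f"{use_case}: avg {avg_ms:.0f} ms, max {worst_ms:.0f} ms [{flag}]")
```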

Capacity management

For capacity management, performance engineering focuses on ensuring that the systems will remain within performance compliance. This means executing trend analysis on historical data generated by monitoring, such that the future time of non-compliance is predictable. For example, if a system is showing a trend of slowing transaction processing (which might be due to growing data set sizes, or increasing numbers of concurrent users, or other factors) then at some point the system will no longer meet the criteria specified within the service level agreements. Capacity management is charged with ensuring that additional capacity is added in advance of that point (additional CPUs, more memory, new database indexing, et cetera) so that the trend lines are reset and the system will remain within the specified performance range.
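The trend-analysis idea can be sketched with a simple least-squares fit; the monthly averages and the 500 ms service level below are made-up illustrative data:

```python
# Capacity-management trend sketch: least-squares fit over hypothetical
# monthly average response times, predicting when a made-up 500 ms service
# level would be breached.
months = [1, 2, 3, 4, 5, 6]
avg_response_ms = [210, 240, 265, 300, 330, 355]   # slowing as load grows

n = len(months)
mean_x = sum(months) / n
mean_y = sum(avg_response_ms) / n
slope = (sum((x - mean_x) * (y - mean_y)
             for x, y in zip(months, avg_response_ms))
         / sum((x - mean_x) ** 2 for x in months))
intercept = mean_y - slope * mean_x

SLA_MS = 500
breach_month = (SLA_MS - intercept) / slope
print(f"trend: +{slope:.1f} ms/month; "
      f"{SLA_MS} ms limit breached around month {breach_month:.1f}")
```

A prediction like this is what lets capacity be budgeted and added before the service level agreement is actually violated, rather than after.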

Problem management

Within the problem management domain, the performance engineering practices are focused on resolving the root cause of performance related problems. These typically involve system tuning, changing operating system or device parameters, or even refactoring the application software to resolve poor performance due to poor design or bad coding practices.

Monitoring

To ensure that there is proper feedback validating that the system meets the NFR-specified performance metrics, any major system needs a monitoring subsystem. The planning, design, installation, configuration, and control of the monitoring subsystem are specified by an appropriately defined monitoring process. The benefits are as follows:

  • It is possible to establish service level agreements at the use case level.
  • It is possible to turn on and turn off monitoring at periodic points or to support problem resolution.
  • It enables the generation of regular reports.
  • It enables tracking of trends over time, such as the impact of increasing user loads and growing data sets on use case level performance.

The value of the trend analysis component cannot be overstated. This functionality, properly implemented, enables predicting when a given application undergoing gradually increasing user loads and growing data sets will exceed the specified non-functional performance requirements for a given use case. This permits proper management budgeting for, acquisition of, and deployment of the required resources to keep the system running within the parameters of the non-functional performance requirements.


References

  1. "Banking Industry Lessons Learned in Outsourcing Testing Services," Gartner. August 2, 2012.

