
Article: Iterative, Automated, and Continuous Performance

by Scott Delap on Nov 21, 2007
Iterative and continuous are terms often used in reference to software testing. This new InfoQ article looks at whether the same concepts can be applied to performance tuning. Along the way, topics such as tooling and mocks are discussed with regard to how they need to be adjusted for performance testing as compared to testing for functional requirements.

Our industry has learned that if we deliver intermediate results and revisit functional requirements, we can avoid delivering the wrong system late. We have learned that if we perform unit and functional tests on a regular basis, we will deliver systems with fewer bugs. And yet, though we are concerned with the performance of our applications, we rarely test for performance until the application is nearly complete. Can the iterative, automated, and continuous lessons we've applied to functional testing apply to performance as well?

Today, we may argue about whether a build that completes with unit testing should run on an hourly, daily, or weekly basis. We may argue about 100% coverage vs. 50% coverage. We may argue and discuss and ponder the specific details of the process. But we all pretty much agree that performing automated builds, complete with unit testing, on a regularly scheduled basis is a best practice. Yet our arguments regarding performance testing tend to be limited to: don't...
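
To make the parallel concrete, here is a minimal sketch of what a performance check that rides along with the regular unit-test build might look like. It is an illustration rather than anything from the article: the record count, the time budget, and the stand-in parse method are all assumptions.

    import static org.junit.Assert.assertEquals;
    import static org.junit.Assert.assertTrue;

    import org.junit.Test;

    public class ParserPerformanceTest {

        // Illustrative numbers: tune the volume and budget to your own system.
        private static final int RECORDS = 10_000;
        private static final long BUDGET_MILLIS = 2_000;

        // Stand-in for the real code under test.
        private static String[] parse(String line) {
            return line.split(",");
        }

        @Test
        public void parsesBulkInputWithinBudget() {
            String[] input = new String[RECORDS];
            for (int i = 0; i < RECORDS; i++) {
                input[i] = "order-" + i + ",42,PENDING";
            }

            long start = System.nanoTime();
            int fields = 0;
            for (String line : input) {
                fields += parse(line).length; // use the result so the work isn't optimized away
            }
            long elapsedMillis = (System.nanoTime() - start) / 1_000_000;

            assertEquals(RECORDS * 3, fields);
            assertTrue("Parsed " + RECORDS + " records in " + elapsedMillis
                    + " ms; budget is " + BUDGET_MILLIS + " ms",
                    elapsedMillis <= BUDGET_MILLIS);
        }
    }

Because it is just another test, it runs on every build, and a performance regression fails the build the same way a functional regression would.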

Iterative, Automated, and Continuous Performance


Just another acceptance test by Kurt Christensen

On the teams I work with, we try to author our stories such that non-functional (e.g., performance and security) needs are addressed right alongside the functional needs. This seems to me the best way; I knew of a large team that built a fancy go-faster DAO layer, with all the tests running against a mocked database, only to find out at the end that the DAO layer choked on realistic data sets. Oops. By then it was too late.

I will admit, however, that incorporating this in stories is hard to do with teams that are struggling just to build the bare minimum of functionality. In that case, I usually wait until the team gets into a healthier groove, and then begin helping them to write non-functional acceptance tests.
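
As one possible illustration of the kind of non-functional acceptance test described above, here is a minimal sketch that runs a DAO-style query against a realistically sized data set instead of a mock. The H2 in-memory database, the table layout, the row count, and the time budget are all assumptions made for the example:

    import static org.junit.Assert.assertTrue;

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.Statement;

    import org.junit.Test;

    public class CustomerQueryAcceptanceTest {

        // Illustrative numbers: production-like volume and a lookup budget.
        private static final int ROWS = 100_000;
        private static final int LOOKUPS = 1_000;
        private static final long BUDGET_MILLIS = 500;

        @Test
        public void lookupsStayFastAtProductionVolume() throws Exception {
            // An in-memory H2 database stands in for the real schema.
            try (Connection conn = DriverManager.getConnection("jdbc:h2:mem:acceptance")) {
                try (Statement ddl = conn.createStatement()) {
                    ddl.execute("CREATE TABLE customer(id INT PRIMARY KEY, name VARCHAR(64))");
                }
                try (PreparedStatement insert =
                         conn.prepareStatement("INSERT INTO customer VALUES (?, ?)")) {
                    for (int i = 0; i < ROWS; i++) {
                        insert.setInt(1, i);
                        insert.setString(2, "customer-" + i);
                        insert.addBatch();
                    }
                    insert.executeBatch();
                }

                // Time a realistic burst of point lookups, not a single call.
                long start = System.nanoTime();
                try (PreparedStatement query =
                         conn.prepareStatement("SELECT name FROM customer WHERE id = ?")) {
                    for (int i = 0; i < LOOKUPS; i++) {
                        query.setInt(1, (i * 97) % ROWS);
                        try (ResultSet rs = query.executeQuery()) {
                            assertTrue(rs.next());
                        }
                    }
                }
                long elapsedMillis = (System.nanoTime() - start) / 1_000_000;

                assertTrue(LOOKUPS + " lookups took " + elapsedMillis + " ms; budget is "
                        + BUDGET_MILLIS + " ms", elapsedMillis <= BUDGET_MILLIS);
            }
        }
    }

The point is the shape of the test: load production-scale data first, then assert on elapsed time, so the mocked-database blind spot described above shows up long before the end of the project.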

Re: Just another acceptance test by Alex Popescu

Interesting approach. Can you provide some examples of what these performance stories look like?

tia,

./alex
--
.w( the_mindstorm )p.

Alexandru Popescu
Senior Software Eng.
InfoQ TechLead&CoFounder

