Article: Iterative, Automated and Continuous Performance

Iterative and continuous are terms often used in reference to software testing. This new InfoQ article looks at whether the same concepts can be applied to performance tuning. Along the way, topics such as tooling and mocks are discussed with regard to how they need to be adjusted for performance testing as opposed to testing for functional requirements.

Our industry has learned that if we deliver intermediate results and revisit functional requirements we can avoid delivering the wrong system late. We have learned that if we perform unit and functional tests on a regular basis, we will deliver systems with fewer bugs. And though we are concerned with the performance of our applications, we rarely test for performance until the application is nearly complete. Can the lessons of iterative, automated and continuous that we've applied to functional testing apply to performance as well?

Today, we may argue over whether a build complete with unit testing should run on an hourly, daily, or weekly basis. We may argue over 100% coverage vs. 50% coverage. We may argue and discuss and ponder specific details of the process. But we all pretty much agree that performing automated builds, complete with unit testing, on a regularly scheduled basis is a best practice. Yet our arguments regarding performance testing tend to be limited to: don't...
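To make the idea concrete, here is a minimal sketch of the kind of timing check that could run alongside ordinary unit tests in a regularly scheduled build. It assumes JUnit 4; OrderSearchService, its search method, and the 200 ms budget are hypothetical placeholders, not taken from the article.

import static org.junit.Assert.assertTrue;

import org.junit.Test;

public class OrderSearchPerformanceTest {

    // Illustrative budget only; a real threshold would come from the
    // system's non-functional requirements.
    private static final long MAX_MILLIS = 200;

    @Test
    public void searchCompletesWithinBudget() {
        OrderSearchService service = new OrderSearchService(); // hypothetical class

        long start = System.currentTimeMillis();
        service.search("customer-42");                         // hypothetical method
        long elapsed = System.currentTimeMillis() - start;

        // Fails the build when the operation drifts past its budget,
        // so a performance regression is caught as early as a functional one.
        assertTrue("search took " + elapsed + " ms, budget is " + MAX_MILLIS + " ms",
                elapsed <= MAX_MILLIS);
    }
}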

Iterative, Automated, and Continuous Performance


Community comments

  • Just another acceptance test

    by Kurt Christensen,

    On the teams I work with, we try to author our stories such that non-functional (e.g., performance and security) needs are addressed right alongside the functional needs. This seems to me the best way; I knew of a large team that built a fancy go-faster DAO layer, with all the tests running against a mocked database, only to find out at the end that the DAO layer choked on realistic data sets. Oops. By then it was too late.

    I will admit, however, that incorporating this in stories is hard to do with teams that are struggling just to build the bare minimum of functionality. In that case, I usually wait until the team gets into a healthier groove, and then begin helping them to write non-functional acceptance tests.

  • Re: Just another acceptance test

    by Alex Popescu,

Interesting approach. Can you provide some examples of what these performance stories look like?

    tia,

    ./alex
    --
    .w( the_mindstorm )p.

    Alexandru Popescu
    Senior Software Eng.
    InfoQ TechLead&CoFounder
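
As an illustration of the kind of non-functional acceptance test described in the first comment, here is a minimal sketch that exercises a DAO against a realistically sized data set instead of a mock. It assumes JUnit 4; CustomerDao, Customer, their methods, and the row count and budget are all hypothetical, not drawn from the discussion.

import static org.junit.Assert.assertTrue;

import java.util.ArrayList;
import java.util.List;

import org.junit.Before;
import org.junit.Test;

public class CustomerDaoAcceptanceTest {

    private static final int REALISTIC_ROW_COUNT = 500000; // assumed production-like volume
    private static final long MAX_QUERY_MILLIS = 1000;     // assumed budget from the story

    private CustomerDao dao; // hypothetical DAO under test

    @Before
    public void loadRealisticDataSet() {
        // Populate the real persistence layer rather than a mock, so the test
        // sees the same query plans and data volumes production will see.
        List<Customer> customers = new ArrayList<Customer>();
        for (int i = 0; i < REALISTIC_ROW_COUNT; i++) {
            customers.add(new Customer("customer-" + i));    // hypothetical entity
        }
        dao = new CustomerDao();
        dao.saveAll(customers);                              // hypothetical bulk insert
    }

    @Test
    public void queryStaysWithinBudgetAtRealisticVolume() {
        long start = System.currentTimeMillis();
        dao.findByRegion("EMEA");                            // hypothetical query
        long elapsed = System.currentTimeMillis() - start;

        assertTrue("query took " + elapsed + " ms, budget is " + MAX_QUERY_MILLIS + " ms",
                elapsed <= MAX_QUERY_MILLIS);
    }
}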
