
Pex White-box Test Generator Updated

by Al Tenhundfeld on Apr 27, 2009

Microsoft Research has released a new version of Pex, an automated white-box testing library for .NET. Pex v0.11 adds support for Delegates as Parameters, a new Exception Tree View, Stubbed Events, and Recursive Stubs. This release fixes an issue with incorrect registration of the Stubs Visual Studio Add-in, and it updates Pex to reference the latest Code Contracts release.

Right from the Visual Studio code editor, Pex finds interesting input-output values of your methods, which you can save as a small test suite with high code coverage. Pex performs a systematic analysis, hunting for boundary conditions, exceptions and assertion failures, which you can debug right away. Pex enables Parameterized Unit Testing, an extension of Unit Testing that reduces test maintenance costs.

Pex v0.11.40421.0 can be downloaded from MS Research. This release requires the OpenMP C++ runtime. This runtime is installed by default on a machine with Visual Studio or can be downloaded separately.

Delegates as Parameters: Pex now understands Parameterized Unit Tests that take delegates as parameters. Pex will automatically generate a delegate instance and its behavior: if the delegate returns a value, Pex will generate a new symbolic value for it, track its use, and then generate different values depending on the program behavior.

When executing Pex on the Test method, Pex will generate a Func which 'asks' Pex to choose the returned int (it uses PexChoose under the hood). Therefore, for each call to that delegate, Pex has the liberty to return a different value. Based on how the delegate is used, Pex will generate different values to increase coverage. In this case, Pex 'chooses' to return 123 on the first call, which is exactly what we need to cover the exception branch.
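A parameterized unit test along these lines might look as follows. This is a hedged sketch: the article does not reproduce the original sample, so the class name and the exception are illustrative, and the attributes assume the Pex v0.11 research release.

```csharp
using System;
using Microsoft.Pex.Framework; // Pex v0.11 research release

[PexClass]
public partial class DelegateTests
{
    // Pex synthesizes the Func<int> argument; each call to f() yields
    // a symbolic value that Pex can steer to cover both branches.
    [PexMethod]
    public void Test(Func<int> f)
    {
        if (f() == 123)
            throw new InvalidOperationException("reached only when f() returns 123");
    }
}
```

Since the generated delegate defers to PexChoose internally, Pex is free to pick 123 on the first call and thereby reach the throw statement.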

Pex Explorer: Exception Tree View:

We have started to work on improving the experience when applying Pex to a large number of explorations. To this end, a new window called Pex Explorer will show various views over the events produced by Pex. The Exception Tree View provides the tree of exception types that Pex found. This is helpful for quickly drilling into the (really) bad exceptions first.

Pex Explorer: Contract Failures Tree View: If Code Contracts are being used, Pex also provides a specialized view that sorts contract failures by kind.

Events in Stubs: Stubs now supports events: the stub simply exposes the backing delegate field (which holds the event delegate) as a public field. As a result, one can clear, set, and invoke the event like any other member.
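Assuming a generated stub SNotifier for an interface with a Changed event (the type and member names here are invented for illustration; the actual generated names may differ), usage could look like:

```csharp
using System;

// Sketch only: SNotifier is a hypothetical generated stub whose
// Changed event is backed by a public delegate field.
var stub = new SNotifier();

stub.Changed = null;                                      // clear the event
stub.Changed += (sender, args) => Console.WriteLine("raised"); // attach a handler

// Invoke the event directly through the exposed backing field.
if (stub.Changed != null)
    stub.Changed(stub, EventArgs.Empty);
```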

Recursive Stubs: Another common feature of mock/stub frameworks is support for nested invocation of members. Stubs now lets you recursively invoke property getters. Instead of assigning the property getter delegate yourself, you can use new helper methods ending in 'AsStub' that take care of allocating the nested stub, assigning it to the property getter, and returning it.
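A hedged sketch of the 'AsStub' helpers, with invented stub and member names (only the 'AsStub' suffix comes from the release notes), might be:

```csharp
using System;

// Hypothetical generated stubs SOrder and SCustomer, where
// IOrder.Customer returns an ICustomer.
var order = new SOrder();

// Instead of allocating SCustomer and assigning order's Customer
// getter delegate by hand, the helper does both and returns the
// nested stub for further configuration:
var customer = order.CustomerAsStub();
customer.NameGet = () => "Alice";

Console.WriteLine(order.Customer.Name); // prints "Alice"
```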


Similar project for Java? by Thomas Mueller

I wonder if something like this is available for Java?

Mocking Frameworks Compare by Andrew Kazyrevich

Since Stubs is actually a mocking framework, it might be of interest to some that there's an open source project comparing the syntaxes of different mocking frameworks (Moq, Rhino Mocks, NMock2, Isolator, and Stubs): code.google.com/p/mocking-frameworks-compare/

Re: Similar project for Java? by Al Tenhundfeld

I am not a Java developer and am not recommending this tool, but I've heard that Jtest has some similar functionality, for a price. From Parasoft's Jtest feature list:


  • Automatically creates sensitive low-noise regression test suites—even for large code bases

  • Automatically finds runtime bugs in execution paths that may cross multiple methods, classes, or packages

  • Generates functional JUnit test cases that capture actual code behavior as a deployed application is exercised

  • Generates extendable JUnit and Cactus (in-container) tests that expose reliability problems and capture behavior

Re: Mocking Frameworks Compare by Al Tenhundfeld

Andrew, thanks for posting that comparison. It is informative, though I disagree a bit with some of the language. In the "Con" section for Moq/Rhino Mocks/NMock2, it says, "Needs Dependency Injection to work."

I think the more appropriate language is, "Needs dependency substitution." It might seem a subtle difference, but, to me, the current language makes it sound like a system needs to use a [Dependency Injection] framework to use any of those mocking frameworks. In reality, a system just needs some way to substitute dependencies under test; that could be done by using factories or a provider model. If you want to call that type of lookup [Dependency Injection], fine, but I think that's a little misleading to people who don't understand the technologies.
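To illustrate the distinction (this example is mine, not from the comparison project): a class can make its dependency substitutable through a plain factory hook or a constructor overload, with no DI container involved, and an interface-based mocking framework works fine against it.

```csharp
using System;

public interface IClock { DateTime Now { get; } }

public class SystemClock : IClock { public DateTime Now => DateTime.Now; }

// A swappable factory delegate is one form of "dependency substitution"
// that involves no Dependency Injection framework at all.
public static class ClockFactory
{
    public static Func<IClock> Create = () => new SystemClock();
}

public class Greeter
{
    private readonly IClock clock;

    public Greeter() : this(ClockFactory.Create()) { }    // production path
    public Greeter(IClock clock) { this.clock = clock; }  // test path

    public string Greet() =>
        clock.Now.Hour < 12 ? "Good morning" : "Good afternoon";
}
```

A test can hand Greeter a hand-rolled fake or a mock IClock through either the constructor or the factory delegate; either way, no container is required.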

Also, in my testing, TypeMock Isolator seemed significantly slower. I don't have any firm numbers, so I could certainly be mistaken. Yes, performance in test code isn't the highest priority, but performance is still important as it affects how often the test suite gets run and how often developers context switch while waiting for tests to finish.

Re: Mocking Frameworks Compare by Andrew Kazyrevich

Al,

Thanks for checking that out. I had never thought that "needs Dependency Injection" could be misleading, but it seems it can. Do you think "needs Dependency Injection pattern" is more appropriate?

With regard to performance, that Mocking Frameworks Compare project also includes a performance comparison. (I also believe readability and maintainability are more important than performance, in this area of unit testing. However, performance comparisons could give some food for thought and point out some fast/slow solutions, and that kind of knowledge definitely can be reused in your own projects.)

So, this perf test contains some interesting numbers (for example, Stubs is 1000x faster than Isolator, and NMock2 is 2-3 times faster than Moq and Rhino).

Re: Mocking Frameworks Compare by Al Tenhundfeld

Doh, I see the performance comparison now. Yeah, those numbers are in line with my anecdotal observations. Thanks for sharing that and for being a step ahead of me!

Yes, I do think Dependency Injection pattern would be a little clearer.

I need to look at the Stubs framework. Thanks.
