New Book: Agile Software Engineering with Visual Studio

"Agile Software Engineering with Visual Studio - from Concept to Continuous Feedback" is a new book that provides a deep-dive into the Visual Studio-TFS features, which can help Agile teams manage their application lifecycle better. It is written by Sam Guckenheimer (Product Owner, Visual Studio Strategy at Microsoft) and Neno Loje (Independent ALM Consultant and TFS specialist).

InfoQ got in touch with Sam and Neno to explore the topics covered in the new book and to discuss VS and TFS in general.

InfoQ: Agile teams normally focus on a minimal toolset (like spreadsheets) so that they don't spend a lot of time struggling with their tools, whereas Visual Studio + TFS is an end-to-end application lifecycle management system. Why do you think it's still a good fit for agile teams?

Sam Guckenheimer: Let me start with the specific question, and then answer more broadly. Excel is a spreadsheet, of course, and is also a primary client to Team Foundation Server exactly so that Agile teams can use the most familiar tools and gain the benefit of history, scale, and the full lifecycle that TFS and the rest of Visual Studio provide.

Your question implies that only beginning teams use Agile practices and that they don’t care about history, scale or the broad application lifecycle.  That’s not our experience, not consistent with reports from analysts such as Forrester and Gartner, and not what you hear in the experience reports at the Agile Conference and similar places. Agile is about self-managing, multi-disciplinary teams who strive to improve their flow of value with every sprint.

We made the startup experience 20 minutes with TFS 2010, and with Team Foundation Service (the hosted version on Windows Azure) it's about a minute, so there is no tool impediment to team productivity.

Neno Loje: If you want to look at (and manage) the whole application lifecycle of your product to find potential blockers or areas where your team can improve the flow and reduce waste, you need as much data as possible from that lifecycle in a central data store; and you certainly want this data collection to be automated as much as possible.
On the tool side, however, you can still use the tool that suits your needs. There's nothing wrong with a very simple spreadsheet or a very simple task board on the wall - but you won't have much fun if that data is stored "somewhere" in silos. It needs to be in your central store, and this is what Team Foundation Server offers.

InfoQ: Do Agile efforts with really large software development projects benefit from better tooling?

Sam Guckenheimer: Absolutely. One of the primary impediments to effectively delivering value to customers is the waste generated by poorly integrated, siloed and mismatched tools.  Kent Beck describes some of this in his paper, Tools for Agility. The classic organizational response to this is to layer on manual process, which exacerbates the problem with more waste. You can only achieve agility at scale when you have trustworthy transparency, integration and automation across related projects and a seamless flow within the participating teams.

InfoQ: Would you recommend Visual Studio ALM for relatively small co-located agile teams?

Sam Guckenheimer: We target teams of 3 or more, consistent with Scrum, which defines an individual team size as 6±3.

Neno Loje: As somebody who has worked on and with many small teams (and there are many small teams out there), I learned that they have the same challenges to solve as larger teams, just fewer people and less time to get those things done. Especially in that situation, a team values the automation parts of TFS, because they don't have the time to work around tooling that doesn't suit their needs.

The question of whether you want to introduce an ALM solution is really less a question of team size than of what the team's needs are. In fact, I had a team of two developers who convinced me that they needed an ALM solution like TFS by showing me what their current challenges were. Since they were so few people, they had to be very cautious about how they documented changes to the codebase and maintained a crystal-clear traceability path for the companies they subcontracted for.

InfoQ: You've touched upon this issue of getting the definition of "done" right and the corresponding risk of technical debt. Could you elaborate further?

Sam Guckenheimer: Technical debt is the evil that prevents teams from delivering value. It oozes out in low quality, unpredictability, dysfunctional organizations, low morale, schedule delays, etc.

Again, we're drawing on the best learnings of Scrum and other Agile practices here. The key extension with the Visual Studio product line, and Team Foundation Server in particular, is that we automate the Definitions of Done (DoD, sometimes called "done-done"). In the past, the DoD was often handled with a checklist, which is a fine first step. As practices matured, things like Continuous Integration evolved, which is a level of automation of some of the DoD, specifically the unit tests associated with a build.

We think of this as one instance of a broader pattern: there are many cycles at which a DoD applies, and wherever possible, these steps need to be automated. In that way, they can be transparent, trustworthy, and low-overhead. In our book, we tried to organize our chapters around the cycles and the DoD for each cycle.
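
To make the automated part of a DoD concrete, here is a minimal sketch (our illustration, not from the book; PriceCalculator is a hypothetical class) of an MSTest unit test of the kind a TFS build runs on every check-in, so that "all unit tests pass" becomes an automated gate rather than a checklist item:

    using Microsoft.VisualStudio.TestTools.UnitTesting;

    // Hypothetical class under test, for illustration only.
    public class PriceCalculator
    {
        public decimal ApplyDiscount(decimal price, decimal percent)
        {
            if (percent < 0 || percent > 100)
                throw new System.ArgumentOutOfRangeException("percent");
            return price - (price * percent / 100m);
        }
    }

    [TestClass]
    public class PriceCalculatorTests
    {
        // A TFS build definition can run this test automatically on
        // every check-in, turning one DoD item into an automated gate.
        [TestMethod]
        public void ApplyDiscount_TenPercent_ReducesPrice()
        {
            var calc = new PriceCalculator();
            Assert.AreEqual(90m, calc.ApplyDiscount(100m, 10m));
        }
    }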

InfoQ: Regarding source control, TFS still does not have features that a lot of DVCSs have; this includes the pain of working offline. Do you think there are any improvements coming on this front in the future?

Sam Guckenheimer: We’ve radically improved the experience of working offline with lightweight branching in TFS 2010, but you’re right, it’s not full DVCS yet. We do continually improve the product, but it’s too early to make any announcements here.

InfoQ: The VS-TFS tool-chain is highly customizable, yet with so many options it could be overwhelming to choose anything other than the defaults. When would you recommend a team to start customizing things like the process template?

Sam Guckenheimer: People customize for primarily two (good) reasons:

  1. They have other systems with which they need to interoperate, either legacy or outside the scope of TFS. For example, a lot of customers want to connect TFS to help desk/ticketing systems and map fields that track customer info. Systems integrators (SIs) have customer billing systems, etc.
  2. They have internal processes that they want to implement with TFS, as extensions to our standard process templates.

In general, we encourage customers to think twice before customizing. We want to allow and encourage process improvement, but discourage Brownian drift. We believe that the process templates represent good practice, and we’ve worked really closely with Scrum.org, for example, to implement a good Scrum template.

Neno Loje: Every customizable, complex system is dangerous in the sense that you could probably spend endless time thinking about and actually customizing it. There are companies that tend to follow a more formal way of working, that do exactly that, and that are quite happy with it. An agile team might just start to use TFS with one of the three ready-to-use process templates from Microsoft (Scrum, Agile, and CMMI) and then - as part of their retrospective, where they discuss what could be improved in their process - decide whether they want to make any changes (adding or removing fields or rules, for example).

It's a good idea to have a process improvement backlog that you can follow. If you have a small, clear goal in mind for what you want to change, those customizations are usually easy to do. If you try to get everything right up front, it will be difficult and time-consuming, and in the end it will most probably create more waste than value. As Scrum says, the sprint starts right away - and you can still make adjustments along the way.

InfoQ: Automated scenario/load testing often needs a high upfront investment, and requires a lot of maintenance whenever there are minor changes in behavior. From your experience, are there any yardsticks that can help determine when it makes sense to go for it?

Sam Guckenheimer: Those are two great questions. The cost of maintaining test automation should never become an impediment to improving the software under test, so you need to design your software for testability (with patterns like MVVM; see the sketch after the list below) and your tests so that they are refactorable. Accordingly, we encourage:

  1. Initial low-investment load testing to spike an architecture
  2. Thorough unit testing with code
  3. Initial exploratory testing
  4. Selective UI automation, only when there will be high reuse through lots of configuration and regression testing
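
As a sketch of what designing for testability with MVVM can look like (our example with hypothetical names, not code from the book): the presentation logic lives in a plain ViewModel class, so a unit test can cover it directly and UI automation can be reserved for the selective cases in point 4:

    using Microsoft.VisualStudio.TestTools.UnitTesting;

    // Illustrative ViewModel: presentation logic with no UI dependency,
    // so it can be tested without driving the actual view.
    public class LoginViewModel
    {
        public string UserName { get; set; }
        public string Password { get; set; }

        // The view binds a login button's enabled state to this property.
        public bool CanLogin
        {
            get
            {
                return !string.IsNullOrEmpty(UserName)
                    && !string.IsNullOrEmpty(Password);
            }
        }
    }

    [TestClass]
    public class LoginViewModelTests
    {
        [TestMethod]
        public void CanLogin_IsFalse_UntilBothFieldsAreSet()
        {
            var vm = new LoginViewModel();
            Assert.IsFalse(vm.CanLogin);

            vm.UserName = "sam";
            vm.Password = "secret";
            Assert.IsTrue(vm.CanLogin);
        }
    }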

InfoQ: Scrum puts an extra emphasis on "potentially shippable" software at the end of every sprint. But with large development efforts, especially with dependencies across several projects/products, is it actually feasible to do so?

Sam Guckenheimer: I think you’re asking, how do you manage dependencies across teams and what does that do to the sprint goal? Obviously, you adjust your sprint goals and schedule across teams to align, and TFS does track dependency relationships among PBIs. Where Team B depends on Team A, B may need to create a mock or fake for A’s service/component in order to achieve potentially shippable in Sprint N, and then replace the mock with the real integration in N+1. This happens all the time at scale, and it’s another reason to use TFS to track the combined backlog.
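
As an illustration of the mock-then-integrate pattern Sam describes (a sketch with hypothetical names, not code from the book): Team B programs against an interface for Team A's service and substitutes a fake in Sprint N, then swaps in the real client in Sprint N+1 without touching its own calling code:

    // Contract for Team A's service; the names are illustrative.
    public interface IInventoryService
    {
        int GetStockLevel(string sku);
    }

    // Team B's fake, used in Sprint N so its feature can be
    // "potentially shippable" before Team A's real service exists.
    public class FakeInventoryService : IInventoryService
    {
        public int GetStockLevel(string sku)
        {
            return 42; // canned answer, good enough for tests and demos
        }
    }

    // Team B's own code depends only on the interface, so the fake
    // can be replaced by the real integration in Sprint N+1.
    public class OrderChecker
    {
        private readonly IInventoryService inventory;

        public OrderChecker(IInventoryService inventory)
        {
            this.inventory = inventory;
        }

        public bool CanFulfill(string sku, int quantity)
        {
            return inventory.GetStockLevel(sku) >= quantity;
        }
    }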

InfoQ: You have mentioned how VS can create an initial set of unit tests with high coverage with the help of Pex. How exactly does it work?

Sam Guckenheimer: The short answer is that Pex uses code analysis to find uncovered code paths and generate unit tests that cover those paths. This is NOT intended to replace TDD, but is an alternative for cases such as legacy code that you have inherited without tests. You can view details and try it here.
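
To give a flavor of the mechanics (a sketch under our assumptions; StringParser is a hypothetical legacy class): Pex explores a parameterized unit test, computes concrete inputs that reach each branch of the code under test, and saves them as ordinary unit tests:

    using Microsoft.Pex.Framework;
    using Microsoft.VisualStudio.TestTools.UnitTesting;

    // Hypothetical legacy code with branches that Pex would try to cover.
    public static class StringParser
    {
        public static int Parse(string input)
        {
            if (input == null) return -1;
            if (input.StartsWith("#")) return input.Length - 1;
            return input.Length;
        }
    }

    // A parameterized unit test: Pex picks values for 'input' that
    // exercise each path (null, "#...", anything else) and emits a
    // concrete [TestMethod] for every path it finds.
    [TestClass]
    [PexClass(typeof(StringParser))]
    public partial class StringParserTests
    {
        [PexMethod]
        public int Parse(string input)
        {
            return StringParser.Parse(input);
        }
    }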

InfoQ: VS 2008 shipped with several SKUs, including different ones for different roles like Test, Architect, etc. All that changed with VS 2010, which has only three SKUs: Professional, Premium, and Ultimate. Does this have anything to do with the Scrum recommendation of everyone doing a bit of everything?

Sam Guckenheimer: Yes, sort of. In the interest of full disclosure, we still do have a separate VS Test Professional, whose functionality is included in VS Ultimate. We did collapse the notion of Architect and Developer SKUs - that separation was always a bad idea frankly, and one which I personally opposed, exactly because teams are multidisciplinary and the idea of discrete roles is quite obsolete. In the test case, we find that there are domain testers, who do not code, do not want the IDE as their UI, and for them we have a distinct user experience.

InfoQ: IntelliTrace/historical debugging is a pretty useful feature for developers on almost any project size; however, it is present only in the Ultimate edition of Visual Studio, which is targeted towards larger projects. Any particular reason for this?

Sam Guckenheimer: Yes, we run as a business too. We differentiate what functionality is available in which level of Visual Studio, and our paying customers choose what to buy. It's worth noting that startups are entitled to everything for three years free of charge under the BizSpark program, and students/universities pay nothing under academic licenses.

InfoQ: One interesting feature in VS vNext is managing stakeholder feedback. Could you elaborate a bit more?

Sam Guckenheimer: The basic idea is that a PBI can have Feedback Work Items as children. There are flows for requesting feedback from stakeholders by email, both on storyboards and on new releases of working software. The responses are then captured in relationship to the PBI. Because this is all managed in TFS, you know which PBI and which build of the software, so you have full history on both sides.
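
For flavor, here is a sketch using the TFS 2010 client object model of how a feedback item could be linked as a child of a PBI (the collection URL and work item IDs are made up, and the vNext feedback flow performs this kind of linking for you):

    using System;
    using Microsoft.TeamFoundation.Client;
    using Microsoft.TeamFoundation.WorkItemTracking.Client;

    class FeedbackLinker
    {
        static void Main()
        {
            // Hypothetical collection URL and work item IDs.
            var collection = new TfsTeamProjectCollection(
                new Uri("http://tfs.example.com:8080/tfs/DefaultCollection"));
            var store = collection.GetService<WorkItemStore>();

            WorkItem pbi = store.GetWorkItem(1234); // the Product Backlog Item
            int feedbackId = 5678;                  // the Feedback work item

            // Attach the feedback as a child of the PBI so responses
            // stay tied to the backlog item and the build they cover.
            var childEnd = store.WorkItemLinkTypes.LinkTypeEnds["Child"];
            pbi.Links.Add(new WorkItemLink(childEnd, feedbackId));
            pbi.Save();
        }
    }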

Neno Loje: I'm sure we all know the situation where software is delivered exactly as ordered by the customer, just to realize that the customer actually wanted something different. Delivering products frequently for the customer to inspect, mostly through Continuous Deployment, is a great first step in this direction (and TFS helps you automate this, so it becomes a no-brainer). The next step is to establish a close feedback loop with your customers, at various stages, to ensure the team is building the right thing and stays focused on the customer value of the product delivered. You might call this Continuous Feedback.

InfoQ: We have seen early previews of TFS cloud coming out, something that you have covered briefly in your book. Are there any major changes expected with this move to the cloud?

Sam Guckenheimer: As I mentioned above, the startup experience is about a minute. Our goal with Team Foundation Service is to create the most complete collaboration site for software teams. We are updating the Service monthly, so yes. Our first emphasis is to be sure that we can meet the operational SLA (99.9% uptime) at the scale of hundreds of thousands of users, and we are meeting that objective. You'll see a series of announcements as we roll out new functional capabilities there over the next year.

Neno Loje: The beautiful thing about the cloud service is that you obviously don't have to deal with any of the infrastructure topics that you normally have to care about when installing and operating servers on your own. Especially for small teams, this lowers the burden drastically. Just imagine you and I decided to work on a small project together - how would we do that? Using TFS in the cloud, I can add virtually anybody to a project within minutes (or probably seconds) and work together not only on source code, but also share all work items, such as requirements lists, and the automated build results.

InfoQ: You took us through some of the problems Microsoft itself faced prior to the VS 2005 release and the seven changes that helped turn it around. How important was the tooling itself vis-a-vis an organizational push towards better practices?

Sam Guckenheimer: Tooling and change of practices went hand in hand. You need to do both, create a virtuous cycle, and fit the situationally specific circumstances. For example, "Gated Check-In" is a feature that we ship in TFS 2010 based on tooling we had to build to enable Continuous Integration/Continuous Delivery practices at our scale. Classic CI didn't work, because Main was broken way too often, but we did need a way of always running tests against every check-in with the latest code, so we built the tooling to build and test BEFORE check-in, similar to DVCS, but with less latency in code movement. This is a continual cycle - what works for us, we make available to customers, usually starting as a Power Tool or Feature Pack. Since ~95% of our customers are on subscription, they get the new functionality right away.

InfoQ: And then you explained how temporary amnesia can set in after a success that could be quite disruptive; something that is quite common when individuals change jobs after a major milestone. Any practices you have put in place to avoid that in the future?

Sam Guckenheimer: Yes.  We’re trying to get much more explicit about what we think makes us successful, what gaps we have, and what learning and improvement we need to reinforce on which reasonable cycles.  We call this "Invest in Ourselves". For example, we do Scrum and Product Ownership training across the org, and we use it as a form of team building. Among engineering practices, we’ve made sure everyone relevant gets performance engineering and we’re looking at a DevOps course rollout, that we started across the leadership band. At the same time, just as we’ve gone to a more continuous product delivery, we’re trying to go to a more continuous org grooming. It’s too early to tell whether that’s succeeding, but early indicators are good. This is a big, shared leadership challenge. Having just read Kahneman, I think regression to the mean is the normal course of events.  We’re trying to make continuous improvement the alternate and true way of working.

About the Book Authors

Sam Guckenheimer is a coauthor of Agile Software Engineering with Microsoft Visual Studio, from Concept to Continuous Feedback. Currently, Sam is the Product Owner for the Microsoft Visual Studio product line. In this capacity, he acts as the chief customer advocate, responsible for the strategy of the next releases of these products. Prior to joining Microsoft in 2003, Sam was Director of Product Line Strategy at Rational Software Corporation, now the Rational Division of IBM. Sam lives in the Seattle area with his wife and three children in a sustainable house they built that has been described in articles in Metropolitan Home and Pacific Northwest magazine.

Neno Loje is a coauthor of Agile Software Engineering with Microsoft Visual Studio, from Concept to Continuous Feedback. He works as an independent Application Lifecycle Management (ALM) consultant and Team Foundation Server (TFS) specialist, and in this role he has helped many companies set up a team environment and a software development process with Visual Studio (VS) and TFS.