Bio Cyndi Mitchell is the Managing Director of ThoughtWorks Studios. She joined ThoughtWorks in November 2002, and has served the company in a variety of roles including development, consulting and management. Over the past 13 years of her career, Cyndi has been helping businesses use software to innovate and create value. Prior to ThoughtWorks, Cyndi was a Senior Architect with Sun Microsystems.
The Agile 2010 conference is created by a production team of highly respected Agile experts and practitioners to present a program that spans the whole spectrum of agile practice. The Agile conference series is organized as a program of the Agile Alliance, a non-profit organization dedicated to uncovering better ways of developing software, inspired by the values and principles of the Manifesto for Agile Software Development.
Aside from just coming to the conference to connect with like-minded people who are looking to improve the state of the art of software development, we are here this week announcing a number of things under a concept that we call continuous delivery. Most recently Jez Humble from ThoughtWorks has authored a book called "Continuous Delivery", along with Dave Farley, who is also an ex-ThoughtWorker, to address what we call the last mile of software development: what happens to software after it has been created and is functionally complete and ready for the business, and how it actually goes live into a production environment.
We're looking at that last mile and trying to shrink it down to the last centimeters, a very short period of time. Under the umbrella of continuous delivery we've got a new book, and we launched a new product called "Go" in July which supports the continuous delivery process. Then, at the show this week at Agile 2010, we have announced the creation of a new practice inside ThoughtWorks, a Build and Release Management and DevOps practice, which focuses on improvement within this last mile of software development.
Along with that we've launched an online assessment to help companies get a sense of where they are: a quick 10-minute survey that suggests what their next planning and road-mapping steps might be for improving their Agile release management, their last-mile processes. That's why we are here this week.
2. We actually talked with Jez earlier this week about his book and about Continuous Delivery. I'll ask you a similar question to what I asked him. In the beginning, the idea of Agile in the original premise of XP was to deliver working software and deploy every two weeks. Why is it that 10 years later we're finally getting around to actually trying to do that?
I think the smaller organizations have delivered on that, but the reality is when you take these XP practices or any of them, to a larger enterprise scale, when you are talking about hundreds or thousands of developers and software practitioners working, building software and a legacy of 10 or 20 or 30 years of software development that's happened already, it obviously becomes much more complicated. You start putting these practices on top of this larger, much more complex environment and the story changes a little bit.
Anyone who's had a go at introducing Agile practices, whether they are planning or collaboration practices, the Scrum type stuff or the engineering and XP style practices into a large enterprise organization realizes it's a complex problem. It's a cultural change. At the root of all this stuff lies a cultural change. It's difficult for organizations, the large organizations that have been around for a while, to change. That's I think actually why we haven't achieved the Nirvana state, just yet.
3. I had a friend once who was very well paid to basically do deployments. He would travel all over the country to do a job for a company and sit and watch a computer install software. Are your tools, and the Agile community in general, getting better at addressing deployment?
I think so. Continuous integration introduced or planted the seed of the art of the possible with introducing change into a code base and understanding the impact of that change. I think continuous integration is kind of the root of what we've been able to achieve with deployment and continuous deployment. We are getting better. It's a combination of the engineering practices that we got through XP, continuous integration, TDD, refactoring, pairing. These are practices which help us create code bases which are easy to change. They keep things simple, they make things testable; if it's too hard to test, it's too complex.
That is what the engineering practices of XP gave us – the ability to change software quickly. That's half of the battle. I introduce a change and I can understand what I did. Obviously, when you start looking at changing the software and then getting it live, those practices are still very helpful and interesting. The continuous integration is what informs the continuous deployment. How do I apply that? Not just to unit tests or testing units of code that I've changed, but to functional validation, complete user scenarios in the functional testing.
How do I apply it to performance testing, security testing and all the operational things that need to be validated - compliance, auditing, managing and logging to maintain the system? Take those simple concepts of, for example, continuous integration and heavy automation and apply them to the next steps of validation that you know this software is going to need to go through to reach the production environment. We got a lot out of the Agile and XP approach which we can apply to the whole last-mile lifecycle. We can also learn a lot from the collaboration practices that come out of the Agile movement. You asked why it is still so difficult.
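The idea of extending continuous integration checks into later validation stages can be sketched as a staged pipeline where a build only advances past each gate if the gate passes. This is a minimal illustration of the concept, not Go's actual model; the stage names, checks and thresholds are made up for the example.

```python
# Minimal sketch of a staged deployment pipeline: each stage is a
# validation gate, and a build only advances if every earlier gate passes.
# All checks below are illustrative placeholders.

def unit_tests(build):
    return build["compiles"]           # stand-in for running the unit suite

def functional_tests(build):
    return build["scenarios_pass"]     # end-to-end user scenarios

def performance_tests(build):
    return build["p95_ms"] < 500       # an example latency budget

def security_and_compliance(build):
    return build["audit_log_enabled"]  # operational requirements as checks

PIPELINE = [unit_tests, functional_tests, performance_tests,
            security_and_compliance]

def run_pipeline(build):
    """Run each stage in order; report the first failing gate."""
    for stage in PIPELINE:
        if not stage(build):
            return f"failed at {stage.__name__}"
    return "ready for production"
```

The point of the sketch is that performance, security and compliance checks sit in the same automated chain as unit tests, rather than being left to a manual phase at the end.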
Most large enterprises still have siloed types of organization. They have business people, they have the developers, they have the centralized quality groups and they have operations groups that will manage and run it. There is a division between design time and runtime within most organizations. What we're taking from Agile and what continuous delivery is all about is we're taking the requirements from those silos that get left towards the end of centralized QA in the operations group and bringing them to the front of the software development lifecycle, making them a first class citizen.
The same as we made unit testing a first class citizen in the Agile software development lifecycle, let's make performance testing and security testing and all those operational validations first class citizens. Let's uncover and discuss those upfront in the project, let's build the platform, and let's continuously improve on the way that we validate those requirements from the beginning of the software development lifecycle, in the same way that we validate that the software is functionally complete or that units actually work.
We are getting better. We've learned a lot from the last decade of Agile software development, and all the collaboration and engineering practices that came from XP, Scrum and the other Agile methods help inform the kinds of behaviors and practices that we need to introduce into the last mile, to bring it all forward and make it a first class citizen. We're getting better! It's just that when you do this stuff in the large, with an organization that's got 20-30 years of legacy behind it, things are complicated. As I said, it is a cultural shift that an organization needs to make, and culture is hard.
4. The deployment environment itself is incredibly complicated. You have a variety of devices, from cell phones to desktops. You have different operating systems; you have different versions with different patches of those operating systems - a lot of complications. Do your tools actually help you deal with those kinds of complications? Do they feed information back so that the developers, when they are doing testing, have some clue that "Well, you're going to deploy on 40 different devices"?
You are asking the managing director of a software business, so we have these problems every day in our organization. We need our software to run on different types of platforms, in different browsers, with different databases and those sorts of things. We have a pretty complex set of environments in which we need to validate our software to make sure it's ready to go into the hands of our customers. We dogfood all of the software that we create, so we use Go, the platform that we recently launched, to validate our software across all these different environments.
When we talk about the last mile in a large enterprise, there are cultural problems, there are organizational problems, and there are technical problems. One of the key problems of the last mile is actually finding and marshaling environments and resources to do the performance testing validation and the security testing. How do you quickly spin up new environments to validate software, and how do you make sure that those test environments actually look like the production environment?
And how do you make sure they are not manually hacked together, so that they can't be recreated, but are built up and torn down in an automated way every time? This is one of the major problems that Go, our platform, uniquely solves in this space: the automated building and managing of the environments in which you need to validate your software. It was one of the key problems that we wanted to solve, because we saw it as a major challenge in most of the enterprises that we work in - the acquiring and building of environments.
Obviously, Cloud and virtualization, these technologies have a huge role to play in allowing or in helping organizations to quickly spin up environments to do some validations and tear them back down. So there is a real value to this elastic nature of the Cloud in this world of doing continuous delivery and continuous deployments. We see a lot of organizations making use of that, including ThoughtWorks Studios.
5. Your current focus is on the delivery end of the lifecycle management system. Let's go back to the front end and talk about user stories and requirements. User stories were originally intended to be something much more like a designer's brief: a relatively nebulous statement of what is wanted in general, one that clearly identified any absolute constraints, so that the designer had space to explore for a solution. The Agile community over the years has kind of drifted away from that and is using user stories as just an old 1970s requirement or feature statement. Is it possible to have a tool, or build something onto the front end of your existing tools, that would actually support the original intent of stories and keep track of ideas and the flow of prototypes?
It depends. There are so many different takes on Agile analysis, obviously, and on what goes into a story. I guess at the end of the day the story is a promise for a conversation. In most of the environments that we work in, you have some high level concepts of what you are trying to achieve, and then iteratively, over time, you're decomposing those concepts into stories that hopefully have some business value attached to them, and then potentially some tasks, some associated risks, or whatever else.
If you don't understand your process, how you are going to work, and what the best way is to break down your business concepts, decompose these things and do your analysis, I don't think a tool has any place in your world yet. You need to understand and know what you want to do. Hopefully you know your business, or someone in your team knows your business and knows what needs to get done, and that group of people needs to come together and figure out how they want to break that down, into what kinds of concepts, how they want to do their prototyping and their user testing, and how they want to evolve their requirements as they go.
We certainly have in our solution views and ways of working that support collaboration between business stakeholders and Agile teams, to be able to do the high-level, cards-laid-on-the-table type of planning and to move features, concepts and ideas around. We have something which we call story trees, which allows you to organize information hierarchically: the highest level concept, and the decomposition of that concept into something that could actually be worked on, with some kind of acceptance criteria attached to it.
Our solution, Mingle, in addition to supporting this hierarchical management and reorganization of information, is a wiki-based solution. So, you can put anything that you want into a card - prototypes, diagrams, charts, data and all kinds of things - as well as references among different cards in the system, dependencies and those sorts of things. We're taking a very flexible approach with what we do. One of the key reasons why we set up ThoughtWorks Studios and began to create tools for these phases was because we felt that what was on the market was not adaptive enough to support the needs of an Agile team.
Every business is different, every project is different, every team is different and they need to discover and learn their process as they go. That's the whole point of shifting from this predictive world to this Agile world. The teams are going to adapt to what's going on in their environment and discover and learn their process and the tool has to adapt with them, has to go with them on that journey. That's actually the reason why we set up ThoughtWorks Studios, because we felt that the tools that were there were not adaptive enough.
Back to the question of requirements capture, analysis and decomposition: we provide some very flexible support for different modes of hierarchy, dependencies and relationships among concepts, stories and cards, and the ability to be as structured or unstructured as you'd like in terms of how you write your narratives and how you author your acceptance criteria. We also have within our Adaptive ALM, in addition to Mingle, which addresses the requirements analysis and tracking area quite well, a tool called Twist, which is an automated functional testing tool, I guess, by category.
But it addresses the problem of authoring business scenarios: tying together the business requirements, the code that fulfills those requirements, and the tests, and allowing three parties (the BA, the QA and the developer) to work together to author these three different artifacts and keep them in sync over the life of the system. Most Agile teams get to a place where they have large functional testing suites that they can't maintain. Eventually you build up thousands of tests over time doing good TDD and automated functional testing practice, and quite quickly the tests become very difficult to maintain. You start changing the system, you move a button from here to there, a field from here to there, and 200 tests break, and you go back and fix them all.
Sooner or later you end up in a place where you spend more time fixing the tests, because you've made some legitimate changes, than you actually do testing the software. We started with that problem. How do we create test suites that we can evolve? How do we evolve test suites? Then we worked backward from that problem to create this automated functional testing tool called Twist, which also addresses this area of requirements, the definition of acceptance criteria and tests, and keeping those things in sync over time.
6. A number of years ago, Peter Naur wrote a very famous paper called "Programming as Theory Building". One of the things that he pointed out in that paper was that most of the real knowledge about any given project is tacitly held in the heads of the programmers. When you disband the programming team, that knowledge goes away, the theory goes away, and you bring in somebody else. They have to recreate the theory, and they go to their boss and say "This is garbage. It will be cheaper to rewrite it." Have the tools improved that situation, or do we still find ourselves in a place where the huge body of knowledge in the heads of the developers basically disappears when the team is disbanded?
I'm never going to sit here and tell you that the tools are the solution to all these problems. I definitely believe this is much more about people and culture. I do think, though, that this is an area where the right tools can help and I'll explain what I mean by that. The right tools in an Agile world have some key and core pieces. The first of these I mentioned already is adaptive. The whole point of doing this Agile software development stuff, the whole essence of this shift is we stop trying to predict things.
We stop trying to guess what's going to happen in the future and predict upfront, and we instead move to a world where we accept we can't predict. We set ourselves up to adapt. We adapt to the changes that are happening around us: changes in the business, in the business requirements, and in the technical constraints that might appear. We continuously learn and discover our process as we go. We accept that what we know on day 21 is not what we'll know on day 201 or 2001. We set ourselves up; we work in a way that adapts.
We structure our work in increments, we do iterations, we have stand-ups, we share and collaborate and we put some good practice to accept that change. We work in an adaptive way and our tools must be adaptive. If our tools are not adaptive, then the teams will stop using the tools and this is, like I said, the reason why we set up Studios. We were working in a lot of big enterprise environments and being asked to use tools that were not adaptive, that did not go with us on the journey as we discovered with the client how we were going to work.
But the managers still wanted you to use the tools because that's how they were tracking the data. At the end of the week, you put your data into the tool. If it was Friday, you got to put your actuals in so they can do the status reporting on Monday. Guess what? The data that's in the tool doesn't reflect reality, it's stuff you just entered because you were asked to enter it in the tool but it didn't reflect what you were doing for the last week. Our philosophy in what we have created is tools that adapt to the way that teams work, so the teams will actually use the tools to get their day-to-day work done and they make them more productive.
Because they use our tools, because they make them more productive, we actually capture all the data about what they are doing as they go. That means that at any given time, in real time, whenever you want it, you can actually get reports on real information on what's going on in the project environment. Tools have a big part to play in this. If they adapt to the way that the Agile team is working, they'll actually use the tools and they will capture what they are doing. You become this kind of system of record for what's happening in the software environment.
I will add to that: if the tool set that you choose covers the holistic set of Agile practices, not just collaboration and planning but also the engineering practices, this is where you really get to the true adaptive dream of Agile. It's good to set yourself up to adapt to changes in the environment; that's important and that helps. But it doesn't matter how adaptive you are, how many stand-ups you have or whatever else, if you can't actually inject the changes into the software. If the software that you are creating is not adaptive, it doesn't matter how much business change you accept.
You can't get the change into the software anyway. So that's the other key piece. Certainly, for us the key theme and philosophy behind what we're doing is around taking a very holistic approach to Agile adoption and having a complete set of tools that covers the entire set of practices – planning, collaboration and engineering practices. Again, because this complete set of tools is adaptive, the teams will actually use them to get their work done, which then means that all the data that gets created as they are working (every check-in, every comment, every build, anything that happens within the Agile software development lifecycle) is captured as a matter of course as data in this single system of record.
Then it's simply a matter of reporting on top of that. I think the other place where the tools can help is in facilitating collaboration. Many organizations have distributed, non-collocated software development initiatives, whether within country, on-shore or off-shore. Obviously, the sharing can happen within the tool, and the collaboration that it facilitates is very helpful. We all look at the same cardwall regardless of whether we're in Brazil or in Chicago. There is the structured collaboration that we can share, the cardwalls and the views, but there is also the unstructured collaboration that happens around any given project.
The conversations people have, the e-mails they send to each other about card no. 53, the IM conversations about "Have you had a look at card 102 yet?" - being able to capture this kind of less structured collaboration that happens around any given project and relate it back to artifacts within the application lifecycle management suite. A conversation that was had over here about a specific card shows up inline, in context with the artifact, with card no. 102, when it's being looked at in the system: "Oh, there was this IM conversation they had about this."
Having all that data and all that information there creates this system of record. Then it does give you a way to pass on the tacit knowledge that builds up around the initiative that runs many months or many years. I do think tools can help some, but again I go back to if you don't have a culture of discovery and learning and collaboration and sharing, then none of these things matter.
With ThoughtWorks Studios we are actually in a very privileged position, because our parent company is ThoughtWorks, a 17-year-old custom software solutions expert. ThoughtWorks is the company that a lot of organizations call when they have something big and important: something mission critical, a new business initiative or a new business model they are trying to launch with a heavy software element to it. We were some of the early pioneers of Agile practices in the enterprise space. As an organization, we are 1600 people worldwide in eight different countries, and we do a lot of enterprise software projects in a year.
So, we have a lot of real world, hands-on, in-depth information about what's needed to better support enterprise software development. That is a steady stream of information and ideas which feeds our product pipeline. We've been doing software development for a long time now at ThoughtWorks and we have some pretty strong opinions about good ways of building software. A lot of that is, of course, reflected in our Adaptive ALM suite and those ideas come from the real world and our clients and our projects.
Yes. We are about half in Bangalore and half in San Francisco and we've got a number of folks around the world in field roles supporting customers and helping our clients.
We definitely have a lot of battle scars, without a doubt. I think ThoughtWorks was one of the early organizations that started out doing distributed Agile, so we've made a lot of mistakes and had a lot of learnings along the way, and I do think we've gotten pretty good at it. Within ThoughtWorks Studios specifically, we've had quite a number of distributions over our four-year life. We were distributed across Australia and China for a period of time, and then China, Bangalore and Australia, so we had a number of different incarnations.
I think we're pretty much settled with where we are now and feeling really good about it. Probably the key learning for me, and I think for a lot of our clients these days, is: don't underestimate the wear and tear that distributed development places on your people, and pay good attention to it. Even the most passionate and dedicated people can get fatigued with needing to be on the phone every night at 7 o'clock, 8 o'clock, 10 o'clock, or up early at 5.30 in the morning. For me, the biggest learning was just seeing the impact of fatigue, even aside from the human angle of not wanting to ruin people's lives.
The follow-through cost to the business is much greater than most organizations imagine. We see a lot of our clients discovering this as well. That's why we have just opened an office in the last year in Brazil, to start doing more of a nearshore-type approach with a lot of our North American clients and get a better time zone overlap, so that everyone can have a decent life and do meaningful work while they are there. No one wants to spend the first hour of their day catching up with what happened on the other side of the world, and the last hour of their day making sure that everything is in place so that when the folks on the other side of the world wake up, they can see what happened.
That drag on a business in terms of productivity is not to be underestimated; you have to pay good attention to it and put practices in place to make sure that you are mitigating those kinds of things.
ThoughtWorks is 600, almost 700 people in India now, so we've got quite a large talent base there. There is a lot of software development that happens in India, and a lot of innovation inside ThoughtWorks, particularly in the areas of testing, continuous deployment and continuous delivery, already happens in our groups there. For us, it's largely about talent, and about a perspective on software development that I won't say is unique to India, but is certainly very prevalent there, very present in our organization. That's a large part of our decision to be configured the way that we are.
ThoughtWorks is a transnational, networked organization, a very global firm. We are in eight countries, spread all over the world, and at any given time some 20% of ThoughtWorkers are working somewhere else on international assignments. We're a very globally distributed organization, almost without a headquarters. So it's a starting point that there's going to be some angle of global distribution in just about anything that we do. It works out for our clients too, obviously: from a services perspective we are in the right place at the right time.
That changes over time for our clients, where the right place is for a development team to be. Maybe it's collocated for a period of time and then maybe for whatever reason business has a need to shift to some other location. We're pretty Agile and nimble in the way that we are able to deal with our clients and service our clients because we are such a global distributed organization.
11. Boeing years ago when they were building the 737, they had obviously a worldwide workforce and they found it economically justified to bring absolutely everybody to Seattle, at least once or twice during that project. It was at least twice, it might have been three times. Do you see value in this kind of face-to-face talk with your team? Does San Francisco ever go to India or does India ever come to San Francisco?
Yes, absolutely without a doubt. It doesn't surprise me that Boeing had that approach. We certainly do it as well and there is nothing like good old fashioned human interaction and we are still very primal I think in that way. We haven't yet been able to achieve getting the entire ThoughtWorks Studios group together in one location, but we have designs to do that at some point in our future. For a while we had the Mingle team distributed across Beijing and San Francisco and we had folks shifting at any given time. It was always a pair on the go from one location to the other.
We used to move just one person; for example, someone from Beijing would come to spend three weeks in San Francisco. But at night that person's life wasn't that great, because they were the only person there from Beijing; everybody else was home with their family, and they were the only person traveling. So we started sending a pair at once, which was just better: there was a social situation, a social life, for the folks who were on the go. We had the same thing with folks going from San Francisco to work in Beijing, so we kept pairs in rotation to keep the face-to-face interaction going.
We also tried to bring the entire team together in a single location as often as possible. I guess we ended up doing it once (the aim was to do it twice a year), and then we ended up collocating the team. Face-to-face plays a huge role and can't be underestimated. It's more fun, too. Software is a social activity, thank goodness.
12. Now I want an honest answer, not a politically correct answer. I'm an educator. Is our university preparing students, preparing graduates actually, to become productive members of your workforce, and if not, what might we do differently?
It's probably easier for me to answer that than it would be for some larger organizations. ThoughtWorks is 1600 people worldwide, and we do intakes of graduates usually on a quarterly basis, or at least twice a year. We actually recruit outside the pool of traditional candidates (computer science, tech or engineering graduates) and go for history, art and music majors. In fact, some of our greatest success stories in bringing graduates into the business are people without a comp-sci background, or any sort of science or math background, or any of the traditional backgrounds that lead you into the field of software.
The key things we look for are attitude, aptitude and integrity. They are all equally important, I guess, but interviewing for talent as opposed to interviewing for computing skills has been a successful strategy for us. That means, of course, that we had to make a significant investment: we have ThoughtWorks University in Bangalore. Anyone who joins ThoughtWorks as a graduate will spend six weeks at ThoughtWorks University in Bangalore, and probably some time working there afterwards, to learn the skills that cover Agile analysis approaches, engineering practices and those sorts of things.
We've had to put in place a pretty rigorous program to make sure that these kinds of graduates can be successful in our work environment. So far it's been an absolutely astounding approach. One thing that I was discussing with some folks earlier today is that the graduates we see coming out now are a lot more focused on meaningful work, making a difference, and doing something that interests and engages them and maybe has an impact on society, than you might have seen 5-10 years ago. Those are the people that we're really looking for at ThoughtWorks.
It's important for us to be successful in business and to do the work, but also to stay focused on improving the state of the art of software development, not just for ourselves but for the human race and the world out there, and to continue to advocate for social impact and social and economic justice in the world. Those are the core missions of ThoughtWorks. We are looking for people with whom the mission really resonates. There are a lot of people out there, thankfully, who fit that bill; it's just a matter of finding the ones who are talented and intellectually curious and ready to work in an environment of continuous learning and discovery. The rest we can teach. That's been our experience so far.
All right. Thank you.
You should speak slowly
I like InfoQ; it brings us new ideas, methods and technologies. The website is not only for English-speaking countries; it should work for all the world.
Frankly, if you are hiring art majors to develop software, why bother hiring graduates? You are really just hiring people based on whether you think they are smart and 'the right type of person'. Why not just hire them out of high-school based on their SATs and socioeconomic background? I guess this is why people say college is a waste of money. 'Spend four years (or more) studying something that you will never use and spend $100K+ in the process' Good advice, thanks.
That this is seen as acceptable and even as a smart approach speaks partly to the failure of computer science as a stand-in for a software engineering discipline (both are needed) but also to the extreme charlatanism that has become commonplace in consulting.