Bio Chris McMahon is a software tester and former professional bass player, and is currently the QA Lead at the Wikimedia Foundation. Chris has been part of the software testing community since about 2004, both writing about the industry and contributing to open source projects such as Watir, Selenium, and FreeBSD, as well as testing systems deep and wide, from mainframes to web applications.
Each year Agile Alliance brings together attendees, speakers, authors, luminaries, and industry analysts from around the world in a one-of-a-kind conference. The Conference is considered the premier Agile event of the year, and provides a week-long opportunity to engage in wide-open interaction, collaboration and sharing of ideas with peers and colleagues within the global Agile community.
1. Hi, my name is Craig Smith, I'm an Agile editor at InfoQ, and we are here at Agile 2013 in Nashville, Tennessee, and it's my great pleasure to be sitting here with Chris McMahon. How are you doing, Chris?
I’m well, nice to see you.
Craig: Thanks for taking the time. You are the QA Lead at the Wikimedia Foundation, right? That must be a very exciting job, working for such a recognizable brand.
It is, and it's quite challenging. I founded the software testing and QA practice a little over a year ago, and it's starting to really mature. I brought the talk here because I thought it would be of interest to people in the Agile community as well.
2. Absolutely, I made it along to your talk a couple of days ago, and I guess that is why I really want to share it with the viewers. Your talk was "Radically Open Software Testing", and it was an interesting insight into how you go about testing on the Wikipedia/Wikimedia side of the equation, and into some of the challenges you face in an organization that is not only open, but also not funded in a way that lets you have thousands of testers running behind the scenes?
Yes, it is. We rely on a really enormous community of volunteers to keep Wikipedia running. The number of staff at the Foundation is remarkably small, and so for all of these projects, we have a mandate, of course: our mandate is free knowledge, the sum of all human knowledge available to everyone for free. We can't do that with just a couple of hundred people at the Foundation; it takes thousands of people to bring that vision about.
Craig's full question: So when people think of Wikipedia, it's among the top-ranked sites in the world, I'm not sure exactly where it ranks but it's pretty high. I would assume people really underestimate what it takes to test a site like that, and you have very few testers for the size and the usage of that particular site, right?
Our entire QA staff, I believe, at this point numbers four, so we rely on really dedicated users who keep us informed about the performance and usability of our features. But we are also coming to rely on volunteer testers, and this is one of the things I wanted to emphasize: our test efforts are not only open for scrutiny, like all the rest of our work (our source code, our server configuration, everything is open for scrutiny), they are also open for participation. People can come along and actually participate in the running of Wikipedia, the biggest encyclopedia in human history and usually the number five website in the world.
Craig: So I think a lot of people viewing this may be QAs, or may know QAs, who work in large organizations making software, and they probably go "well, it's different, because it's Wikipedia". But there is still a lot of testing that has to go on to make the site that we know and love when we try to settle a bar argument with somebody at 3 o'clock in the morning.
There is. Actually, if you go back in the computer science literature, there are really three things you can do to improve the quality of software. One is code review; code review is an enormous part of our development practice at the Foundation and for all of Wikipedia. Another is fast feedback, and we have been moving towards continuous deployment: when I started a little over a year ago, we had several releases in a year; about a year ago we moved to a two-week release cycle; recently we moved to a one-week release cycle. We are on track to deploy software to Wikipedia as frequently as we possibly can. We don't quite know how fast that is going to be yet, but we are happy with the progress we've made. And the final thing, of course, is testing. A big part of what I do is browser test automation. This is a new project at the Foundation, and browser testing is notoriously difficult to do well. I had two goals for this project: one was to make it an absolutely world-class implementation with the best tools available for browser test automation, and the second was to keep the barrier to entry as low as I possibly could, because, as I said, the more contributors we have to this project, the happier we'll be.
4. So on the browser test automation front, in a previous life that's been something you have been involved with through some of the other tools. Tell us a little bit about your journey in this field.
Sure. The very first useful open source browser test automation tool was called Watir, Web Application Testing in Ruby, and I was user number one for Watir. Watir was maybe most famous as the first browser test automation tool, commercial or open source, to support frames, and it supported frames because I asked for frames. Paul Rogers wrote the code for it; he told me it would take three days, and I explained that I had been waiting for three years, so he could take his time. Later, when ThoughtWorks released Selenium, I happened to be an employee of ThoughtWorks at the time the very first version came out, so I was also one of the very first people to ever see Selenium. I've been doing open source browser test automation for as long as it has been possible to do that.
Craig: And it’s interesting, you are talking about open testing now at Wikimedia, Selenium came very close I guess to not being an open standard.
Therein lies a tale. It may or may not have been possible to patent the mechanism by which Selenium version 1 actually worked, but we'll never know, because it was released as open source, and so today we have this wonderful open source browser testing tool in Selenium.
5. And then I guess we've moved on to the tools. A lot of people may know Selenium through the Selenium IDE, which I think we both agree is bad practice to use for running your tests, but more importantly, version 2 brought WebDriver.
6. So I want to come back to what you are doing there, but going back to Wikipedia: if I'm interested in helping out with testing, which is your kind of call to action, I'm assuming it's the same as the rest of the site, that everything is open and I can see exactly where you are at with testing or any of those other things, right?
Yes, we have an astonishing number of communication channels. We have a set of wikis; probably the most central one is mediawiki.org, which has all of our development information, project statuses, and feature documentation. We have dozens of mailing lists; at lists.wikimedia.org there is an array of mailing lists on every topic you can imagine. One of them is QA: it's a very low-noise, low-traffic, very focused group of people on the QA mailing list, and it's a real pleasure to participate. But yes, any question you can possibly imagine about software development at Wikipedia, the answer can be found somewhere: on a mailing list, in the open archives, on a wiki page. We are very public and very open about every aspect of what we do.
That's a good question. There is of course a reward, a personal reward, in helping keep the biggest encyclopedia in human history running. But as I said, I do intend to make these projects absolutely world class, so if you want to put on your resume, for example, "contributed tests to the daily run of Wikipedia software", you can do that. If you have a testing methodology that you want to prove out, if you want to show the world what you can do as a software tester, we will not only allow you to do that, we will encourage you to do that; you are welcome. This really is an astonishing attitude, unlike any other atmosphere in the commercial world, for certain.
Craig: So one of the other things you mentioned in your talk is that not only are the successes and the good things being talked about, but you've also tried a lot of things that others can learn from, by sharing not only your successes but also things that haven't gone so well.
I have had some failed experiments, let's put it like that. I have done some test activities that did not turn out as well as I would have liked, and partly, I'm certain, that was because I had wrong assumptions. But again, the remarkable thing is that all of the documentation for these experiments, both the successes and the failures, will always be available. You can watch where I edited the wiki page for the test charters badly and had to go back and fix it later. All of my mistakes are as well documented as the successes, so you can learn from the missteps as well as the steps forward.
8. So if someone is watching this now and they go "that sounds intriguing to me, this might be a good way to learn something new or to give back", what are some of the steps someone would take to start helping the cause?
I would suggest two things. One is the website at mediawiki.org; just browse, because really 99% of our software development effort is documented on mediawiki.org. The second is to join the mailing lists. Join whichever one strikes your fancy, but the QA list is highly recommended if you want to do software testing. It's a very friendly and supportive crowd of people, and there are a lot of newbies, a lot of people who have only begun to learn browser testing and test methodology; it's a real pleasure to work with the newbies.
Craig's full question: So you were talking about browser testing, which I guess is something you have a passion for as well, something you are trying to do better at the Foundation. What are some of the things people are going to come across, and how will they be able to help you move forward in that space?
As I said, it is my belief that browser testing is notoriously difficult to do well. We have a code review system in place at the Foundation: code has to be reviewed, and it often is reviewed for days, with patch sets, sometimes dozens of patch sets, as we hone the code to be as good as it possibly can be. I don't have very many people in a position to review code right now. We are also working on building shared test environments, and our shared test environments have aspects that are fragile; anybody with a background in system administration who would like to help us with the test environments, that would be wonderful. The features are always evolving and my tests have to evolve along with them, so I have failing tests all the time. Anybody who wants to take a look at the failing tests and give us a heads up that they found what looks like a bug, that would be enormously helpful and productive, and not only for me: it would be a way for contributors to learn, to analyze, and to figure out what state-of-the-art browser testing looks like and what a state-of-the-art test environment should be.
It's an enormous range: anything you can imagine that is in place for Wikipedia itself is being used in our test environments. I have two main test environments. One emulates the entire cluster of Wikipedia wikis; it's a big system and it's getting very robust, though it still needs some attention, and it is turning into the canary for our nascent continuous deployment project: we automate things there before we automate them in production for Wikipedia itself. I also have a node on the cluster, a peer to the English Wikipedia and the German Wikipedia and the French Wikipedia, that we can use for test purposes as well. It's running the production database, production code, production everything, but we can hack on that wiki any way we want to. It's a very powerful thing to have access to this kind of infrastructure.
We are building out a wide range of unit and API tests, and we have an institutional continuous integration system with Jenkins running those. My browser test effort sits somewhat apart from that, for technical reasons having to do with operating systems and versions of software: the tests themselves run against Wikipedia test environments, but on a third-party provider. So there is a wide variety of infrastructure in place, and a number of ways to test aspects of that infrastructure.
Yes. As I mentioned, I wanted to make the barrier to entry as low as possible, and one of the things Acceptance Test Driven Development gives you, with Cucumber in this particular instance, is a way to contribute for people who understand how the software should work but don't necessarily want to write code or analyze failures. Anyone who has a vested interest in a particular feature of Wikipedia is absolutely welcome to provide an automatable test for our community of test automation contributors. We have a number of people writing these automated browser tests today, and we would love to have more people with a deep understanding of, and a vested interest in, the continued proper function of feature X or feature Y. We invite you to contribute an automatable specification in the form of a Cucumber Given/When/Then statement: given some initial condition, when I take some action, then I should observe some result. This is our greatest need right now: experts on particular features who can craft very thoughtful acceptance tests for those features.
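To make the Given/When/Then shape concrete, a contributed specification might look like the sketch below. This is a purely illustrative example (the feature name and steps are invented for this article, not taken from the actual Wikimedia test suite); a feature expert writes only this plain-language file, and the automation contributors wire each step to browser automation code behind the scenes.

```gherkin
Feature: Search suggestions
  As a reader, I want search suggestions while typing
  so that I can find articles quickly.

  Scenario: Suggestions appear while typing
    Given I am on the main page
    When I type "Selen" into the search box
    Then I should see a suggestion containing "Selenium"
```

The value of this format is exactly what is described above: the specification captures the domain expert's knowledge of how the feature should behave, while remaining automatable by someone else.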
Craig's full question: Do you have any hope that some of the learnings you are going to get from this, from crowdsourcing testing, from opening up testing, are things that the wider Agile community can learn from and maybe even implement in more traditional organizations?
Yes, I think one of the things we provide is a fishbowl, since everything is so public and both our successes and our failures are so visible. My hope, my ambition, is that we can take these successes and these lessons learned and move them into other software development projects that may not be open, that may be commercial or enterprise. As someone who has seen far, far too many browser automation projects go down in flames, my most ambitious hope is that people can come and see what we are doing with our browser automation project and emulate our approach, so that this type of testing becomes more widespread, more robust, better supported, and better understood.
I would distinguish between what we are trying to do at the Foundation and crowdsourced testing like uTest. uTest has enormous ranks of testers with no specific domain knowledge. What I am trying to encourage for the Foundation is to bring people in, give them that domain knowledge, and have them be a part of the ongoing work, rather than calling people in for a bug bash for 30 minutes on a Saturday morning. I would really like to work with people who have, as I mentioned before, a vested interest: people who have become domain experts, people who have become experts at executing and analyzing these automated tests, and who understand that a test fails because the nature of the software they are testing has changed. I'm less interested in a uTest type of model with thousands of naive testers; I would far rather have a handful of really expert testers, or people willing to become expert testers.
Craig: But I guess by refining that way of getting people up to speed on domain knowledge, hopefully that will translate into better learnings for enterprises, where we bring on, say, new testers or even people from other departments to help out.
Absolutely and again I think that there are lessons here for anyone working in software development and software testing.
Craig: So I guess you would encourage anybody who is interested in either upskilling themselves in some newer technologies or just giving back to the community: there is a place here where you can come and very quickly not only give something back but also learn something in the process.
Very much so.
I use my name everywhere: I'm @chris_mcmahon on Twitter, and I have a blog called Chris McMahon's Blog. At the Foundation, we have dozens of channels on IRC, we have mediawiki.org, and we have the mailing lists at lists.wikimedia.org. Whatever your favorite mode of communication is, somebody is listening for you.
Craig: So if people would like to know more about what you've been talking about in general, they can reach out to you, and if they want to contribute to Wikimedia testing, they should make sure to hit the QA mailing list. Excellent. Well, thanks very much for your time, Chris, it's been a pleasure.
My pleasure, thank you!