Bio: Graham Lee is a mobile app developer and security consultant at Agant Ltd. He is also the author of three books on writing software, the most recent of which is APPropriate Behaviour, a book about the things programmers do that aren't programming. Graham is based in Leamington Spa in England's Midlands.
Software is changing the world. QCon aims to empower software development by facilitating the spread of knowledge and innovation in the enterprise software development community. To achieve this, QCon is organized as a practitioner-driven conference designed for people influencing innovation in their teams: team leads, architects, project managers, and engineering directors.
1. I’m interviewing Graham Lee at QCon London 2013. Graham is a security boffin and the lead developer of the Discworld app at Agant Ltd, based in Leamington Spa. Graham, how did you get involved with the Discworld app?
Well, that came about when I was talking to Dave Addey, the managing director of Agant, about the possibility of working with him, and he said, “Oh, we’ve got this really cool project coming up, but I can’t really tell you what it is yet.” So my first introduction to the company was essentially “How would you like to develop this app with Terry Pratchett?” As soon as I picked my jaw up off the floor I thought, “That sounds like a good idea, why don’t we do that?” It took about six months to do, I suppose, but it was something that was “in” from day one: they’d already been through the pitch process (it was actually funded by the publishers, Transworld) and had their pitch approved and so on. So essentially I had the task of taking this great series of books that I’ve been reading for 20 years and making a thing out of them, which was both great and really, really scary.
The app is mainly based on a book called “The Compleat Ankh-Morpork”, which was written by the people at the Discworld Emporium, the world’s only (as far as I’m aware) Discworld-centred shop, down in Wincanton. They produced this book that has a kind of A to Z of Ankh-Morpork, the street index and these fantastic hand-drawn maps of the city, and largely we were adapting that content for the app. But we also went through some of the Discworld novels themselves and extracted quotes relevant to different characters and different locations, and we had some nice artwork provided: some of it from the Discworld board game, some of it from The Compleat Ankh-Morpork book itself. So largely the content existed and we were just adapting it to the medium.
Well yes, it’s a very different experience using the app than reading the book. The book, as I say, is a kind of A to Z guide to various businesses in the city, and it’s got some long-form text sections on the guilds, on tourist information and so on. It’s as if you’ve just arrived in Ankh-Morpork and this is your guide book. But because we had this touch-screen device, we knew we could let you zoom in on the map and pan around in it, getting so close to the streets that you could see people walking around the city. So that’s what we did: we added these people who each have their own paths that they wander around the city, and you can see some of the characters from the books, like Rincewind the Wizzard, wandering around, and smoke coming out of the chimneys. My first task on the project was to build a version of Apple’s MapKit framework that didn’t rely on the world being round, which turns out to be a significant obstacle to using it in this app.
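The flat-world tiling Graham describes can be sketched roughly as follows: at each zoom level the map doubles in rendered size, and a map point falls into a tile by simple division. All names and numbers here are illustrative; this is not code from the actual app.

```python
# Rough sketch of the tile arithmetic a flat (non-spherical) map layer
# needs. TILE_SIZE and the function name are invented for this example.

TILE_SIZE = 256  # pixels per tile edge

def tile_for_point(x, y, zoom):
    """Return the (column, row) of the tile containing map point (x, y).

    (x, y) is in base-map pixels; at zoom level z the map is rendered
    at 2**z times its base size, so the point lands in finer tiles.
    """
    scale = 2 ** zoom
    return int(x * scale) // TILE_SIZE, int(y * scale) // TILE_SIZE

# A point near the middle of a 22,000 x 17,000 pixel map:
print(tile_for_point(11000, 8500, 0))  # coarse tile when zoomed out
print(tile_for_point(11000, 8500, 3))  # a much finer tile when zoomed in
```

Apple's real MapKit bakes spherical Mercator assumptions into this arithmetic, which is why a flat world needs its own implementation.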
4. It sounds like, if you’ve got this kind of Marauder’s Map of people walking around, and you’re going to the level of detail of smoke coming out of the chimneys, that could be incredibly taxing, especially for some of the older, first- or second-generation devices. How did you do all of this and still keep the performance fluid?
This was actually one of the main risks running throughout the whole project: we were pushing the hardware as hard as it would go, in some cases trying something and dialling it back a bit until it started working properly, then building other things and realizing that the interaction between those features caused a different performance problem. There are a lot of graphics in there. We found it takes a long time to load the graphics off the solid-state storage, and then once you’ve got them loaded they use a lot of memory. So from about a month into the project I was frequently running the app in Instruments, Apple’s performance measuring tool, rather than just in the IDE directly, and either rejecting or reworking changes if they caused more of a performance cost in pretty much any dimension: CPU time, memory consumption or storage. We couldn’t really trade one off against another because we’d run up to the limits of everything. It turned out to be a really hard problem throughout the project.
I actually like that you’ve used that quote, because I find it’s often taken out of context to mean something completely different from what I think Knuth was talking about. It’s from Knuth’s paper “Structured Programming with go to Statements”, and the bit that everyone quotes, the bit you were referring to, says “premature optimization is the root of all evil.” It’s in a paragraph where he says that the important thing is to know what the efficiencies and the inefficiencies of our programs are, to find the painful bits and to optimize them. So premature optimization refers to optimizing the bits that aren’t the problem. In the case of the Discworld app, the available resources (the limited memory of an iPad, the comparatively slow ARM CPU and the limited storage; this app is about a 750 MB download) all together meant that the thing was severely constrained. So no, it wasn’t premature optimization at all; it was trying to find a way for the thing to run. We eventually had to decide that we couldn’t support the iPad 1, because any optimization we made to try to even make it run on that early device, which has a lot less RAM and a much slower single-core CPU than all of the later iPads, just hobbled the experience on the newer hardware.
6. The amount of graphics and content in there must be taxing, both for the initial download and for loading and displaying it. Do you have different versions of the artwork for retina and non-retina devices, for example? Or, in your flat-world map kit, do you have one level of granularity when you’re zoomed far out and then replace that with more detailed artwork when you zoom in?
So, on the map we use a tiled layer, which Core Animation provides: a layer that allows you to draw square sections of the map independently of each other, and as you zoom in we load higher and higher resolution tiles. When you’re zoomed out completely we turn the tiled layer off and you’re just seeing a static image of the whole map. Now, the map as supplied by the Discworld Emporium is, I can’t remember the specific resolution, about 22,000 pixels by 17,000 pixels. It’s a large image; I think we’ve got it as a 1.2 GB Photoshop file or something. So yes, it took work to get that into a state where you could display it at high enough resolution to look good on a retina display without sucking up a load of memory or taking too long. JPEG images take up a lot less space, but in return they’re quite heavily compressed, so you’ve got to run through the decompression algorithm: the more you save on space, the more you end up paying in CPU time. What we ended up doing for a lot of the artwork, particularly the animation artwork, was using PNG images, but PNG images with an indexed colour palette rather than true-colour bitmaps. Instead of being 32-bit colour with 8 bits per channel and an alpha channel, they just have a collection of colours specified in an array and then an index into that array to say what colour each pixel is. That saves a lot of space over a regular PNG image, which means it’s quicker to load from disk, but then you have to spend time expanding this palette information before you can put the thing onto the GPU. So it actually ends up taking about the same amount of time to load one of these compressed images as the original, but it made the download a lot smaller.
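The trade-off behind indexed colour can be sketched like this: store one small palette of RGBA colours plus one index per pixel, then expand back to full RGBA before uploading to the GPU. The names here are invented for the example; real PNG palettes are handled by the image decoder.

```python
# Illustrative sketch of the space saving from indexed colour.

def expand_indexed(palette, indices):
    """Decode indexed pixels back into full RGBA tuples (the CPU cost
    paid at load time in exchange for the smaller file)."""
    return [palette[i] for i in indices]

# A 4-colour palette and a tiny 4x4 image of palette indices:
palette = [(0, 0, 0, 255), (255, 0, 0, 255), (0, 255, 0, 255), (0, 0, 0, 0)]
pixels = [0, 1, 2, 3] * 4

rgba = expand_indexed(palette, pixels)

# On-disk cost: one byte per pixel plus the palette, versus four bytes
# per pixel for raw RGBA. The gap widens as the image grows.
indexed_bytes = len(pixels) * 1 + len(palette) * 4
raw_bytes = len(pixels) * 4
print(indexed_bytes, raw_bytes)
```

Even at this toy size the indexed form is half the raw size; for large animation frames with few distinct colours the saving is far bigger.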
We had a few different strategies. For a start, a lot of the components in the app are unit tested. There are bits where I didn’t think that was appropriate: there’s a lot of custom drawing code, and it can be hard to work out what it is that makes a drawing correct. Can you assert that a drawing has the properties you expect? That’s a hard question to answer, and also, because you are drawing into essentially a global variable, the Core Graphics context, it can be very hard to set up a mock Core Graphics environment to test against. So that bit we didn’t test so much. But a lot of the logic was built in a test-first style: there’s an achievement system that uses Game Center that is 100% test-driven, and the map kit itself was test-driven. We then used the Jenkins continuous integration system to ensure that we weren’t introducing any regressions. From a very early stage in the project we had internal alpha testers who were trying to use the app, reporting problems, reporting crashes. I think we only had about 3 or 4 crashes throughout the entire development cycle, which might be more luck than judgment, I’m not sure. And then we had quite a long beta phase over the Christmas period with a bunch of beta testers: Discworld fans and developers and, of course, developers who are Discworld fans as well. We also got external testers in to do a systematic QA sweep of basically all of the features in the app.
One of the most difficult things to test, just because it was hard to see how to automate it, was that people have to walk behind buildings on the streets. The map is drawn in a kind of perspective projection, but what we were given was a scan of this hand-drawn, absolutely huge image, however many hundred megapixels it was. So Dave, through some Herculean effort, went over it in Photoshop, traced all of the roofs of the buildings using a Cintiq tablet and extracted them as images, so we could draw the map, then draw the people walking on the map, then draw the buildings on top of the people. The question is, if you can see all of a person, is that because you should be able to see all of them? Is it because there’s a building we should have drawn over them that’s missing? Is it because the building is there but the Z-ordering is wrong? That just took a lot of manual staring at the app, waiting for people to walk past particular buildings, to work out whether they were being masked correctly or not.
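The map-then-people-then-roofs ordering is essentially the painter's algorithm, and a toy version makes the failure mode clear. Everything here is invented for illustration.

```python
# Minimal painter's-algorithm sketch: paint layers back to front, so a
# roof drawn after a person masks them, just as described above.

def composite(layers, width, height):
    """Paint sparse layers back to front onto a character canvas;
    later layers overwrite earlier ones."""
    canvas = [["." for _ in range(width)] for _ in range(height)]
    for layer in layers:
        for (x, y), glyph in layer.items():
            canvas[y][x] = glyph
    return ["".join(row) for row in canvas]

ground = {}                          # empty ground, drawn as dots
person = {(2, 1): "P"}               # someone walking down a street
roof = {(2, 1): "#", (3, 1): "#"}    # a building roof drawn over them

rows = composite([ground, person, roof], 6, 3)
print("\n".join(rows))
```

If the roof layer were missing, or drawn before the person, "P" would show through, which is exactly the class of bug that had to be checked by eye.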
Alex: Sounds a bit difficult to hook up in Jenkins.
We use a script called ocunit2junit.rb, which takes the output from OCUnit (which is built into Xcode), so it was really easy for me and the other developers to write our tests in OCUnit; the script produces the same XML output that JUnit does, and Jenkins knows how to parse that and report failures to the dashboard. As well as getting build failures, we had the “treat warnings as errors” flag on, so any compiler warning would fail the build and be reported. We also had the Clang static analyzer: there’s a Jenkins plug-in that allows you to run the static analyzer on a built product, so any reports from that tool would also be marked as build failures, and it produces an HTML report that can be read as part of the build output. And finally, the OCUnit tests were being reported in a way that Jenkins could interpret and present on its dashboard as well.
Well, what I would say is that there are a few buildings, particularly some of the landmarks, where maybe you should just hover over them, have a look at some of the larger buildings, and see if someone, for example, were to poke out of a window.
I’m currently working on a project, again for Agant, with a company doing some augmented reality work. So this is about image recognition and about overlaying content on top of, in this case, a real product. I’ve done some augmented reality work before: I used to work for O2 and we made an app called “Up at The O2”, which is about an experience where you can climb up over The O2 (what used to be the Millennium Dome in London) and see the views from the top, and the app overlaid information about which building was which; so if you didn’t know that the big pointy one was the Shard, you could see that in the app. That’s location-based augmented reality. The thing I’m working on at the moment is about adding the app’s UI as if it’s in the universe of an object: you might have something where, if I was holding the app up towards the wall of this studio, you would see something that’s in the plane of the wall rather than a static thing presented over the iPhone screen.
Alex: So presumably you have to do quite a lot of analysis of the image to find out where the flat surfaces are, and then do the necessary mapping in 3D to make it look like it was a poster on the wall.
Yes. I’ve learned more about OpenGL projection matrices than I ever intended to, doing this project.
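For a flavour of those projection matrices, here is a standard perspective matrix (of the kind gluPerspective builds) applied to a point in front of the camera. Pure Python, row-major convention, for illustration only; a real AR pipeline would also need the model-view transform recovered from image tracking.

```python
# Standard OpenGL-style perspective projection, then perspective divide.

import math

def perspective(fov_y_deg, aspect, near, far):
    """Build a 4x4 perspective projection matrix, row-major."""
    f = 1.0 / math.tan(math.radians(fov_y_deg) / 2.0)
    return [
        [f / aspect, 0.0, 0.0, 0.0],
        [0.0, f, 0.0, 0.0],
        [0.0, 0.0, (far + near) / (near - far), 2 * far * near / (near - far)],
        [0.0, 0.0, -1.0, 0.0],
    ]

def project(m, p):
    """Apply the matrix to a 4-vector and do the perspective divide."""
    x, y, z, w = (sum(m[r][c] * p[c] for c in range(4)) for r in range(4))
    return (x / w, y / w, z / w)

# A point two units straight ahead projects to the screen centre:
ndc = project(perspective(60.0, 4 / 3, 0.1, 100.0), (0.0, 0.0, -2.0, 1.0))
print(ndc)
```

Points off the camera axis pick up non-zero x and y in normalized device coordinates, which is how UI drawn "in the plane of the wall" ends up foreshortened correctly.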
Alex: Well, we’ll certainly look out for that in the App Store. The Discworld app is available now, and your presentation on “Testing iOS Apps” will be available on InfoQ. Graham Lee, thank you very much.