
Learning Fast in Design, Development and DevOps


Delivering the right products fast can be challenging, certainly when there are many unknowns along the way. If you want to build products fast in a context of high uncertainty, you need to be able to learn fast and efficiently, said Ismaël Héry from Le Monde. At the Lean Kanban France 2014 conference he gave a presentation about learning fast to build fast.

Le Monde is a French news brand that publishes papers and provides online content via free and premium websites and mobile apps, with millions of readers each month. They are looking for ways to compensate for the drop in print revenues with online revenues. When developing new products they face lots of unknowns. In their market it is important to deliver great products fast, so they are constantly looking for ways to shorten the path to delivering products that satisfy the needs of their customers.

InfoQ interviewed Ismaël about maximizing learning in product definition, software development and operations; tools for development and operations; establishing a DevOps culture and improving collaboration; and increasing the pace of learning.

InfoQ: Can you explain what kind of problems you experienced in product definition and UX design?

Ismaël: Like anybody facing new product development challenges, the main questions we face are:

Who are our users? What are their main problems? For instance, when building our new CMS we needed to recognize and understand the very diverse profiles of journalists that we have at le Monde, in terms of age, main objectives, geekiness, level of pressure at certain points in time, etc.

Will our solutions work as expected once in the users’ hands? E.g. in 2013 we developed a new premium home page (la “Une” in French) for paying subscribers. This new home page was structured and animated completely differently from the free home page. The questions we faced were: does it work from an editorial point of view? How often should we update the content in this part? How many journalists do we need to make that part work?

And of course, for every feature used by readers we wonder whether the reader needs it, and how he or she will use it.

InfoQ: What about development and operations, what were the main problems in those areas?

Ismaël: In terms of learning, any software development effort brings a lot of unknowns that need to be answered/discovered during the project:

Should we use this framework or that one? Lately we moved to a full JavaScript stack for our CMS: what are the most appropriate frameworks, libraries and testing tools in our context?

How long does it take to develop that kind of feature? Even when variability is very high you want to know the typical effort needed for your typical feature (e.g. average complex screen or interaction) as fast as you can.

Is it possible to have automated tests with this new technology? How easy will it be to write those tests? To integrate them into CI? How long will the team need to master writing those tests while complying with the best practices (still unknown themselves)?

On the ops side, typical unknowns we face are:

  • What is the best strategy to keep up with the rhythm of updates of this new technology?
  • What would be the main weaknesses of this system once in production?
  • What are the most important metrics and dashboard we should have? What about alerting?
  • Where will the next bottleneck appear when user load increases and we get 10x more users? At the database level? On the web servers? Elsewhere?
  • How can we set up a zero-downtime deployment mechanism on this new system?

Any project/product planning we do is implicitly based on a lot of assumptions about those questions (or worse, on ignoring that those questions exist and need answers), but at the beginning we really have no idea about most of them.

Optimizing execution with traditional best practices is very wise; optimizing learning and discovery on those questions is crucial!

InfoQ: You explored different ways to maximize learning in the domain of product definition, software development and operations. What kind of practices did you use? Can you give some examples?

Ismaël: For the product part:

We’ve been doing design studios for more than a year: these are very short sprints of rapid prototyping on paper with several users or key contributors. They generate a lot of ideas very fast and can kill dead ends very early!

We also do more and more rapid prototyping with interactive wireframes created by the product managers themselves. Those (sometimes very convincing) prototypes allow us to get user feedback very soon and are a lot cheaper than actual coding.

User testing is of course another key element in the product manager’s toolbox. We don’t do it systematically, but month after month we test our assumptions better, by observing users interacting with our products in conditions as close as possible to real situations.

Those practices and tools may seem pretty obvious now, but it is still a big cultural shift from 'thinking we know best what our users want and are willing to pay for' to relying more on discovery and user testing. This movement is still ongoing, but we have had really impressive improvements recently!

We also favor practices that maximize learning for all the actors whether they are from product, devs, or ops:

Very early deployment to production is the main one. We tinker with our technical systems and project plans a lot to be able to deploy parts of the end product to production quickly. At this stage it’s often really broken, or sometimes not even usable by the end user, but even in that case the learning is HUGE:

  • it focuses product people on the real number-one problem. We have plenty of ideas and deep beliefs about what problem number one is… until we see real users using our product in real conditions.
  • it focuses tech people on the real problems. We need to think about what may be the weakest element, or where the system will fail when load increases, etc. Early deployment to prod means early, undebatable reality checks on those matters.
  • we have complex web systems that we can’t reproduce completely before production. From an architectural point of view, a lot of learning cannot occur in pre-production environments.
  • of course we face a lot of incidents with those (too) early deployments to production. This way, lots and lots of post-mortems are done during the project, and we end up with a more robust and well-managed system (monitoring, alerting, logs…) when the real release date comes.
  • not about learning per se, but touching production very early increases morale and commitment!

Of course, deploying very early comes with costs, mainly in engineering but also in product development, when you spend a lot of time and energy with your beta users, trying to get them to use your not-yet-finished product and gathering feedback.

And finally, we also use a very powerful practice to maximize learning when we do prioritization: when facing a choice between two paths with roughly the same return on investment, we prefer to choose the path of maximum potential learning.

InfoQ: Can you give some examples how you do user testing? What did you learn from observing users?

Ismaël: For the most mature teams/products, we typically identify an area and a hypothesis we want to verify (typically, “does this feature solve this user problem?”). The product manager, sometimes supported by a UX expert, then prepares the test itself. The test is a sequence of exercises asking the user to try to do something.

For each exercise we have a checklist to ensure we get all the information we can from the test:

  • what hypothesis do we want to test?
  • what exercise is the user asked to do?
  • did the user get it slowly or quickly? Did they need help (which counts as a failure)?
  • how many mistakes did they make?
  • any hidden or explicit signs of appreciation from the user?
  • anything else worth noting?

There are always two people: one running the exercise and interacting with the user, as lightly as possible, and a scribe who silently takes notes on everything.

Every round of user testing taught us a lot. Things we thought would be no-brainers turned out to be very hard to grasp; on the contrary, things we thought would be very hard turned out to be obvious to the users!

The meta learning we continuously get is that our intuitions about user behaviors are wrong almost 50% of the time …

InfoQ: What kind of tools do you use to develop products and to deploy frequently?

Ismaël: We have several technical platforms: PHP for the web front end, iOS and Android for the mobile apps, and nodeJS for our CMS.

We use custom tools for web deployment; they do mostly what Capistrano does, but we are not very used to Ruby, and devs and ops love to customize those tools to fit the context.

Beyond that, we have a classical toolbox with git and Jenkins. We write lots of unit tests on the most recent stacks, and end-to-end tests simulating users on the interface.

We use Graphite/Grafana and Kibana for metrics and log analysis, and M/Monit and PagerDuty for alerting. The dashboards we have with those tools are critical to push to production very frequently and be able to detect problems easily, deep-dive into logs and metrics, and seek correlations between events (typically a push to production and a change in performance).

InfoQ: Are there any other technical practices that you are using that are worth mentioning?


Ismaël: We don't do a lot of pair programming, but systematic code reviews are now part of our technical culture and habits.

InfoQ: Since the domains you mentioned are related, you needed to balance learnings. Can you elaborate on how you did that?

Ismaël: When responsible for getting the product designed, built and operated, we are not interested in having the world’s top-performing UX team if devs and ops are not able to push to production at least once a week. Conversely, what’s the point of delivering and even deploying “working” software every day if the UX is a joke?

I think teams and management have a great responsibility to maintain a holistic understanding of what’s needed to get the product done, and to keep an eye on the best ways of doing each activity (don’t be obsessed only with Lean UX or Agile Software Development or DevOps). We prefer to have a coherent level of learning across those different domains.

InfoQ: Can you give some examples of what managers have done to support collaboration between development and operations?

Ismaël: As managers trying to increase collaboration between dev and ops, we 1) remove friction and empower the devs, and 2) increase teamwork and empathy.

To remove friction and empower devs, we try to avoid relying on ops when devs can do things correctly and safely by themselves:

  • Give devs the ability to push to production
  • Give devs the ability to use and manage monitoring dashboards
  • Give devs access to production servers
  • Have some devs on call (for some of our products)

We still have a lot of improvement ahead of us, particularly to be able to set up dev and test environments more easily and without huge ops effort.

To increase empathy:

  • We organize architecture meetings with both of them when we have the feeling they may avoid or simply forget each other on a particular point
  • Ops are invited to regular and major project meetings (stand up meetings, sprint rituals…)
  • Devs are invited to ops tech talks and ops to devs tech talks
  • We run a lot of post-mortems and pre-mortems (before a big roll-out or before a presidential election night, for instance) with both devs and ops. Those moments are really exciting from a manager’s point of view, because you observe incredible creativity, collaboration and teamwork that lead to countermeasures or improvements that no one would have thought of in isolation!

Of course we are looking for natural collaboration without relying on management interventions, but sometimes management needs to support it.

InfoQ: Can you share how this learning initiative helped to establish a DevOps culture and improve the collaboration between development and operations?

Ismaël: It’s mainly about pushing to production very soon during the project and then doing it very often. Thus dev and ops have more time to face typical dev/ops border problems and to solve them together all along the project, not just at the end, in the worst period, when dev is overwhelmed with bugs and feature requests and ops is in a panic with incidents and load increases. It’s a way to avoid the typical throw-over-the-wall-at-the-end effect that really damages the relationship.

But this approach needs coaching and management attention, since it is still hard to accept for some devs and ops who like to avoid contact until the last possible moment…

InfoQ: Can you share the main learnings from Le Monde for people who would like to increase the pace of learning in their organization?

Ismaël: Deployment to production generates by far the biggest amount of knowledge for all the actors (product, dev, ops), by an order of magnitude.

Leveraging practices and tools that favor both execution and learning is very powerful.

Personally, after years of managing and coaching teams designing, building and operating new products, I view the learning side of the equation as just as important as the execution side, even if customers and stakeholders pay for what gets executed in the end, of course!

About the Interviewee

Ismaël Héry lives in Paris. He manages or coaches IT, product and management teams, to help them build and operate great products and services effectively and efficiently. He has worked with companies ranging from media to banking and industry. He is currently working at the biggest French news brand, le Monde. He picks tools and insights from Lean, agile, DevOps, coaching and leadership. Sometimes he writes stuff at www.behindthatquote.com.
