Technologies for the Future of Software Engineering

The Cloud, infrastructure as code, federated architectures with APIs and anti-fragile systems: these are technologies for developing software systems that are rapidly coming into focus, claimed Mary Poppendieck. She spoke about the future of software engineering at GOTO Berlin 2016.

If you have too much data to fit on one computer, you have two options: scale up or scale out. Scaling up by using a bigger computer is usually not the right direction to take, argued Poppendieck. You need to scale out by adding more computers and building systems of computers.

Poppendieck mentioned different ways to scale out:

  • Scale out files: an example is the approach that Google uses for searching. Files are split up into small pieces which are copied to multiple servers. Then the searching is done in parallel, and the results from all servers are combined into one search result.
  • Scale out architecture: Amazon does this by breaking transactions into services and using specific servers to do the services. If there’s a bottleneck, then you can replicate the service on multiple servers. Each service is owned by a semi-autonomous "two pizza" team.
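The file-based scale-out described above can be sketched as a scatter-gather pattern: the query goes out to every shard in parallel, and the partial hits are merged into one result. This is a minimal Python illustration with hypothetical in-memory shards, not Google's actual implementation.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical shards: each "server" holds a slice of the document set.
SHARDS = [
    {1: "lean software development", 2: "cloud infrastructure"},
    {3: "api design", 4: "software engineering"},
    {5: "failure injection testing"},
]

def search_shard(shard, term):
    """Search one shard; each shard is searched in parallel with the others."""
    return [doc_id for doc_id, text in shard.items() if term in text]

def search(term):
    """Scatter the query to every shard, then gather and merge the hits."""
    with ThreadPoolExecutor(max_workers=len(SHARDS)) as pool:
        partial_results = pool.map(search_shard, SHARDS, [term] * len(SHARDS))
    return sorted(doc_id for hits in partial_results for doc_id in hits)
```

Because each shard is small and searched independently, a bottleneck is handled by copying the shard to more servers rather than buying a bigger one.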

Systems are moving more and more towards the cloud; there is a cloud in your future, said Poppendieck. She stated:

The cloud is cheaper, more stable, more secure, and more expandable than most on-premises data centers.

Transforming existing applications into cloud-based applications can be challenging. Poppendieck quoted Arthur Cole from IBM:

Applications designed for traditional data architectures do not work well in the cloud without a lot of recoding.

Poppendieck showed several solutions for infrastructure as code which already exist:

  • Containers to standardize and automate your processes.
  • Serverless architecture provides flexible computing capacity at a lower cost.
  • Software defined networks to scale using software instead of hardware.
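The common thread of the solutions above is that infrastructure is described declaratively in code and then reconciled automatically, rather than configured by hand. A minimal sketch of that reconcile idea, with hypothetical service names (none of the specific tools from the talk are shown):

```python
def reconcile(desired, running):
    """Compute the start/stop actions that move the running state toward
    the declared desired state -- the core loop behind infrastructure-as-code
    tooling, sketched here in a few lines."""
    actions = []
    for service, want in desired.items():
        have = running.get(service, 0)
        if want > have:
            actions.append(("start", service, want - have))
        elif want < have:
            actions.append(("stop", service, have - want))
    return actions

# Hypothetical desired state, declared as code and kept in version control.
desired_state = {"web": 3, "search": 2}
```

Because the desired state lives in code, changing capacity is a reviewed, repeatable commit instead of a manual operation on a server.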

Having one central database creates a dependency problem: all applications depend on the database, and changing the database can impact many applications. Poppendieck stated that "the enterprise database is a massive dependency generator". Individual teams cannot deploy autonomously, as their work has to be coordinated with all other teams sharing the database.

Instead of having one database, you need a federated architecture, where data is split into local data stores appropriate to the needs of individual modules or services, and accessed only through APIs. APIs are replacing central shared databases, and they enable the internet of things. You have to learn to do software engineering with APIs, argued Poppendieck. Look at APIs as products which are owned by a responsible team, and focus on the API customers to evolve and develop new capabilities.
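The federated approach can be sketched as follows: each service owns a private data store and exposes it only through its API. The service names and methods here are hypothetical, chosen purely to illustrate the ownership boundary.

```python
class InventoryService:
    """Hypothetical service: owns its local data store and exposes it
    only through its API, never as a shared database."""

    def __init__(self):
        self._store = {}  # local data store, private to this service

    # --- the public API: the only way other teams reach the data ---
    def add_item(self, sku, quantity):
        self._store[sku] = self._store.get(sku, 0) + quantity

    def stock_level(self, sku):
        return self._store.get(sku, 0)


class OrderService:
    """A second service: it depends on InventoryService's API, not on its
    tables, so each team can change its own data store and deploy autonomously."""

    def __init__(self, inventory):
        self._inventory = inventory

    def can_fulfil(self, sku, quantity):
        return self._inventory.stock_level(sku) >= quantity
```

If the inventory team later swaps its dict for a different store, the order team is unaffected, because the API is the contract, not the schema.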

We need to stop trying to make systems failure free and start thinking differently, said Poppendieck. Many of our current systems are fragile; they started as robust systems but over time they have become difficult to maintain. Systems nowadays need to be anti-fragile, and able to embrace failure, claimed Poppendieck. When things go wrong, systems should be capable of limiting damage and recovering from failures.

Anti-fragile systems come from the way you test them: inject failures deliberately to make something go wrong. Poppendieck mentioned that your systems need to isolate failures and recover from them automatically in order to achieve today's expected levels of availability and robustness.
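A minimal sketch of failure injection, under assumed names: a test double injects failures at a configurable rate, and the caller isolates them by retrying and then degrading to a fallback instead of propagating the error.

```python
import random

class FlakyDependency:
    """Test double that injects failures at a given rate,
    simulating an unreliable downstream service."""

    def __init__(self, failure_rate, rng):
        self.failure_rate = failure_rate
        self.rng = rng  # seeded for reproducible tests

    def call(self):
        if self.rng.random() < self.failure_rate:
            raise ConnectionError("injected failure")
        return "ok"


def resilient_call(dependency, retries=3, fallback="cached"):
    """Isolate the failure: retry a few times, then degrade to a
    fallback value rather than letting the error reach the caller."""
    for _ in range(retries):
        try:
            return dependency.call()
        except ConnectionError:
            continue
    return fallback
```

Running such injected-failure tests routinely is what turns "we hope it recovers" into verified recovery behavior.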

Poppendieck mentioned aspects that are essential nowadays for developing software. She said that you need a deployment pipeline to be able to do continuous delivery, and you need cross-functional teams, including product management, testers, and operations, to get the benefits promised by continuous delivery. The deployment pipeline depends on automated processes for testing, migration, and deployment. Continuous delivery requires all teams to communicate through the codebase by doing continuous integration to the trunk. Teams keep the software always production-ready; if that's not the case, you have to stop and make it so. While deployment is continuous, release is incremental: a toggle or switch turns on a useful increment or capability whenever it is ready.
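The toggle-based release described above can be sketched in a few lines. The toggle store and feature names here are hypothetical; real systems typically read toggles from configuration at runtime.

```python
# Hypothetical toggle store; real systems read this from config at runtime.
TOGGLES = {"new_checkout": False, "faster_search": True}

def is_enabled(feature):
    return TOGGLES.get(feature, False)

def checkout(cart):
    # Deployed continuously, released incrementally: the new code path
    # ships dark and is switched on only when the capability is ready.
    if is_enabled("new_checkout"):
        return f"new checkout for {len(cart)} items"
    return f"legacy checkout for {len(cart)} items"
```

This is what decouples deployment from release: the code is already in production before anyone sees it, so flipping the toggle is a low-risk, reversible event.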

Continuous delivery provides essential end-to-end feedback, argued Poppendieck. Research indicates that product managers are wrong half the time, and that two-thirds of the features and functions in a specification are unnecessary. This is a consequence of trying to decide in detail what to build before running experiments to see if a feature really addresses the problem at hand. The real value of Lean/Agile development practices is that they can provide fast feedback from actual usage, ensuring that good solutions are developed for the problems being addressed. Poppendieck suggested moving from delivery teams to teams solving problems within meaningful constraints.

Develop systems using the fundamental engineering process of experimentation, learn within real constraints, and start with patterns or signals, not with requirements or features, recommended Poppendieck. Next, focus on problems and plan the work using hypotheses. Based on these, you run multiple experiments and use the data to decide how to continue.
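The hypothesis-driven loop above can be sketched as a simple experiment evaluation. This is a deliberately crude illustration with made-up numbers; a real experiment would also test for statistical significance before deciding.

```python
def evaluate_experiment(hypothesis, control, variant, min_lift=0.02):
    """Crude sketch of hypothesis-driven development: keep the change only
    if the variant beats the control by at least the lift the hypothesis
    predicted; otherwise discard it and try the next experiment."""
    control_rate = control["conversions"] / control["visitors"]
    variant_rate = variant["conversions"] / variant["visitors"]
    lift = variant_rate - control_rate
    return {
        "hypothesis": hypothesis,
        "lift": round(lift, 4),
        "decision": "keep" if lift >= min_lift else "discard",
    }
```

The point is the decision rule, not the arithmetic: the data from actual usage, not the original specification, determines whether the feature survives.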
