Concurrent and Distributed Programming in the Future

The world is concurrent, with everything around us asynchronous and event-oriented. Concurrency and the cloud are things every developer will have to deal with in the future, Joe Duffy claimed in his keynote at the recent QCon London conference. At the heart of this is communication, which is essential for both concurrent and distributed systems.

For Duffy, previously Director of Engineering for languages and compilers at Microsoft, distributed is really concurrent; essentially it’s multiple things happening at the same time. The only difference is that things are happening further apart from each other, for example, in different processes, servers or data centres. And this difference matters, especially for communication. Using shared memory or going over a network with latencies at the millisecond level brings different constraints and capabilities, and will affect the system architecture.

Both concurrent and distributed programming have their roots in early computer science, when problems were often modelled from an asynchronous perspective. For Duffy, Butler Lampson is one of the greatest thinkers in building distributed systems, especially when it comes to reliability, and he highly recommends Lampson's 1983 paper, Hints for Computer System Design. When multi-core CPUs started to appear in the 2000s, Duffy claims, we didn't invent a single thing in the concurrent programming space; instead, we went back to those early ideas and papers.

In the future, Duffy expects a return of distributed programming, with increasingly fine-grained distributed systems that look more and more like classic concurrent systems. We have learnt a lot about building concurrent systems, and he highlights seven key lessons:

  1. Think about communication first. It needs to be part of the architecture of any distributed application; ad-hoc communication leads to reliability woes. Actors and queues are examples of good patterns (see the queue-based sketch after this list).
  2. Schema helps, but don't blindly trust it. Servers always change at a different rate than their clients, and Duffy points to the Internet as a good reference for how well this can work.
  3. Safety is important, but elusive. Lack of safety creates hazards through races, deadlocks or undefined behaviour. Duffy's preferred form of safety is isolation; if that's not possible, prefer immutability, and if that's not possible either, resort to standard synchronization mechanisms (see the sketch of these three options after this list).
  4. Design for failures, because things will fail. Duffy thinks we should design for replication and restartability and notes that error recovery is a requirement for a reliable concurrent system.
  5. From causality follows structure. The cascade of events that leads to an action being taken can be very complex in a concurrent system. A context that flows along with the work can simplify keeping track of everything that happens (see the context sketch after this list).
  6. Encode structure using concurrency patterns to make it easier to understand a system. Two of Duffy's favourite patterns are Fork-Join and Pipeline, both sketched after this list.
  7. Say less, declare/react more. Declarative and reactive patterns are great for delegating hard problems to compilers and frameworks. He sees Serverless as a specialization of this idea with a single event and a single action.
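
As a minimal illustration of lesson 1 (this sketch is not taken from Duffy's talk, and the worker/jobs/results names are purely illustrative), the following Go program uses channels as in-process queues: workers communicate only by receiving jobs and sending results, never through ad-hoc shared state.

    package main

    import (
        "fmt"
        "sync"
    )

    // worker drains the jobs queue and reports over the results channel,
    // so all communication happens through channels rather than shared state.
    func worker(id int, jobs <-chan int, results chan<- string, wg *sync.WaitGroup) {
        defer wg.Done()
        for j := range jobs {
            results <- fmt.Sprintf("worker %d processed job %d", id, j)
        }
    }

    func main() {
        jobs := make(chan int, 8)
        results := make(chan string, 8)

        var wg sync.WaitGroup
        for w := 1; w <= 3; w++ {
            wg.Add(1)
            go worker(w, jobs, results, &wg)
        }

        for j := 1; j <= 8; j++ {
            jobs <- j
        }
        close(jobs) // no more work; workers exit once the queue drains

        // Close results only after every worker has finished.
        go func() {
            wg.Wait()
            close(results)
        }()

        for r := range results {
            fmt.Println(r)
        }
    }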
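
Lesson 3's ordering of safety mechanisms can be sketched in the same style (again illustrative, not Duffy's code): each goroutine keeps its own local values (isolation), the input slice is created once and only read (immutability), and a mutex is the fallback for the one genuinely shared counter (synchronization).

    package main

    import (
        "fmt"
        "sync"
    )

    func main() {
        // Immutability: written once here, only ever read by the goroutines below.
        words := []string{"alpha", "beta", "gamma", "delta"}

        // Synchronization fallback: a mutex guards the one shared mutable value.
        var mu sync.Mutex
        total := 0

        var wg sync.WaitGroup
        for _, w := range words {
            wg.Add(1)
            go func(w string) {
                defer wg.Done()
                // Isolation: n is local to this goroutine.
                n := len(w)

                mu.Lock()
                total += n
                mu.Unlock()
            }(w)
        }
        wg.Wait()
        fmt.Println("total length:", total)
    }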
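
Lesson 5's flowing context maps naturally onto Go's standard context package; the sketch below is an illustrative mapping rather than Duffy's example, with a hypothetical request ID travelling along with every downstream step so that each action can be traced back to the event that caused it.

    package main

    import (
        "context"
        "fmt"
        "time"
    )

    type ctxKey string

    const requestIDKey ctxKey = "requestID"

    // step reports which request caused it to run; the context carries that causal link.
    func step(ctx context.Context, name string) {
        select {
        case <-ctx.Done():
            fmt.Println(name, "cancelled:", ctx.Err())
        case <-time.After(10 * time.Millisecond):
            fmt.Printf("%s ran on behalf of request %v\n", name, ctx.Value(requestIDKey))
        }
    }

    func main() {
        // Attach a request ID and a deadline so every downstream action inherits both.
        ctx := context.WithValue(context.Background(), requestIDKey, "req-42")
        ctx, cancel := context.WithTimeout(ctx, 50*time.Millisecond)
        defer cancel()

        step(ctx, "load user")
        step(ctx, "render page")
    }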
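
Finally, the two patterns named in lesson 6 can be sketched in a few lines of Go (illustrative code with hypothetical function names, not taken from the talk): forkJoin fans work out to one goroutine per input and waits for all of them, while square is a single stage of a channel-based pipeline.

    package main

    import (
        "fmt"
        "sync"
    )

    // forkJoin fans work out to one goroutine per input and joins on the results.
    func forkJoin(inputs []int, f func(int) int) []int {
        out := make([]int, len(inputs))
        var wg sync.WaitGroup
        for i, v := range inputs {
            wg.Add(1)
            go func(i, v int) {
                defer wg.Done()
                out[i] = f(v) // each goroutine writes only its own slot
            }(i, v)
        }
        wg.Wait()
        return out
    }

    // square is one pipeline stage: it reads from in, writes to its own output
    // channel, and closes that channel when the input is exhausted.
    func square(in <-chan int) <-chan int {
        out := make(chan int)
        go func() {
            defer close(out)
            for v := range in {
                out <- v * v
            }
        }()
        return out
    }

    func main() {
        // Fork-Join: the fan-out and the join are visible directly in the code.
        fmt.Println(forkJoin([]int{1, 2, 3, 4}, func(v int) int { return v * 10 }))

        // Pipeline: stages connected by channels, values flowing through in order.
        gen := make(chan int)
        go func() {
            defer close(gen)
            for v := 1; v <= 4; v++ {
                gen <- v
            }
        }()
        for v := range square(gen) {
            fmt.Println(v)
        }
    }

In both sketches the concurrency structure is visible in the code itself, which is the point of encoding structure with patterns: a reader can see the fan-out, the join and the stage boundaries directly.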

Duffy concludes by emphasizing that the future is distributed, and he expects to see even more inspiration drawn from the pioneers of distributed programming. Current programming languages already have great support for concurrency, and he expects them to gain increasingly strong support for distributed and cloud programming, with his seven key lessons built in.
