Carl Hewitt keynotes on the Actor Model and ActorScript, providing examples of using them for large-scale datacenters and IoT.
Giannakakis and Dalkitsis present how Shazam releases faster, more predictably, and with more features by using BDD and test automation, without slowing down or hindering the development process.
Neha Narkhede discusses how companies are using Apache Kafka and where it fits in the Big Data ecosystem.
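As a rough illustration of the publish side of that ecosystem, below is a minimal sketch of a Kafka producer using the standard Java client from Scala; the broker address, topic name, and payload are illustrative assumptions, not taken from the talk.

```scala
import java.util.Properties
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}

object EventProducer {
  def main(args: Array[String]): Unit = {
    val props = new Properties()
    props.put("bootstrap.servers", "localhost:9092")   // illustrative broker address
    props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer")
    props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer")

    val producer = new KafkaProducer[String, String](props)
    // Publish a single event to a hypothetical "page-views" topic.
    producer.send(new ProducerRecord[String, String]("page-views", "user-42", """{"url": "/home"}"""))
    producer.close()
  }
}
```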
Christopher Meiklejohn looks at applying two techniques together, deterministic data flow programming and conflict-free replicated data types, to create highly available and fault-tolerant systems.
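To make the second technique concrete, here is a minimal, self-contained sketch of one of the simplest CRDTs, a grow-only counter; it illustrates the general idea of coordination-free convergence and is not the speaker's implementation.

```scala
// A grow-only counter (G-Counter): each replica increments only its own slot,
// and merge takes the per-replica maximum, so concurrent updates on different
// replicas converge to the same value without coordination.
final case class GCounter(counts: Map[String, Long] = Map.empty) {
  def increment(replica: String, n: Long = 1): GCounter =
    copy(counts.updated(replica, counts.getOrElse(replica, 0L) + n))

  def value: Long = counts.values.sum

  def merge(other: GCounter): GCounter =
    GCounter((counts.keySet ++ other.counts.keySet).map { r =>
      r -> math.max(counts.getOrElse(r, 0L), other.counts.getOrElse(r, 0L))
    }.toMap)
}

object GCounterDemo extends App {
  val a = GCounter().increment("replica-a").increment("replica-a")
  val b = GCounter().increment("replica-b")
  // Merging in either order yields the same value: 3.
  assert(a.merge(b).value == b.merge(a).value)
  println(a.merge(b).value)
}
```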
Michael Brunton-Spall shows how DevOps-like patterns can be applied to microservices to give development teams more responsibility for their choices, and much more.
Phil Calcado shares the toolkit and strategy SoundCloud uses to keep its microservices explosion manageable, dealing with operations overhead, DevOps, breaking changes and asynchronous behaviors.
Diptanu Choudhury discusses the design of Netflix's distributed scheduler based on Mesos and Titan, focusing on bin packing algorithms, scaling clusters in and out, fault tolerance, and redundancy.
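As a rough illustration of the bin-packing idea behind such schedulers, below is a first-fit sketch in Scala; task sizes and capacity are single-dimensional and illustrative, whereas a production scheduler packs across several resource dimensions (CPU, memory, network).

```scala
// First-fit bin packing: place each task in the first machine with enough
// spare capacity, opening a new machine otherwise.
object FirstFit {
  def pack(tasks: Seq[Double], capacity: Double): Seq[Seq[Double]] =
    tasks.foldLeft(Vector.empty[Vector[Double]]) { (bins, task) =>
      bins.indexWhere(b => b.sum + task <= capacity) match {
        case -1 => bins :+ Vector(task)                 // open a new bin
        case i  => bins.updated(i, bins(i) :+ task)     // reuse an existing bin
      }
    }

  def main(args: Array[String]): Unit = {
    // Task sizes and capacity are illustrative.
    println(pack(Seq(0.5, 0.7, 0.3, 0.4, 0.2), capacity = 1.0))
  }
}
```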
Sandy Ryza aims to give a feel for what it is like to approach financial modeling with modern big data tools, using the Monte Carlo method for a basic VaR calculation with Spark.
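The sketch below shows the general shape of such a calculation with Spark: simulate many one-day portfolio returns in parallel and read the 95th-percentile loss off the simulated distribution. The return distribution, trial count, and portfolio value are illustrative assumptions, not the model from the talk, which would draw from fitted, correlated market factors.

```scala
import org.apache.spark.sql.SparkSession
import scala.util.Random

object MonteCarloVaR {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.appName("mc-var").master("local[*]").getOrCreate()
    val sc = spark.sparkContext

    val portfolioValue = 1e6   // illustrative portfolio value
    val trials = 100000

    // Simulate one-day losses; mean and volatility are illustrative assumptions.
    val losses = sc.parallelize(1 to trials).map { _ =>
      val dailyReturn = 0.0005 + 0.02 * Random.nextGaussian()
      -portfolioValue * dailyReturn   // positive values are losses
    }

    // 95% VaR: the loss exceeded in only 5% of simulated scenarios.
    val var95 = losses.sortBy(l => -l).take((trials * 0.05).toInt).last

    println(f"1-day 95%% VaR ~= $$$var95%.0f")
    spark.stop()
  }
}
```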
Kin Lane discusses the opportunities of deploying high-value, remixable APIs using Docker.
Anil Madhavapeddy explains how the OCaml module system enables the construction of large-scale OS software, and the portability benefits that result.
Piotr Kołaczkowski discusses how Spark was integrated with Cassandra, how the integration works in practice, and why it is better than going through an intermediate Hadoop layer.
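For context, the sketch below shows the typical usage pattern of the DataStax spark-cassandra-connector: reading a table straight into an RDD and writing results back without an intermediate Hadoop layer. Keyspace, table, and column names are hypothetical.

```scala
import org.apache.spark.{SparkConf, SparkContext}
import com.datastax.spark.connector._   // adds cassandraTable / saveToCassandra

object CassandraSparkJob {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setAppName("cassandra-spark")
      .set("spark.cassandra.connection.host", "127.0.0.1")
    val sc = new SparkContext(conf)

    // Each row comes back as a CassandraRow; columns are accessed by name.
    val purchases = sc.cassandraTable("shop", "purchases")
    val revenuePerUser = purchases
      .map(row => (row.getString("user_id"), row.getDouble("amount")))
      .reduceByKey(_ + _)

    // Write the aggregate back to a (hypothetical) results table.
    revenuePerUser.saveToCassandra("shop", "revenue_by_user",
      SomeColumns("user_id", "total"))

    sc.stop()
  }
}
```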
Dave Farley looks at a history littered with inefficient processes resulting in poor quality and failed projects, asking how we got here, what can be done, and what good really looks like.