Viktor Gamov covers in-memory technology, distributed data topologies, making in-memory storage reliable, scalable and durable, when to use NoSQL, and techniques for Big In-Memory Data.
Todd Montgomery challenges some of the common myths and misconceptions about high-performance streaming data, and takes a look at what is really possible today.
Sebastian von Conrad advises on reporting: capturing the right data at the right time, best practices, and cleaning up reporting debt in code bases.
Viktor Klang explores fast data streaming using Akka Streams - how to design robust transformation pipelines with built-in flow control that can take advantage of multiple cores and span networks.
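As a rough illustration of the kind of pipeline Klang describes, here is a minimal Akka Streams sketch in Scala. It assumes a recent Akka version in which the ActorSystem supplies the stream materializer; the stage names and values are purely illustrative. The key point is that backpressure between stages is built in, so a slow downstream stage automatically throttles the stages upstream of it.

```scala
import akka.actor.ActorSystem
import akka.stream.scaladsl.{Sink, Source}
import scala.concurrent.Future

object PipelineSketch extends App {
  // The ActorSystem provides the materializer that actually runs the stream.
  implicit val system: ActorSystem = ActorSystem("pipeline")
  import system.dispatcher

  // Source -> transformation stages -> Sink; flow control (backpressure)
  // between stages is built in, so no stage is ever overwhelmed.
  val total: Future[Int] =
    Source(1 to 1000)
      .map(_ * 2)                                    // synchronous transformation
      .mapAsync(parallelism = 4)(n => Future(n + 1)) // async stage, can use several cores
      .runWith(Sink.fold(0)(_ + _))                  // terminal stage produces a Future

  total.foreach { sum =>
    println(s"sum = $sum")
    system.terminate()
  }
}
```

The same Source/Flow/Sink shape also applies when a stage crosses a network boundary; backpressure then propagates across the connection rather than just between local stages.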
Joe Stein introduces Mesos and managing data services on it, presenting use cases for replacing classic solutions (like cold storage) with new functionality based on these technologies.
Sharad Murthy & Tony Ng present Pulsar, a real-time streaming system which can scale to millions of events per second with high availability and 4GL language support.
Matthew Renze introduces the R programming language and demonstrates how R can be used for exploratory data analysis.
Felienne Hermans presents various algorithms that outline the power of Excel, showing that spreadsheets are fit for TDD and rapid prototyping.
Neha Narkhede discusses how companies are using Apache Kafka and where it fits in the Big Data ecosystem.
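To make the Kafka item concrete, the following is a minimal producer sketch in Scala using Kafka's standard Java client. The broker address, topic name ("page-views") and payloads are hypothetical; a real deployment would also tune serialization, acknowledgements and partitioning. The idea is that many downstream systems (Hadoop, Storm, Spark, search indexes) can consume the same event log.

```scala
import java.util.Properties
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}

object ProducerSketch extends App {
  // Minimal producer configuration: broker address plus key/value serializers.
  val props = new Properties()
  props.put("bootstrap.servers", "localhost:9092")
  props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer")
  props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer")

  val producer = new KafkaProducer[String, String](props)

  // Publish a handful of events to a hypothetical "page-views" topic.
  (1 to 5).foreach { i =>
    producer.send(new ProducerRecord[String, String]("page-views", s"user-$i", s"viewed page $i"))
  }

  producer.close()
}
```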
Matt Zimmer discusses architectural patterns (service decomposition, stateless application tiers, and polyglot persistence) and migration strategies used by Netflix.
Vaclav Petricek discusses how to train models and how to architect and build a scalable system powered by Storm, Hadoop, Spark, Spring Boot and Vowpal Wabbit that meets SLAs measured in tens of milliseconds.
Christopher Meiklejohn looks at applying two techniques together, deterministic data flow programming and conflict-free replicated data types, to create highly available and fault-tolerant systems.
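For readers unfamiliar with conflict-free replicated data types, a minimal Scala sketch of a grow-only counter (G-Counter) shows the convergence property such systems rely on. This is an illustrative example, not Meiklejohn's implementation, and the replica identifiers are made up.

```scala
// A grow-only counter (G-Counter), one of the simplest CRDTs: each replica
// increments only its own slot, and merge takes the per-replica maximum,
// so concurrent updates on different replicas always converge.
final case class GCounter(counts: Map[String, Long] = Map.empty) {
  def increment(replica: String, by: Long = 1): GCounter =
    copy(counts = counts.updated(replica, counts.getOrElse(replica, 0L) + by))

  def value: Long = counts.values.sum

  // Merge is commutative, associative and idempotent: replicas can exchange
  // state in any order, any number of times, and still agree on the result.
  def merge(other: GCounter): GCounter =
    GCounter((counts.keySet ++ other.counts.keySet).map { k =>
      k -> math.max(counts.getOrElse(k, 0L), other.counts.getOrElse(k, 0L))
    }.toMap)
}

object GCounterDemo extends App {
  val a = GCounter().increment("replica-a").increment("replica-a")
  val b = GCounter().increment("replica-b")

  // Both merge orders yield the same converged value.
  println(a.merge(b).value) // 3
  println(b.merge(a).value) // 3
}
```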