Dean Wampler argues that Spark/Scala is a better data processing engine than MapReduce/Java because tools inspired by mathematics, such as functional programming (FP), are ideally suited to working with data.
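The canonical illustration of this argument is word count, where a few chained functional transformations replace pages of MapReduce boilerplate. The sketch below shows the idea in Spark's Scala API; the file paths are hypothetical, and this is the standard example rather than necessarily Wampler's own:

```scala
import org.apache.spark.{SparkConf, SparkContext}

object WordCount {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setAppName("WordCount").setMaster("local[*]"))

    // Each step is a pure transformation over an immutable dataset:
    // split lines into words, pair each word with 1, then sum by key.
    sc.textFile("input.txt")                 // hypothetical input path
      .flatMap(_.split("""\W+"""))
      .filter(_.nonEmpty)
      .map(word => (word.toLowerCase, 1))
      .reduceByKey(_ + _)
      .saveAsTextFile("counts")              // hypothetical output path

    sc.stop()
  }
}
```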
Matei Zaharia talks about the latest developments in Spark and shows examples of how it can combine processing algorithms to build rich data pipelines in just a few lines of code.
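As a rough illustration of what such a pipeline can look like (a hypothetical example, not taken from the talk), the Spark 1.x sketch below combines core RDD transformations with Spark SQL in a single short program:

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext

object PipelineSketch {
  // Hypothetical record type for a made-up web-visit log.
  case class Visit(page: String, durationSec: Int)

  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setAppName("Pipeline").setMaster("local[*]"))
    val sqlContext = new SQLContext(sc)
    import sqlContext.implicits._

    // Parse a hypothetical CSV log with core RDD operations...
    val visits = sc.textFile("visits.log")
      .map(_.split(","))
      .collect { case Array(page, dur) => Visit(page, dur.toInt) }
      .toDF()

    // ...then switch to SQL for the aggregation step of the pipeline.
    visits.registerTempTable("visits")
    sqlContext.sql(
      "SELECT page, AVG(durationSec) AS avgDuration FROM visits GROUP BY page")
      .show()

    sc.stop()
  }
}
```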
Bob Kelly presents case studies on how Platfora uses Hadoop to do analytics for several of their customers.
Jayesh Thakrar shows what can be done with irb, how to exploit JRuby-Java integration, and how the shell can be used in Hadoop streaming to perform complex, large-volume batch jobs.
Carlos Queiroz introduces the lambda architecture and shows how it can be implemented with Spring XD, GemFire XD, and Hadoop in a CDR (Call Detail Record) mining application.
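For readers new to the term: the lambda architecture answers queries by merging a periodically recomputed batch view of an immutable master dataset with an incrementally updated real-time view. The framework-free Scala sketch below illustrates the idea with call-minute totals; it is a toy stand-in for, not a sample of, the Spring XD/GemFire XD/Hadoop implementation discussed in the talk:

```scala
object LambdaSketch {
  type CallerId = String

  // Batch layer: periodically recomputed from the full master dataset
  // (the role Hadoop plays in the talk's architecture).
  def batchView(masterDataset: Seq[(CallerId, Int)]): Map[CallerId, Int] =
    masterDataset.groupBy(_._1).mapValues(_.map(_._2).sum).toMap

  // Speed layer: incremental totals for records that arrived since the
  // last batch run (the role of the in-memory/streaming components).
  def speedView(recentRecords: Seq[(CallerId, Int)]): Map[CallerId, Int] =
    recentRecords.groupBy(_._1).mapValues(_.map(_._2).sum).toMap

  // Serving layer: a query merges both views for a complete answer.
  def totalCallMinutes(caller: CallerId,
                       batch: Map[CallerId, Int],
                       speed: Map[CallerId, Int]): Int =
    batch.getOrElse(caller, 0) + speed.getOrElse(caller, 0)

  def main(args: Array[String]): Unit = {
    val batch = batchView(Seq("alice" -> 10, "alice" -> 5, "bob" -> 7))
    val speed = speedView(Seq("alice" -> 3))
    println(totalCallMinutes("alice", batch, speed)) // prints 18
  }
}
```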
The authors explain how the Pivotal team leveraged familiar SQL-based queries to analyze fine-grained cluster utilization using Spring XD.
Mohammad Quraishi presents how to implement a Big Data initiative, detailing preparation, goal evaluation, convincing executives, and post-implementation evaluation.
The speakers show how to provide a scalable runtime environment that is easily configured and assembled via a simple DSL.
Gabriel Gonzalez introduces TSAR (TimeSeries AggregatoR), a service for real-time event aggregation designed to deal with tens of billions of events per day at Twitter.
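To give a feel for what event aggregation means here, the Scala sketch below buckets events into one-minute windows and counts them per event name. It is a toy illustration of the general technique only, not TSAR's actual API; the event names are made up:

```scala
object EventAggregation {
  case class Event(name: String, timestampMs: Long)

  // Truncate a timestamp to the start of its one-minute bucket.
  def minuteBucket(ts: Long): Long = ts - (ts % 60000L)

  // Aggregate events into (eventName, minuteBucket) -> count.
  def aggregate(events: Seq[Event]): Map[(String, Long), Long] =
    events
      .groupBy(e => (e.name, minuteBucket(e.timestampMs)))
      .map { case (key, evs) => key -> evs.size.toLong }

  def main(args: Array[String]): Unit = {
    val events = Seq(
      Event("impression", 1000L),
      Event("impression", 2000L),
      Event("click", 61000L))
    aggregate(events).foreach(println)
  }
}
```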
Steve Hoffman and Ken Dallmeyer share their experience integrating Hadoop into the existing environment at Orbitz, creating a reusable data pipeline for ingesting, transporting, consuming, and storing data.
Claudia Perlich discusses privacy-preserving representations, robust high-dimensional modeling, large-scale automated learning systems, transfer learning, and fraud detection.
Paco Nathan keynotes on how Spark fits into the big data landscape, describing which other systems work with Spark and explaining why Spark will be needed going forward.