Marcel Kornacker presents a case study of an enterprise data warehouse (EDW) built on Impala running on 45 nodes, reducing processing time from hours to seconds and consolidating multiple data sets into a single view.
In this solutions track talk, sponsored by Cloudera, Eva Andreasson discusses how search and Hadoop can help with some of the industry's biggest challenges. She introduces the data hub concept.
In this solutions track talk, sponsored by MongoDB, Matt Asay discusses the differences between some of the NoSQL and SQL databases and when it makes sense to use Hadoop with a NoSQL solution.
Andy Piper describes some fundamentals of communicating reliably in an unreliable world and communication techniques used to build distributed data structures that can tolerate failures.
Omer Kilic provides an overview of heterogeneous computing, discussing how Erlang can help orchestrate different processing platforms, and introduces Erlang/ALE along with updates on Erlang Embedded.
Jamie Allen describes three patterns using Akka actors: handling a lack of guaranteed delivery, distributing tasks to worker actors (a minimal sketch of this pattern follows), and implementing distributed workers in an Akka cluster.
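As a rough illustration of the worker-distribution pattern, the sketch below fans tasks out to a pool of Akka (classic) actors through a round-robin router; the actor names, message types, and pool size are illustrative assumptions, not code from the talk.

```scala
import akka.actor.{Actor, ActorSystem, Props}
import akka.routing.RoundRobinPool

final case class Result(text: String)

// Worker: handles one task at a time and replies to whoever sent it
class Worker extends Actor {
  def receive = {
    case task: String => sender() ! Result(s"done: $task")
  }
}

// Master: routes incoming tasks to a round-robin pool of workers
class Master(workerCount: Int) extends Actor {
  private val workers =
    context.actorOf(RoundRobinPool(workerCount).props(Props[Worker]), "workers")

  def receive = {
    case task: String => workers ! task   // self is the implicit sender, so replies return here
    case Result(text) => println(text)
  }
}

object TaskDistribution extends App {
  val system = ActorSystem("worker-pool-demo")
  val master = system.actorOf(Props(new Master(4)), "master")
  (1 to 10).foreach(i => master ! s"task-$i")
  // Shutdown of the actor system is omitted in this sketch.
}
```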
Akmal B. Chaudhri introduces Apache™ Hadoop® 2.0 and Yet Another Resource Negotiator (YARN).
Eva Andreasson presents typical categories of problems commonly solved with Hadoop, along with concrete examples in each category.
Sean Owen provides examples of operational analytics projects, presenting a reference architecture and algorithm design choices for a successful implementation based on his experience with Oryx at Cloudera.
Jonas Bonér and Francesco Cesarini discuss the evolution of thinking about distributed and concurrent programming, the problems it has to solve, and the toolchains created along the way.
Heather Miller presents efforts to better support distributed programming in Scala, including a new fast pickling framework and Spores, composable pieces of mobile functional behaviour.
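For readers unfamiliar with Spores, the sketch below shows the spore syntax as proposed in SIP-21: every captured value is declared in an explicit header, so the closure's environment stays explicit and safe to serialize. The `scala.spores._` import and applying a spore like an ordinary function are assumptions based on the reference implementation, not details from the talk.

```scala
import scala.spores._

object SporeExample extends App {
  val greeting = "hello"

  // The spore header lists every captured value explicitly; referring to
  // any other enclosing variable inside the body is rejected at compile time.
  val s = spore {
    val captured = greeting
    (name: String) => s"$captured, $name"
  }

  println(s("world")) // "hello, world"
}
```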
Josh Wills discusses using Hadoop technologies to build real-time data analysis models with a focus on strategies for data integration, large-scale machine learning, and experimentation.