Dmitriy Ryaboy shares some of the lessons learned scaling Twitter’s analytics infrastructure: Data loves a schema, Make data sources discoverable, and Make costs visible.
Manvir Singh Grewal and Brandon Byars propose a business intelligence workflow, along with Lean principles and practices, for implementing a data warehouse and reporting capability.
Nathan Marz introduces Twitter Storm, outlining its architecture and use cases, and previews features planned for future releases.
Rob Lancaster explains the steps taken by Orbitz to bridge the gap between the data in their data warehouse and the data in Hadoop.
Bhaven Avalani and Yuri Finkelstein discuss four aspects of dealing with monitoring data at eBay: reducing data entropy, robust data distribution, metric extraction, and efficient storage.
Pavlo Baron presents a big data use case, a solution, and the tools used for collecting, mining, and storing large amounts of data without using Hadoop or Java.
Eric Brewer takes a look at NoSQL’s history and considers how current NoSQL solutions should evolve in order to address the full range of application needs.
Cliff Click discusses RAIN, H2O, the JMM, parallel computation, and Fork/Join in the context of performing big data analysis on large amounts of commodity hardware.
Eli Collins introduces Hadoop: why it came about, the benefits it provides, its history, its architecture, use cases, and applications.
Dhruba Borthakur discusses the different types of data used by Facebook and how they are stored, including graph data, semi-OLTP data, immutable photo data, and analytics data in Hadoop/Hive.
Serkan Piantino discusses news feeds at Facebook: the basics, the infrastructure used, how feed data is stored, and Centrifuge, a storage solution.
Adrian Colyer, Juergen Hoeller, Mark Pollack, and Graeme Rocher present SpringSource’s unifying component model, current developments around Big Data, and why the company is betting on Grails.