Robin Johnson discusses a scalable data-management model for games, along with the bottlenecks and challenges OMGPOP met while scaling to millions of users.
Volker Pacher explains why Shutl chose Neo4j when it needed to build a new API to support business growth, along with the challenges met during implementation and the solutions applied.
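As a taste of what graph modeling looks like in code, here is a minimal sketch against Neo4j's embedded Java API (3.x style; the API differs across releases). The delivery-domain labels and properties are invented for illustration and are not Shutl's actual model:

```java
import java.io.File;
import org.neo4j.graphdb.GraphDatabaseService;
import org.neo4j.graphdb.Label;
import org.neo4j.graphdb.Node;
import org.neo4j.graphdb.RelationshipType;
import org.neo4j.graphdb.Transaction;
import org.neo4j.graphdb.factory.GraphDatabaseFactory;

public class GraphSketch {
  public static void main(String[] args) {
    GraphDatabaseService db =
        new GraphDatabaseFactory().newEmbeddedDatabase(new File("data/graph.db"));
    try (Transaction tx = db.beginTx()) {
      // A store node and a courier node, linked by a typed relationship.
      // Labels and the relationship type are hypothetical domain names.
      Node store = db.createNode(Label.label("Store"));
      store.setProperty("name", "ExampleStore");
      Node courier = db.createNode(Label.label("Courier"));
      courier.setProperty("name", "ExampleCourier");
      store.createRelationshipTo(courier, RelationshipType.withName("SERVED_BY"));
      tx.success(); // mark the transaction for commit
    }
    db.shutdown();
  }
}
```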
Andrew Clegg surveys methods, with use cases, for performing data-set operations such as membership testing, distinct counting, and nearest-neighbour search more efficiently.
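One family of techniques behind efficient membership testing is probabilistic data structures. Below is a minimal Bloom filter sketch in Java, not a reconstruction of Clegg's material; the sizing constants and hash scheme are illustrative only:

```java
import java.util.BitSet;

/** Minimal Bloom filter: approximate membership with no false negatives. */
public class BloomFilter {
  private final BitSet bits;
  private final int size;
  private final int hashes;

  public BloomFilter(int size, int hashes) {
    this.bits = new BitSet(size);
    this.size = size;
    this.hashes = hashes;
  }

  // Derive k bit positions from two base hashes (double-hashing trick).
  private int position(String item, int i) {
    int h1 = item.hashCode();
    int h2 = Integer.rotateLeft(h1, 16) ^ 0x9E3779B9; // cheap second hash
    return Math.floorMod(h1 + i * h2, size);
  }

  public void add(String item) {
    for (int i = 0; i < hashes; i++) bits.set(position(item, i));
  }

  /** May return false positives, never false negatives. */
  public boolean mightContain(String item) {
    for (int i = 0; i < hashes; i++) {
      if (!bits.get(position(item, i))) return false;
    }
    return true;
  }

  public static void main(String[] args) {
    BloomFilter bf = new BloomFilter(1 << 16, 4);
    bf.add("alice");
    System.out.println(bf.mightContain("alice")); // true
    System.out.println(bf.mightContain("bob"));   // almost certainly false
  }
}
```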
Vaclav Petricek digs into eHarmony's large collection of human-relationship data for nuggets about romantic interactions.
Michael Kopp explains how to achieve performance at scale with Hadoop and how to analyze and optimize Hadoop jobs.
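To make the optimization angle concrete, here is the canonical shuffle-reducing trick in a standard Hadoop MapReduce word count: registering the reducer as a map-side combiner so partial sums travel over the network instead of raw pairs. This is a generic sketch, not Kopp's code; input and output paths come from the command line:

```java
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {
  public static class TokenMapper extends Mapper<Object, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();
    @Override protected void map(Object key, Text value, Context ctx)
        throws IOException, InterruptedException {
      for (String token : value.toString().split("\\s+")) {
        if (token.isEmpty()) continue;
        word.set(token);
        ctx.write(word, ONE); // one (word, 1) pair per occurrence
      }
    }
  }

  public static class SumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    @Override protected void reduce(Text key, Iterable<IntWritable> values, Context ctx)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable v : values) sum += v.get();
      ctx.write(key, new IntWritable(sum));
    }
  }

  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance(new Configuration(), "wordcount");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenMapper.class);
    job.setCombinerClass(SumReducer.class); // pre-aggregate on the map side: far less shuffle I/O
    job.setReducerClass(SumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```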
Craig Brozefsky presents the tradeoffs involved in moving to a purely relational SQL model instead of using an ORM, along with some of the tools built to facilitate the move.
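A minimal sketch of what "plain SQL instead of an ORM" can look like from Java via JDBC; the orders table, its columns, and the connection URL are invented for illustration and are not from Brozefsky's talk:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

public class PlainSqlExample {
  // Instead of an ORM call such as orm.find(Order.class).where("status", "open"),
  // the query is written directly in SQL, keeping it visible and tunable.
  public static List<Long> openOrderIds(Connection conn) throws SQLException {
    String sql = "SELECT id FROM orders WHERE status = ? ORDER BY placed_at";
    try (PreparedStatement ps = conn.prepareStatement(sql)) {
      ps.setString(1, "open");
      try (ResultSet rs = ps.executeQuery()) {
        List<Long> ids = new ArrayList<>();
        while (rs.next()) ids.add(rs.getLong("id"));
        return ids;
      }
    }
  }

  public static void main(String[] args) throws SQLException {
    // URL and credentials are placeholders for whatever database you use.
    try (Connection conn = DriverManager.getConnection(
        "jdbc:postgresql://localhost/shop", "app", "secret")) {
      System.out.println(openOrderIds(conn));
    }
  }
}
```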
John O’Hara discusses banking business and technology integration, covering low-latency high-frequency trading, in-memory caches, multi-terabyte time-series databases, complex contracts in NoSQL stores, and advanced systems integration.
Tormod Varhaugvik provides a design and rationale for an in-memory and Big Data architecture for live equity and risk assessment, using Tax Norway's new architecture as an example.
Nathan Marz shares lessons learned building Storm, an open-source, distributed, real-time computation system.
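For readers new to Storm, here is the canonical word-count topology as a minimal sketch: a spout emits sentences, one bolt splits them into words, and another keeps running counts. It is written against the org.apache.storm Java API; early releases contemporary with this talk used backtype.storm package names instead:

```java
import java.util.HashMap;
import java.util.Map;
import org.apache.storm.Config;
import org.apache.storm.LocalCluster;
import org.apache.storm.spout.SpoutOutputCollector;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.BasicOutputCollector;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.TopologyBuilder;
import org.apache.storm.topology.base.BaseBasicBolt;
import org.apache.storm.topology.base.BaseRichSpout;
import org.apache.storm.tuple.Fields;
import org.apache.storm.tuple.Tuple;
import org.apache.storm.tuple.Values;

public class WordCountTopology {
  /** Emits a fixed sentence forever; a real spout would read from a queue. */
  public static class SentenceSpout extends BaseRichSpout {
    private SpoutOutputCollector collector;
    @Override public void open(Map conf, TopologyContext ctx, SpoutOutputCollector c) { collector = c; }
    @Override public void nextTuple() { collector.emit(new Values("the quick brown fox")); }
    @Override public void declareOutputFields(OutputFieldsDeclarer d) { d.declare(new Fields("sentence")); }
  }

  /** Splits each sentence tuple into one tuple per word. */
  public static class SplitBolt extends BaseBasicBolt {
    @Override public void execute(Tuple t, BasicOutputCollector c) {
      for (String w : t.getString(0).split(" ")) c.emit(new Values(w));
    }
    @Override public void declareOutputFields(OutputFieldsDeclarer d) { d.declare(new Fields("word")); }
  }

  /** Keeps an in-memory running count per word. */
  public static class CountBolt extends BaseBasicBolt {
    private final Map<String, Integer> counts = new HashMap<>();
    @Override public void execute(Tuple t, BasicOutputCollector c) {
      String w = t.getString(0);
      int n = counts.merge(w, 1, Integer::sum);
      c.emit(new Values(w, n));
    }
    @Override public void declareOutputFields(OutputFieldsDeclarer d) { d.declare(new Fields("word", "count")); }
  }

  public static void main(String[] args) throws Exception {
    TopologyBuilder builder = new TopologyBuilder();
    builder.setSpout("sentences", new SentenceSpout(), 1);
    builder.setBolt("split", new SplitBolt(), 2).shuffleGrouping("sentences");
    // fieldsGrouping routes equal words to the same task, keeping per-word state consistent.
    builder.setBolt("count", new CountBolt(), 2).fieldsGrouping("split", new Fields("word"));

    LocalCluster cluster = new LocalCluster(); // in-process cluster for local testing
    cluster.submitTopology("word-count", new Config(), builder.createTopology());
    Thread.sleep(10_000);                      // let it run briefly
    cluster.shutdown();
  }
}
```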
Eli Collins overviews how to build new applications with Hadoop and how to integrate Hadoop with existing applications, providing an update on the state of the Hadoop ecosystem, its frameworks, and its APIs.
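One common integration point between an existing application and Hadoop is reading and writing HDFS directly through the FileSystem API. A minimal round-trip sketch, assuming Hadoop's configuration files are on the classpath; the path is a placeholder:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsRoundTrip {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration(); // picks up core-site.xml from the classpath
    FileSystem fs = FileSystem.get(conf);

    Path path = new Path("/tmp/example.txt"); // hypothetical path
    try (FSDataOutputStream out = fs.create(path, true)) { // true = overwrite
      out.write("hello from an existing app\n".getBytes(StandardCharsets.UTF_8));
    }
    try (BufferedReader in = new BufferedReader(
        new InputStreamReader(fs.open(path), StandardCharsets.UTF_8))) {
      System.out.println(in.readLine());
    }
  }
}
```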
Dean Wampler advocates using functional programming and its core operations to process large amounts of data, explaining why Java’s dominance in Hadoop is harming Big Data’s progress.
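The core operations in question are of the map/flatMap, filter, and reduce family. As a minimal illustration (in Java 8 streams here, though the same shape exists in any functional language), this expresses the word count from the MapReduce example above in a handful of functional operations:

```java
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import static java.util.function.Function.identity;
import static java.util.stream.Collectors.counting;
import static java.util.stream.Collectors.groupingBy;

public class FunctionalWordCount {
  public static void main(String[] args) {
    List<String> lines = Arrays.asList("big data big ideas", "data beats opinions");
    Map<String, Long> counts = lines.stream()
        .flatMap(line -> Arrays.stream(line.split("\\s+"))) // flatMap: one line -> many words
        .filter(w -> !w.isEmpty())                           // filter out empty tokens
        .collect(groupingBy(identity(), counting()));        // group and reduce per key
    System.out.println(counts);
  }
}
```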
Francine Bennett keynotes on using big data in the cloud.