Stefan Krawczyk discusses how Stitch Fix used the cloud to enable more than 80 data scientists to be productive and have easy access to data, covering prototyping, the algorithms used, keeping schemas in sync, etc.
Tom Gianos and Dan Weeks discuss Netflix's overall big data platform architecture, focusing on storage and orchestration and on how Netflix uses Parquet on AWS S3 as its data warehouse storage layer.
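For readers unfamiliar with the pattern, the sketch below shows the general idea of using partitioned Parquet files on S3 as a warehouse storage layer, written with PySpark. It is purely illustrative: the bucket name, table path, and schema are hypothetical and not taken from Netflix's platform.

```python
# Minimal PySpark sketch: Parquet on S3 as a warehouse storage layer.
# Bucket, path, and schema are hypothetical examples.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("parquet-on-s3-sketch").getOrCreate()

events = spark.createDataFrame(
    [("2016-06-01", "play", 42), ("2016-06-01", "pause", 17)],
    ["date", "event_type", "count"],
)

# Write partitioned, columnar Parquet files to S3; this location acts as the table.
events.write.mode("append").partitionBy("date").parquet("s3a://example-warehouse/events")

# Downstream jobs read the same location back and query it as columnar data.
spark.read.parquet("s3a://example-warehouse/events") \
    .groupBy("event_type").sum("count").show()
```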
Mike Olson presents several use cases in which big data is collected and analyzed to derive insights in the automotive, insurance, financial, and other sectors.
Scott Clark introduces Bayesian Global Optimization as an efficient way to optimize ML model parameters, explaining the underlying techniques and comparing it to other standard methods.
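As a rough illustration of the technique the talk covers (not the speaker's implementation), the sketch below runs a few iterations of Bayesian optimization on a toy 1-D objective: a Gaussian process surrogate is fit to past evaluations and an expected-improvement acquisition function picks the next point to try. The objective `f` stands in for an expensive model training/validation run.

```python
# Sketch of Bayesian optimization with a GP surrogate and expected improvement.
# Illustrative only; the objective f() is a stand-in for an expensive ML run.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

def f(x):                         # hypothetical expensive objective (1-D)
    return -np.sin(3 * x) - x ** 2 + 0.7 * x

X = np.array([[-0.9], [1.1]])     # a few initial evaluations
y = f(X).ravel()
candidates = np.linspace(-2, 2, 400).reshape(-1, 1)

for _ in range(10):
    gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)
    mu, sigma = gp.predict(candidates, return_std=True)
    best = y.max()
    # Expected improvement over the current best observation
    z = (mu - best) / np.maximum(sigma, 1e-9)
    ei = (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)
    x_next = candidates[np.argmax(ei)].reshape(1, -1)
    X = np.vstack([X, x_next])
    y = np.append(y, f(x_next).ravel())

print("best x:", X[np.argmax(y)], "best value:", y.max())
```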
Debraj GuhaThakurta discusses ML and data analysis processes in Spark using examples written in Python and R.
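To give a flavor of what an ML workflow in Spark looks like from Python, here is a minimal PySpark pipeline that assembles features and fits a logistic regression on a toy DataFrame. It is a generic sketch, not one of the speaker's examples.

```python
# Minimal PySpark ML sketch: feature assembly plus logistic regression.
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import LogisticRegression

spark = SparkSession.builder.appName("spark-ml-sketch").getOrCreate()

df = spark.createDataFrame(
    [(0.0, 1.2, 0.4), (1.0, 3.1, 2.2), (0.0, 0.7, 0.1), (1.0, 2.9, 1.8)],
    ["label", "f1", "f2"],
)

pipeline = Pipeline(stages=[
    VectorAssembler(inputCols=["f1", "f2"], outputCol="features"),
    LogisticRegression(featuresCol="features", labelCol="label"),
])

model = pipeline.fit(df)
model.transform(df).select("label", "prediction").show()
```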
Gil Tene presents the current state of Java SE and OpenJDK, the role Java plays in big data and infrastructure components, the JCP, the ecosystem, trends, etc.
Marius Bogoevici demonstrates how to create complex data processing pipelines that bridge big data and enterprise integration, and how to orchestrate them with Spring Cloud Data Flow.
Thomas Risberg discusses developing big data pipelines with Spring, focusing on the code needed, and covers how to set up a test environment both locally and in the cloud.
Uses of Big Data by a Non-Profit Engaged in Conducting Events Funded in Part by Third-Party Sponsors
Thomas Grilk discusses how a non-profit can efficiently use data from its customers/athletes in marketing and sponsorship activities while respecting their privacy and confidentiality.
Rajat Monga talks about why Google built TensorFlow, an open source software library for numerical computation using data flow graphs, and some of the technical challenges in building it.
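The tiny sketch below (using the TensorFlow 1.x graph API, contemporary with the talk) illustrates what "numerical computation using data flow graphs" means in practice: operations are nodes, tensors flow along edges, and a session executes the graph. It is an illustrative example, not material from the presentation.

```python
# TensorFlow 1.x-style sketch of a data flow graph: build the graph, then run it.
import tensorflow as tf

a = tf.placeholder(tf.float32, name="a")
b = tf.placeholder(tf.float32, name="b")
c = tf.add(tf.multiply(a, b), b, name="c")   # c = a * b + b, expressed as graph ops

with tf.Session() as sess:
    print(sess.run(c, feed_dict={a: 3.0, b: 4.0}))  # prints 16.0
```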
David Fisher discusses, through examples, how to build a data navigation language into visualizations, providing an intuitive user experience through subtle visual cueing.
Lawrence Chernin describes best practices and validation methods for dealing with large volumes of unstructured data, including a suite of unit tests covering the implementations of algorithmic equations.