Ian Bull introduces Node4J, explores its performance characteristics, and highlights the tools that help one develop, debug, and deploy Node.js applications running directly on the JVM.
Jim Webber talks about several kinds of fraud common in financial services and shows how each decomposes into a straightforward graph use case. He explores them using Neo4j and the Cypher query language.
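As a rough illustration of the kind of pattern matching such fraud queries rely on, here is a minimal sketch using the Neo4j Java driver. The node labels, property names, connection settings, and the query itself are illustrative assumptions, not taken from the talk; the pattern shown, two account holders sharing one phone number, is a commonly cited first-party fraud signal.

```java
import org.neo4j.driver.AuthTokens;
import org.neo4j.driver.Driver;
import org.neo4j.driver.GraphDatabase;
import org.neo4j.driver.Result;
import org.neo4j.driver.Session;

public class FraudRingQuery {
    public static void main(String[] args) {
        // Connection details and credentials are placeholders.
        try (Driver driver = GraphDatabase.driver("bolt://localhost:7687",
                AuthTokens.basic("neo4j", "password"));
             Session session = driver.session()) {

            // Hypothetical fraud-ring pattern: two distinct account holders
            // sharing the same phone number.
            Result result = session.run(
                "MATCH (a:AccountHolder)-[:HAS_PHONE]->(p:PhoneNumber)" +
                "      <-[:HAS_PHONE]-(b:AccountHolder) " +
                "WHERE a <> b " +
                "RETURN a.name AS first, b.name AS second, p.number AS shared");

            result.forEachRemaining(record ->
                System.out.printf("%s and %s share phone %s%n",
                    record.get("first").asString(),
                    record.get("second").asString(),
                    record.get("shared").asString()));
        }
    }
}
```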
Joseph Paulchell discusses the journey from batch-oriented database processes to a real-time data streaming solution, covering both the significant benefits achieved and the challenges encountered along the way.
Paul King reviews the features in Groovy that make it easy to work with relational databases, such as Groovy SQL and datasets, as well as with NoSQL databases such as MongoDB and Neo4j.
Christoph Strobl focuses on integrating search solutions such as Solr and Elasticsearch, as well as MongoDB's full-text search, into an application.
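As one concrete example of the last option, the sketch below runs MongoDB's full-text search through the official MongoDB Java driver; the database, collection, and field names are placeholders. Text search requires a text index on the queried fields, which the sketch creates first.

```java
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.model.Filters;
import com.mongodb.client.model.Indexes;
import org.bson.Document;

public class ProductSearch {
    public static void main(String[] args) {
        try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
            MongoCollection<Document> products =
                client.getDatabase("store").getCollection("products");

            // Full-text search only works against a text index.
            products.createIndex(Indexes.text("description"));

            // Return documents whose indexed text matches the query terms.
            for (Document doc : products.find(Filters.text("wireless headphones"))) {
                System.out.println(doc.toJson());
            }
        }
    }
}
```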
R Tsang shows how to create a Java-based microservice using Spring Boot, containerize it using Maven plugins, and deploy a fleet of microservices and dependent components, such as Redis, using Kubernetes.
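A minimal Spring Boot service along these lines might look like the following; the class name and endpoint are illustrative, not taken from the session. Once packaged, a Maven plugin (such as Spotify's docker-maven-plugin) can build the container image that Kubernetes then schedules and scales.

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@SpringBootApplication
@RestController
public class HelloServiceApplication {

    // A single HTTP endpoint; the same jar, once containerized, can be
    // replicated as pods behind a Kubernetes Service.
    @GetMapping("/hello")
    public String hello() {
        return "Hello from a containerized microservice";
    }

    public static void main(String[] args) {
        SpringApplication.run(HelloServiceApplication.class, args);
    }
}
```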
Bob Familiar introduces microservices, discussing their architecture and outlining cloud deployment scenarios, exemplified by a live demo on Microsoft Azure.
Yan Cui shares lessons learned using Neo4j to model the in-game economy of the "Here Be Monsters" game and automate the balancing process.
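To make the modeling idea concrete, here is a hedged sketch of a crafting-style economy expressed as a Neo4j graph; the labels, relationship types, and item names are invented for illustration and are not taken from "Here Be Monsters".

```java
import org.neo4j.driver.AuthTokens;
import org.neo4j.driver.Driver;
import org.neo4j.driver.GraphDatabase;
import org.neo4j.driver.Session;

public class EconomyModel {
    public static void main(String[] args) {
        try (Driver driver = GraphDatabase.driver("bolt://localhost:7687",
                AuthTokens.basic("neo4j", "password"));
             Session session = driver.session()) {

            // Illustrative schema: items are nodes, crafting dependencies
            // are relationships carrying the required quantity.
            session.run("CREATE (axe:Item {name: 'Axe'}), " +
                        "       (wood:Item {name: 'Wood'}), " +
                        "       (iron:Item {name: 'Iron'}), " +
                        "       (axe)-[:REQUIRES {qty: 2}]->(wood), " +
                        "       (axe)-[:REQUIRES {qty: 1}]->(iron)");

            // A balancing-style question: everything an item transitively
            // requires, answered with a variable-length path match.
            session.run("MATCH (i:Item {name: 'Axe'})-[:REQUIRES*]->(dep) " +
                        "RETURN DISTINCT dep.name")
                   .forEachRemaining(r ->
                        System.out.println(r.get("dep.name").asString()));
        }
    }
}
```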
Viktor Gamov covers in-memory technology: distributed data topologies, making in-memory storage reliable, scalable, and durable, when to use NoSQL, and techniques for handling big in-memory data.
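The abstract names no specific product; as an assumption, the sketch below uses Hazelcast, an open-source in-memory data grid, to show the core idea of a distributed map whose entries are partitioned and backed up across cluster members.

```java
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;

import java.util.Map;

public class InMemoryGridDemo {
    public static void main(String[] args) {
        // Each instance joins the cluster; data is partitioned across
        // members and backed up, which is how an in-memory data grid
        // stays reliable as it scales.
        HazelcastInstance member = Hazelcast.newHazelcastInstance();

        // A distributed map: reads and writes are routed to the partition
        // that owns the key, wherever that partition lives in the cluster.
        Map<String, Integer> scores = member.getMap("scores");
        scores.put("alice", 42);
        System.out.println("alice -> " + scores.get("alice"));

        member.shutdown();
    }
}
```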
Christopher Meiklejohn looks at applying two techniques together, deterministic dataflow programming and conflict-free replicated data types (CRDTs), to create highly available and fault-tolerant systems.
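CRDTs are easy to demonstrate from first principles. The following self-contained sketch implements a grow-only counter (G-Counter): each replica increments only its own slot, and merging takes the element-wise maximum, so all replicas converge to the same value no matter how, or how often, states are exchanged.

```java
import java.util.HashMap;
import java.util.Map;

/**
 * A from-scratch sketch of a grow-only counter (G-Counter), one of the
 * simplest conflict-free replicated data types.
 */
public class GCounter {
    private final String replicaId;
    private final Map<String, Long> counts = new HashMap<>();

    public GCounter(String replicaId) {
        this.replicaId = replicaId;
    }

    /** Each replica only ever increments its own slot. */
    public void increment() {
        counts.merge(replicaId, 1L, Long::sum);
    }

    /** Merge is commutative, associative, and idempotent: element-wise max. */
    public void merge(GCounter other) {
        other.counts.forEach((id, n) -> counts.merge(id, n, Math::max));
    }

    /** The counter's value is the sum over all replica slots. */
    public long value() {
        return counts.values().stream().mapToLong(Long::longValue).sum();
    }

    public static void main(String[] args) {
        GCounter a = new GCounter("a");
        GCounter b = new GCounter("b");
        a.increment(); a.increment();
        b.increment();
        a.merge(b);    // replicas exchange state in any order...
        b.merge(a);
        System.out.println(a.value() + " == " + b.value()); // ...and agree: 3 == 3
    }
}
```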
Howard Chu covers highlights of the LMDB design and discusses some of the internal improvements in slapd due to LMDB, as well as the impact of LMDB on other projects.
Piotr Kołaczkowski discusses how his team integrated Spark with Cassandra, how the integration works in practice, and why it is better than going through a Hadoop intermediate layer.
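For flavor, here is a hedged sketch using the DataStax Spark Cassandra Connector's Java API; the keyspace, table, and column names are placeholders. The point of the integration is that Spark workers read Cassandra token ranges directly, with data locality, rather than staging data through a Hadoop layer.

```java
import static com.datastax.spark.connector.japi.CassandraJavaUtil.javaFunctions;

import com.datastax.spark.connector.japi.CassandraRow;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

public class SparkCassandraCount {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf()
            .setAppName("spark-cassandra-sketch")
            .setMaster("local[*]")  // placeholder master for a local run
            .set("spark.cassandra.connection.host", "127.0.0.1");

        JavaSparkContext sc = new JavaSparkContext(conf);
        try {
            // The connector exposes a Cassandra table as an RDD; partitions
            // align with token ranges so reads stay local to replicas.
            JavaRDD<CassandraRow> users =
                javaFunctions(sc).cassandraTable("store", "users");

            long active = users
                .filter(row -> "active".equals(row.getString("status")))
                .count();
            System.out.println("active users: " + active);
        } finally {
            sc.stop();
        }
    }
}
```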