ASPIRE: Exploiting Asynchronous Parallelism in Iterative Algorithms using a Relaxed Consistency-based DSM
The authors present a relaxed memory consistency model and consistency protocol that tolerate communication latency and minimize the use of stale values, outperforming other models.
Terence Parr surveys the key practical advances in parsing from the last 25 years, compares algorithms, and separates the promises from reality.
Fangjin Yang, creator of Druid, shows how approximation algorithms can help systems scale out linearly and process huge amounts of data quickly with a small memory footprint.
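To give a flavor of the trade-off Yang describes, here is a minimal sketch of linear counting, one of the simpler cardinality-estimation algorithms: it estimates the number of distinct elements in a stream from a fixed-size bitmap instead of storing every element. The LinearCounter class, its parameters, and the sample data below are illustrative, not taken from the talk or from Druid's codebase.

```java
import java.util.BitSet;

// Linear counting: hash each element into one of m buckets and mark it.
// From the fraction of buckets still empty, the expected distinct count
// is -m * ln(v). Memory use is fixed at m bits regardless of stream size.
public class LinearCounter {
    private final int m;          // number of bits in the bitmap
    private final BitSet bitmap;

    public LinearCounter(int m) {
        this.m = m;
        this.bitmap = new BitSet(m);
    }

    public void add(Object element) {
        // Map the element's hash to one of m buckets and mark it seen.
        int bucket = Math.floorMod(element.hashCode(), m);
        bitmap.set(bucket);
    }

    public double estimate() {
        // v = fraction of buckets that are still zero.
        double zeroFraction = (double) (m - bitmap.cardinality()) / m;
        return -m * Math.log(zeroFraction);
    }

    public static void main(String[] args) {
        LinearCounter counter = new LinearCounter(1 << 16);
        for (int i = 0; i < 10_000; i++) {
            counter.add("user-" + (i % 5_000)); // 5,000 distinct values
        }
        System.out.printf("estimated distinct: %.0f%n", counter.estimate());
    }
}
```

The estimate is approximate, but the error shrinks as the bitmap grows, which is what lets such sketches trade a bounded amount of accuracy for constant memory and linear scale-out.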
Aish Fenton discusses Netflix's machine learning algorithms, including distributed neural networks on AWS GPUs, providing insight into offline experimentation and online A/B testing.
Roger Orr solves the same problem at different levels of complexity, trying to answer what complexity notation actually means and why it is important in practice.
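To make the notation concrete (this example is illustrative, not from Orr's talk): the same problem, membership in a sorted array, can be solved with an O(n) linear scan or an O(log n) binary search. The notation describes how the work grows with input size, not the absolute running time.

```java
// Two solutions to one problem, membership in a sorted array,
// with different complexity: O(n) scan vs O(log n) binary search.
public class ComplexityDemo {
    // O(n): may examine every element.
    static boolean linearContains(int[] sorted, int key) {
        for (int value : sorted) {
            if (value == key) return true;
        }
        return false;
    }

    // O(log n): halves the remaining search space at each step.
    static boolean binaryContains(int[] sorted, int key) {
        int lo = 0, hi = sorted.length - 1;
        while (lo <= hi) {
            int mid = (lo + hi) >>> 1;   // unsigned shift avoids overflow
            if (sorted[mid] == key) return true;
            if (sorted[mid] < key) lo = mid + 1;
            else hi = mid - 1;
        }
        return false;
    }

    public static void main(String[] args) {
        int[] data = new int[1_000_000];
        for (int i = 0; i < data.length; i++) data[i] = i * 2;
        // Same answer either way; doubling the array roughly doubles the
        // scan's work but adds only one step to the binary search.
        System.out.println(linearContains(data, 123_456));
        System.out.println(binaryContains(data, 123_456));
    }
}
```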
Mark Harwood shows how anomaly detection algorithms can spot card fraud, incorrectly tagged movies, and the UK's most unexpected hotspot for weapon possession.
SriSatish Ambati shares tips for in-memory algorithms, discussing I/O, S3 resets, muxers, primitive byte arrays, non-blocking structures, and fork/join queues.
Xavier Amatriain discusses the machine learning algorithms and architecture behind Netflix's recommender systems, as well as offline experiments and online A/B testing.
Martin Thompson discusses the need to measure what’s going on at the hardware level in order to create high-performing lock-free algorithms.
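As a flavor of what lock-free means in practice, here is a generic compare-and-swap (CAS) counter; it is an illustration of the technique, not code from Thompson's talk. No thread ever blocks: a thread whose CAS fails simply re-reads and retries, and under contention each retry is a round trip through the cache-coherence protocol, exactly the hardware-level cost that needs to be measured rather than guessed at.

```java
import java.util.concurrent.atomic.AtomicLong;

// A lock-free counter built on compare-and-swap.
public class CasCounter {
    private final AtomicLong value = new AtomicLong();

    public long increment() {
        long current, next;
        do {
            current = value.get();   // read the current value
            next = current + 1;
        } while (!value.compareAndSet(current, next)); // retry on a lost race
        return next;
    }

    public long get() {
        return value.get();
    }
}
```

In production code AtomicLong.incrementAndGet() performs the same CAS loop in a single call; the loop is written out here only to make the retry visible.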
Shawn Wallace takes a look at several problems, explaining how to evaluate possible solutions and compare them with each other.