Nikhil Garg talks about the machine learning problems that Quora must solve in order to keep quality high at massive scale.
David Xia explains how the Helios testing framework drives integration tests by spinning up self-contained environments during test runs, improving code quality and deployment success at Spotify.
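As a rough sketch of the style of test this enables (class and method names recalled from the helios-testing documentation and best treated as assumptions; the image name and port are placeholders):

    import com.spotify.helios.testing.TemporaryJob;
    import com.spotify.helios.testing.TemporaryJobs;
    import org.junit.Before;
    import org.junit.Rule;
    import org.junit.Test;

    public class ServiceIT {
        // JUnit rule that deploys Docker containers for the duration of the
        // test run and tears them down afterwards.
        @Rule
        public final TemporaryJobs temporaryJobs = TemporaryJobs.create();

        private TemporaryJob service;

        @Before
        public void setUp() {
            // "registry.example.com/my-service:latest" is a hypothetical image.
            service = temporaryJobs.job()
                    .image("registry.example.com/my-service:latest")
                    .port("http", 8080)
                    .deploy();
        }

        @Test
        public void respondsOverHttp() throws Exception {
            // Resolve the host/port Helios assigned and exercise the
            // containerized service here.
        }
    }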
Joe Duffy shares key experiences from building an entire operating system in a C# dialect, dealing robustly with errors and concurrency, with a focus on open-source C# and .NET.
Theo Schlossnagle talks about lessons learned building an always-on distributed time-series database with aggressive quality-of-service guarantees, and techniques for dealing with bad machines.
Bryan Helmkamp discusses insights from analyzing over one trillion lines of code daily: what makes a code metric valuable, when unmaintainable code may be preferable, and what gets in the way of maintaining code quality over time.
Aaron Bedra focuses on describing a system as a series of models that can be used to systematically and automatically generate input data and ensure that the code behaves as expected.
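A minimal sketch of the idea in plain Java, assuming a hypothetical BoundedStack as the system under test and java.util.ArrayDeque as the trusted model: the same generated operations drive both, and any divergence in observable behavior is a failure.

    import java.util.ArrayDeque;
    import java.util.Deque;
    import java.util.Objects;
    import java.util.Random;

    // Hypothetical system under test: a fixed-capacity stack.
    class BoundedStack {
        private final int[] items;
        private int size;
        BoundedStack(int capacity) { items = new int[capacity]; }
        boolean push(int x) {
            if (size == items.length) return false;  // reject when full
            items[size++] = x;
            return true;
        }
        Integer pop() { return size == 0 ? null : items[--size]; }
    }

    public class ModelBasedTest {
        public static void main(String[] args) {
            Random rnd = new Random(42);               // fixed seed for reproducibility
            BoundedStack sut = new BoundedStack(8);
            Deque<Integer> model = new ArrayDeque<>(); // trusted reference model

            // Drive the system and the model with the same generated inputs
            // and fail the moment their behavior diverges.
            for (int step = 0; step < 10_000; step++) {
                if (rnd.nextBoolean()) {
                    int x = rnd.nextInt(100);
                    boolean accepted = sut.push(x);
                    boolean expected = model.size() < 8;
                    if (accepted != expected)
                        throw new AssertionError("push disagreement at step " + step);
                    if (accepted) model.push(x);
                } else {
                    Integer got = sut.pop();
                    Integer want = model.isEmpty() ? null : model.pop();
                    if (!Objects.equals(got, want))
                        throw new AssertionError("pop disagreement at step " + step);
                }
            }
            System.out.println("model and system agreed on 10,000 random operations");
        }
    }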
Jonathan Graham takes a look at the Reactive Manifesto and Functional Programming from the perspective of the pharmaceutical industry and the quality of the processes used to produce drugs.
Owais Zahid talks about establishing quality requirements for products, including quality aspects in the Definition of Done, and communicating goals to the development team.
Liz Keogh takes a look at why experimentation underpins everything done in technology, and why it is necessary for being able to move quickly and change the right thing.
Hadi Michael explores the elements commonly found on developer portals, and identifies those that consistently contribute to superior developer experiences.
Roy van Rijn explains what mutation testing is and how it works, comparing several Java frameworks (PIT, Jester, Jumble) that enable automatic mutation testing in a continuous build.
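To illustrate the underlying idea with a hand-rolled, hypothetical example (not PIT's actual machinery): a mutation tool makes a small change to the production code, such as PIT's conditionals-boundary mutator turning >= into >, and a test suite that fails against the mutant is said to kill it; a surviving mutant exposes a gap in the tests.

    public class MutationDemo {
        // Original production code.
        static boolean isAdult(int age) { return age >= 18; }

        // The kind of mutant a conditionals-boundary mutator would generate:
        // ">=" becomes ">".
        static boolean isAdultMutant(int age) { return age > 18; }

        // A suite "kills" the mutant if some input makes the versions differ.
        static boolean mutantSurvives(int[] inputs) {
            for (int age : inputs)
                if (isAdult(age) != isAdultMutant(age)) return false; // killed
            return true; // survived: the tests cannot tell the versions apart
        }

        public static void main(String[] args) {
            int[] weakSuite = {5, 30};        // misses the boundary
            int[] strongSuite = {5, 18, 30};  // covers age == 18
            System.out.println("survives weak suite:   " + mutantSurvives(weakSuite));   // true
            System.out.println("survives strong suite: " + mutantSurvives(strongSuite)); // false
        }
    }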
Jerry Yoakum discusses how code profiling tools and techniques can be used to evaluate code for constructions and errors that are likely to cause problems and to highlight places in need of refactoring.
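A toy illustration of profiling-driven refactoring (hypothetical code; in practice a real profiler such as VisualVM would replace the hand timing, and a proper benchmark would include JVM warmup): measuring a naive O(n^2) string concatenation against its StringBuilder refactoring shows how measurement points at constructions worth fixing.

    import java.util.Arrays;

    public class HotspotDemo {
        // A construction profilers often flag: quadratic string concatenation,
        // since every += copies the whole accumulated string.
        static String joinNaive(String[] parts) {
            String out = "";
            for (String p : parts) out += p;
            return out;
        }

        // The refactoring a profile would justify: amortized linear appends.
        static String joinBuilder(String[] parts) {
            StringBuilder sb = new StringBuilder();
            for (String p : parts) sb.append(p);
            return sb.toString();
        }

        public static void main(String[] args) {
            String[] parts = new String[20_000];
            Arrays.fill(parts, "x");

            long t0 = System.nanoTime();
            joinNaive(parts);
            long t1 = System.nanoTime();
            joinBuilder(parts);
            long t2 = System.nanoTime();

            System.out.printf("naive:   %d ms%n", (t1 - t0) / 1_000_000);
            System.out.printf("builder: %d ms%n", (t2 - t1) / 1_000_000);
        }
    }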