PAPI 2018 Content on InfoQ
-
Would You Have Clicked on What We Would Have Recommended?
Peter B. Golbus describes recent work on the offline estimation of recommender system A/B tests using counterfactual reasoning techniques.
-
Open AI for Advertisers: Discover Your Audience
Saket Mengle discusses Open AI for Advertisers, a scalable, ROI-positive third-party audience discovery algorithm designed to improve customer acquisition effectiveness.
-
Facial Recognition Adversarial Attacks, Policy and Choice
Gretchen Greene demonstrates the technical feasibility of adversarial attacks on facial recognition, describes the use of facial recognition at airports and borders, and invites contributions to their open-source prototype.
-
Creating Robust Interpretable NLP Systems with Attention
Alexander Wolf introduces Attention, an interpretable type of neural network layer loosely based on attention in humans, explaining why and how it has been used to revolutionize NLP.
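For readers unfamiliar with the mechanism, a minimal sketch of a scaled dot-product attention layer (an illustrative example in NumPy, not code from the talk) could look like:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    # Q: (n_queries, d_k), K: (n_keys, d_k), V: (n_keys, d_v).
    # The attention weights form an explicit probability distribution over
    # inputs, which is what makes such layers comparatively interpretable.
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # (n_queries, n_keys)
    weights = softmax(scores, axis=-1)   # each row sums to 1
    return weights @ V, weights          # output: (n_queries, d_v)
```

Inspecting the returned `weights` matrix shows which inputs each query attended to, the property the talk highlights for interpretability.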
-
Monitoring AI with AI
Iskandar Sitdikov discusses a solution, tooling, and architecture that allow an ML engineer to be involved in the delivery phase and take ownership of the deployment and monitoring of ML pipelines.
-
Migrating ML from Research to Production
Conrado Silva Miranda shares his experience moving ML from research to production settings, presenting the major issues developers face and how to establish stable production systems for research models.
-
Unintended Consequences of AI — Panel Discussion
The panelists discuss some of the unexpected and unintended consequences AI might have.
-
Reasoning about Uncertainty at Scale
Max Livingston presents a case study of using Bayesian modelling and inference to directly model the behavior of aircraft arrivals and departures, focusing on the uncertainty in those predictions.
-
Designing Automated Pipelines for Unseen Custom Data
Kevin Moore discusses some challenges in designing automated machine learning pipelines that can deal with custom user data they have never seen before, as well as some of Salesforce’s solutions.
-
The Right Amount of Trust for AI
Chris Butler discusses the building blocks of AI from a product/design perspective, what trust is, how trust is gained and lost, and techniques one can use to build trusted AI products.
-
Machine Learning Interpretability in the GDPR Era
Gregory Antell explores the definition of interpretability in ML and its trade-offs with complexity and performance, and surveys the major methods used to interpret and explain ML models in the GDPR era.
-
Genetic Programming in the Real World: A Short Overview
Leonardo Trujillo gives an overview of how genetic programming (GP) can be used to solve ML tasks, intended as a starting point for applied researchers and developers.