Amazon Releases Fortuna, an Open-Source Library for ML Model Uncertainty Quantification

AWS has announced the general availability of Fortuna, an open-source toolkit for ML model uncertainty quantification. Fortuna's calibration methods, such as conformal prediction, can be applied to any trained neural network to produce calibrated uncertainty estimates.

There are numerous documented methods for estimating or calibrating the uncertainty of predictions; however, existing tools and libraries cover only a limited range and do not provide a comprehensive collection of methods. This imposes significant overhead and makes it difficult to incorporate uncertainty estimation into production systems. Fortuna bridges this gap by bringing well-known techniques together behind a standardized, user-friendly interface.

Fortuna offers three usage modes. The first, starting from uncertainty estimates, has minimal compatibility requirements and is the quickest way to interact with the library. This mode offers conformal prediction methods for both classification and regression.
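To illustrate the idea behind this mode, the sketch below implements split conformal prediction for regression in plain numpy. It is a generic illustration of the technique, not Fortuna's actual API: absolute residuals on a held-out calibration set are used to build prediction intervals with a chosen coverage level.

```python
import numpy as np

def conformal_interval(cal_preds, cal_targets, test_preds, error=0.05):
    """Split conformal prediction for regression: use absolute residuals
    on a held-out calibration set to form prediction intervals with
    marginal coverage of roughly 1 - error."""
    scores = np.abs(cal_targets - cal_preds)  # nonconformity scores
    n = len(scores)
    # conformal quantile with finite-sample correction
    level = np.ceil((n + 1) * (1 - error)) / n
    q = np.quantile(scores, min(level, 1.0), method="higher")
    return test_preds - q, test_preds + q

# toy calibration data and a single test prediction (assumed values)
lo, hi = conformal_interval(
    cal_preds=np.array([1.0, 2.0, 3.0, 4.0, 5.0]),
    cal_targets=np.array([1.1, 1.8, 3.2, 3.9, 5.3]),
    test_preds=np.array([2.5]),
    error=0.2,
)
```

The interval width is driven entirely by how well the model fit the calibration set, which is what makes the method model-agnostic.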

The second mode, starting from model outputs, assumes a model has already been trained in some framework and arrives at Fortuna with its outputs. This mode allows users to calibrate model outputs, estimate uncertainty, compute metrics, and obtain conformal sets.
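A common way to calibrate classifier outputs is temperature scaling, where a single scalar is fit on held-out data to soften or sharpen the predicted probabilities. The sketch below is a minimal, framework-free illustration of that idea (it does not use Fortuna's interface; the toy logits and labels are assumed for illustration).

```python
import numpy as np

def softmax(logits, temperature=1.0):
    # numerically stable softmax with temperature scaling
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def fit_temperature(logits, labels, grid=np.linspace(0.5, 5.0, 46)):
    """Pick the temperature minimizing negative log-likelihood
    on a held-out calibration set (simple grid search)."""
    def nll(t):
        probs = softmax(logits, t)
        return -np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-12))
    return min(grid, key=nll)

# toy held-out logits and labels; the second example is a confident mistake
cal_logits = np.array([[4.0, 0.0], [3.5, 0.5], [0.5, 3.0], [4.2, 0.2]])
cal_labels = np.array([0, 1, 1, 0])
t_star = fit_temperature(cal_logits, cal_labels)
calibrated = softmax(cal_logits, t_star)
```

Because the toy model is overconfident on a wrong prediction, the fitted temperature comes out above 1, spreading probability mass more evenly.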

The third mode, starting from Flax models, supports a number of Bayesian inference methods that can be applied to deep neural networks written in Flax. The library makes it easy to run benchmarks and enables practitioners to build robust and reliable AI solutions using advanced uncertainty quantification techniques.
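Whatever the specific inference method, Bayesian approaches share a common prediction step: average the predictive distributions obtained from several samples of the weight posterior, then read uncertainty off the averaged distribution. The sketch below illustrates that step with hypothetical logits standing in for posterior samples (the numbers are assumptions for illustration).

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# hypothetical logits from S = 3 posterior weight samples,
# for N = 1 input and C = 3 classes; shape (S, N, C)
posterior_logits = np.array([
    [[2.0, 0.5, 0.1]],
    [[1.2, 1.1, 0.2]],
    [[2.5, 0.3, 0.4]],
])

probs = softmax(posterior_logits)      # per-sample predictive probabilities
predictive = probs.mean(axis=0)        # Bayesian model average over samples
# predictive entropy as a simple total-uncertainty score
entropy = -(predictive * np.log(predictive)).sum(axis=-1)
```

Averaging over posterior samples typically yields better-calibrated probabilities than any single set of weights, and the entropy of the averaged distribution gives a per-input uncertainty score.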

Another widely used library for estimating model uncertainty is scikit-learn, an open-source machine learning library for Python; it includes functions for cross-validation and bootstrapping, as well as support for building ensemble models. TensorFlow Probability, built on top of TensorFlow, also provides tools for estimating uncertainty, including support for Bayesian neural networks and Monte Carlo methods. PyMC3 is a library for probabilistic programming that allows users to build Bayesian models through a high-level interface.
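The bootstrapping approach these libraries support can be sketched without any of them: refit the model on resampled data and use the spread of the resulting predictions as an uncertainty estimate. Below is a minimal numpy version on a toy linear-regression problem (all data and parameters are assumptions for illustration).

```python
import numpy as np

rng = np.random.default_rng(0)

# toy 1-D regression data: y = 2x + Gaussian noise
x = rng.uniform(0, 1, size=200)
y = 2.0 * x + rng.normal(0.0, 0.1, size=200)

def fit_line(xs, ys):
    # least-squares slope and intercept
    slope, intercept = np.polyfit(xs, ys, deg=1)
    return slope, intercept

# bootstrap ensemble: refit on resampled data, collect predictions
x_test = 0.5
preds = []
for _ in range(200):
    idx = rng.integers(0, len(x), size=len(x))
    slope, intercept = fit_line(x[idx], y[idx])
    preds.append(slope * x_test + intercept)
preds = np.array(preds)

mean, std = preds.mean(), preds.std()  # point estimate and its uncertainty
```

The standard deviation across bootstrap fits approximates the sampling uncertainty of the prediction, which is the same principle the library implementations automate at scale.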

Applications that involve critical decisions depend on an accurate assessment of predictive uncertainty. When predictions are uncertain, practitioners can judge the precision of model outputs, defer to human judgment, or determine whether a model can be used safely.
