
Microsoft's New Simulation Framework FLUTE Accelerates Federated Learning Algorithm Development


Microsoft Research has recently released Federated Learning Utilities and Tools for Experimentation (FLUTE), a new simulation framework that accelerates federated learning algorithm development. It opens a new direction in federated learning algorithm design by allowing engineers and researchers to design and simulate new algorithms before implementing and deploying them.

FLUTE is a simulation framework for running large-scale offline federated learning algorithms. The main goal of federated learning is to train complex machine-learning models over massive amounts of data without sharing that data in a centralized location. In this approach, the initial global model is sent to each device, which typically has limited computational capacity. The data on each device is used to compute small updates to the model, and these updates are transferred back to a central server for aggregation. The steps are repeated iteratively until the global model stops changing significantly.
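The round structure described above can be sketched in a few lines. This is a minimal, dependency-free illustration of federated averaging, not FLUTE's actual API; the function and parameter names are illustrative, and the weights are plain floats rather than real model tensors.

```python
def fedavg_round(global_weights, client_data, local_update):
    """One federated round: each client trains a private copy of the
    global weights on its own data; the server then averages the
    resulting weights, weighted by how much data each client holds."""
    states, sizes = [], []
    for data in client_data:
        local = dict(global_weights)             # client's copy of the model
        states.append(local_update(local, data)) # small local update
        sizes.append(len(data))
    total = sum(sizes)
    # Weighted average of client weights becomes the new global model.
    return {
        k: sum(s[k] * n / total for s, n in zip(states, sizes))
        for k in global_weights
    }
```

In a real system, `local_update` would run a few epochs of SGD on-device, and only the resulting weights (never the raw data) would leave the client.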

Despite the flexibility it brings, such as distributing the training workload, this approach poses challenges: managing many moving pieces of data during training and preserving privacy at each end node. FLUTE addresses these problems by allowing researchers and developers to test and experiment with constraints such as data privacy, communication strategy, and scalability before implementing and launching models in production. FLUTE is based on Python and PyTorch and integrates well with Azure ML.

The following figure shows FLUTE's high-level architecture. In the first step, the server sends the global model to the clients. Each client trains on its local data and sends the locally generated pseudo-gradient back to the server for aggregation. The server updates the global model, redistributes it to the clients, and these steps repeat until the model converges. The framework supports diverse federated learning configurations, including standardized implementations such as DGA and FedAvg.
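The server-side half of this exchange can be sketched as follows. This is a hedged illustration of the pseudo-gradient idea (the difference between the global weights and a client's locally trained weights), not FLUTE's implementation; the function names, the plain-float weights, and the uniform averaging with a server learning rate are all simplifying assumptions.

```python
def pseudo_grad(global_weights, trained_weights):
    """A client's pseudo-gradient: the direction the global model
    would move to match that client's locally trained weights."""
    return {k: global_weights[k] - trained_weights[k]
            for k in global_weights}

def server_step(global_weights, client_pseudo_grads, server_lr=1.0):
    """Server side of one round: uniformly average the clients'
    pseudo-gradients and take a gradient-style step with them."""
    n = len(client_pseudo_grads)
    agg = {k: sum(g[k] for g in client_pseudo_grads) / n
           for k in global_weights}
    return {k: global_weights[k] - server_lr * agg[k]
            for k in global_weights}
```

Treating the aggregate as a gradient is what lets the server plug in different optimizers or weighting schemes, which is the kind of design choice FLUTE is built to let researchers experiment with.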

Image Source: Microsoft Research Blog

FLUTE is publicly available on GitHub and comes with the basic tools needed to start experimenting. The community can consult the FLUTE architecture video, the documentation, and the published FLUTE paper. According to the blog post, Microsoft Research is working on algorithmic enhancements in optimization, support for additional communication protocols, and easier integration with Azure ML for future releases.

Other federated learning frameworks for distributed training and deployment of new ML models include TensorFlow Federated and IBM Federated Learning.
