
Federated Learning: Democratized and Personalized AI, with Privacy by Design

Written by Shambhavi Mathur and Satvik Tripathi

Federated learning is a machine learning technique that trains an algorithm across multiple decentralized edge devices or servers holding local data samples, without exchanging those samples. This approach contrasts with traditional centralized machine learning, in which all local datasets are uploaded to a single server, as well as with classical decentralized approaches that assume the local data samples are identically distributed.



Under the more traditional system, our data is sent to a central server where it is analyzed, and the relevant information is used to update the algorithm. Federated learning offers a solution that enhances user privacy, since most personal data remains on the person's own device. Algorithms train directly on user devices and send back only the relevant data summaries, rather than the data itself. This allows companies to improve their algorithms without needing to collect all of a user's data.


Federated learning allows multiple actors to build a common, robust machine learning model without sharing data, thus addressing critical issues such as data privacy, data security, data access rights, and access to heterogeneous data. Its applications span a range of industries, including defense, telecommunications, IoT, and pharmaceuticals.

The main difference between federated learning and distributed learning lies in the assumptions made about the properties of the local datasets: distributed learning originally aims to parallelize computing power, whereas federated learning originally aims to train on heterogeneous datasets. Although distributed learning is also intended to train a single model on multiple servers, a common underlying assumption is that the local datasets are independent and identically distributed (i.i.d.) and roughly the same size. Neither of these assumptions is made for federated learning; instead, the datasets are typically heterogeneous, and their sizes can span several orders of magnitude. Moreover, the clients involved in federated learning may be unreliable and more prone to failures or dropping out, since they usually rely on less powerful communication media (e.g., Wi-Fi) and battery-powered systems (e.g., smartphones and IoT devices), whereas in distributed learning the nodes are typically data centers with powerful computing capabilities connected by fast networks.
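
To make this contrast concrete, here is a minimal sketch in Python (using NumPy, with hypothetical dataset sizes) of how client data might be partitioned in the two settings: an equally sized, i.i.d. split as assumed in distributed learning, versus an unbalanced, heterogeneous split as is typical in federated learning.

    import numpy as np

    rng = np.random.default_rng(0)

    # A toy labeled dataset: 10,000 samples, 10 classes (hypothetical sizes).
    X = rng.normal(size=(10_000, 32))
    y = rng.integers(0, 10, size=10_000)

    def iid_split(X, y, num_clients):
        """Distributed-learning style: shuffle, then give every node an equal i.i.d. shard."""
        idx = rng.permutation(len(X))
        return [(X[s], y[s]) for s in np.array_split(idx, num_clients)]

    def non_iid_split(X, y, num_clients, classes_per_client=2):
        """Federated-learning style: each client sees only a few classes and an uneven amount of data."""
        clients = []
        for _ in range(num_clients):
            classes = rng.choice(10, size=classes_per_client, replace=False)
            idx = np.flatnonzero(np.isin(y, classes))
            # Unbalanced shard sizes, anywhere from roughly 1% to 100% of the matching samples.
            take = rng.integers(len(idx) // 100 + 1, len(idx) + 1)
            chosen = rng.choice(idx, size=take, replace=False)
            clients.append((X[chosen], y[chosen]))
        return clients

    print([len(x) for x, _ in iid_split(X, y, 5)])      # equal shard sizes
    print([len(x) for x, _ in non_iid_split(X, y, 5)])  # sizes varying by orders of magnitude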


Working:

Let me try to explain this in simpler terms.


Federated learning relies on an iterative process, broken up into an atomic set of client–server interactions known as a federated learning round, to ensure good task performance of the final, central machine learning model. Each round of this process involves transmitting the current global model state to participating nodes, training local models on those nodes to produce a set of potential model updates, and then aggregating these local updates into a single global update that is applied to the global model. In the methodology below, a central server performs the aggregation, while local nodes carry out local training according to the central server's instructions. However, other strategies achieve the same results without a central server, using gossip or consensus methodologies in a peer-to-peer approach.


The learning procedure can be summarized as follows (a minimal code sketch of one round follows the list):


1. Initialization: according to the server's inputs, a machine learning model (e.g., linear regression, neural network, boosting) is chosen to be trained on local nodes and initialized. Nodes are then activated and wait for the central server to assign them calculation tasks.


2. Client selection: a fraction of the local nodes is selected to start training on local data. The selected nodes acquire the current statistical model, while the others wait for the next federated round.


3. Configuration: the central server orders the selected nodes to train the model on their local data in a pre-specified fashion (e.g., for some number of mini-batch gradient descent updates).


4. Reporting: each selected node sends its local model to the server for aggregation. The central server aggregates the received models and sends the model updates back to the nodes. It also handles failures from disconnected nodes and lost model updates. The next federated round then begins, returning to the client selection phase.


5. Termination: once a predefined termination criterion is met (e.g., a maximum number of iterations is reached or the model accuracy exceeds a threshold), the central server aggregates the updates and finalizes the global model.
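
Below is a minimal sketch of this procedure in Python with NumPy, under simplifying assumptions: a plain linear-regression model, hypothetical client datasets and helper names, and aggregation by a sample-size-weighted average of client weights (the federated averaging pattern), which is one common choice rather than the only one.

    import numpy as np

    rng = np.random.default_rng(42)

    NUM_FEATURES = 8

    # Hypothetical clients: each holds a private (X, y) shard of a different size.
    clients = [
        (rng.normal(size=(n, NUM_FEATURES)), rng.normal(size=n))
        for n in (120, 45, 300, 80)
    ]

    def local_update(w_global, X, y, lr=0.01, epochs=5, batch_size=16):
        """Client side: start from the global weights and run mini-batch gradient descent locally."""
        w = w_global.copy()
        for _ in range(epochs):
            order = rng.permutation(len(X))
            for start in range(0, len(X), batch_size):
                batch = order[start:start + batch_size]
                Xb, yb = X[batch], y[batch]
                grad = 2 * Xb.T @ (Xb @ w - yb) / len(Xb)  # gradient of the mean squared error
                w -= lr * grad
        return w, len(X)  # only weights and a sample count leave the device, never the raw data

    def federated_round(w_global, clients, fraction=0.5):
        """Server side: select clients, collect their local models, aggregate by weighted average."""
        num_selected = max(1, int(fraction * len(clients)))
        selected = rng.choice(len(clients), size=num_selected, replace=False)
        updates = [local_update(w_global, *clients[i]) for i in selected]
        total = sum(n for _, n in updates)
        return sum(n * w for w, n in updates) / total

    # Initialization, then a fixed number of rounds as a simple termination criterion.
    w = np.zeros(NUM_FEATURES)
    for _ in range(20):
        w = federated_round(w, clients)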


Application:


Federated learning has a wide range of potential use cases, especially in situations where privacy concerns intersect with the need to improve algorithms. At the moment, the most prominent federated learning projects have been carried out on smartphones, but the same techniques can be applied to computers and IoT devices, such as autonomous vehicles.


Some of the uses are:


Google Gboard- The first real-world, large-scale deployment of federated learning was as part of Google's keyboard app, Gboard. The company aimed to use the technique to improve word suggestions without compromising users' privacy. Each new version of the algorithm offers an improved experience based on what it has learned from the process, and the cycle repeats itself. This gives users constantly improving keyboard suggestions without having to compromise their privacy.


Healthcare- Inside the healthcare industry, data privacy and security are incredibly complex. Many organizations hold significant amounts of sensitive and valuable patient data, which hackers are keen to obtain. The wealth of data in those repositories is tremendously useful for crimes such as identity theft and insurance fraud. Because of the large amounts of data and the enormous risks faced by the health sector, most countries have implemented strict legislation on how health data must be managed, such as the HIPAA regulations in the US. Owkin is working on a platform that uses federated learning to protect patient data in experiments that determine drug toxicity, predict disease evolution, and estimate survival rates for rare types of cancer. In 2018, Intel partnered with the Center for Biomedical Image Computing and Analytics at the University of Pennsylvania to demonstrate, as a proof of concept, how federated learning could be applied to medical imaging.


Autonomous vehicles- Federated learning could be highly useful for self-driving vehicles. Primarily, it could protect user data, as many people dislike the idea of having their travel records and other driving information uploaded to and analyzed on a central server. Federated learning could improve user privacy by updating the algorithms with only summaries of this data, rather than all the user information. Another key reason for adopting a federated approach is that it could reduce latency: traditional cloud learning involves large data transfers and a slower pace of learning, so federated learning could allow autonomous vehicles to react more rapidly and accurately, reducing accidents and boosting safety.


Complying with regulations- Federated learning can also help organizations improve their algorithmic models without exposing patient data or ending up on the wrong side of regulation. Laws such as Europe's General Data Protection Regulation (GDPR) and the United States' Health Insurance Portability and Accountability Act of 1996 (HIPAA) impose strict rules on individual data and how it can be used.


Limitations:


In addition to potential security issues, federated learning has some other limitations:


1) Federated learning is conducted over Wi-Fi or 4G, while traditional machine learning occurs in data centers. The bandwidth of Wi-Fi or 4G is orders of magnitude lower than that of the links between worker nodes and servers in those centers.


2) This lack of bandwidth could cause a bottleneck that increases latency and slows down the learning process (see the rough estimate after this list).


3) It requires significantly more local device power and memory to train the model.
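
As a rough illustration of that bandwidth gap, here is a back-of-the-envelope estimate in Python. The update size and link speeds are assumptions chosen only to show the orders of magnitude involved, not measured figures.

    # Back-of-the-envelope estimate of how long one model update takes to upload.
    # All link speeds and the update size below are illustrative assumptions.
    UPDATE_SIZE_MB = 100  # hypothetical size of one model update

    LINKS_MBPS = {
        "4G uplink": 10,
        "Wi-Fi uplink": 20,
        "data-center link": 10_000,  # 10 Gbps
    }

    for name, mbps in LINKS_MBPS.items():
        seconds = UPDATE_SIZE_MB * 8 / mbps  # megabytes -> megabits, divided by megabits per second
        print(f"{name:>17}: {seconds:8.2f} s")

    # Roughly 80 s over 4G and 40 s over Wi-Fi, versus well under a second in a data center.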


What will Enable the Growth of Federated Learning?


In the coming years, model building and computation on the edge will make considerable progress through federated learning and homomorphic encryption. With a billion-plus smartphones equipped with AI chips and significant processing capacity, many ML models will be available locally on these handheld devices over the next three to five years. By delivering heavy-duty analytics and computation through smartphones "on the edge," rather than through central computing facilities, the time taken to build data products such as hyper-personalized recommendation engines and e-commerce pricing engines can be significantly reduced. By taking advantage of accelerated application delivery and adapting more rapidly to changing customer behavior, businesses can use a distributed machine learning model-building system while also drastically reducing costs.


As a billion-plus smartphones equipped with AI chips and significant computing power ship over the next 3–5 years, federated learning applications will grow.


This paradigm shift is an exciting opportunity for machine learning practitioners and enthusiasts to democratize AI. It also opens new directions for building new tools and, above all, a modern approach to analyzing key ML problems.


At first, it will be a bit challenging to create, train, and test models without direct access to, or the ability to label, the raw data. However, our prediction is that federated technology will be revolutionary, especially in emerging markets such as India, where hyper-personalization and highly contextual recommendation engines will be key to driving, say, app adoption or e-commerce purchases. Federated learning holds a lot of potential, which will be harnessed in the near future.


Conclusion:


Federated learning is revolutionizing how machine learning models are trained. Google has just released its first production-level federated learning platform, which will spawn many federated learning-based applications such as on-device item ranking, next-word prediction, and content suggestion. In the future, machine learning models will be trained without relying on the computing resources owned by giant AI companies, and users will not need to trade their privacy for better services.


If you want to learn more about federated learning, here's a talk from Google I/O '19.



Copyright © 2019 by Techvik.