Technique paves the way for ‘federated learning’ AI training in wireless devices


Federated learning is a great tool for training artificial intelligence (AI) systems while protecting data privacy, but the amount of data traffic involved has made it unwieldy for systems that include wireless devices. A new technique uses compression to drastically reduce the size of data transmissions, creating additional opportunities for training AI on wireless technologies.

Federated learning is a form of machine learning involving multiple devices, called clients. Each client is trained on different data and develops its own model to perform a specific task. The clients then send their models to a centralized server. The centralized server combines these models to create a hybrid model, which performs better than any of the individual models on its own. The central server then sends this hybrid model back to each of the clients. The whole process is repeated, with each iteration leading to model updates that ultimately improve the system's performance.
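To make that workflow concrete, here is a minimal sketch of one federated-learning round using simple federated averaging; the toy linear model, learning rate, and data below are illustrative assumptions, not the authors' exact setup.

```python
# Hypothetical sketch of one federated-learning round (federated averaging).
import numpy as np

def client_update(global_weights, local_data, lr=0.1, epochs=1):
    """Each client refines the shared model on its own private data."""
    w = global_weights.copy()
    for _ in range(epochs):
        for x, y in local_data:
            # Gradient step for a toy linear model: prediction error times input.
            grad = (w @ x - y) * x
            w -= lr * grad
    return w

def server_aggregate(client_weights):
    """The server combines the client models into a single hybrid model."""
    return np.mean(client_weights, axis=0)

# One round: clients train locally, the server averages their models,
# and the hybrid model is sent back for the next round.
global_w = np.zeros(3)
clients = [
    [(np.array([1.0, 0.0, 1.0]), 2.0)],   # client 1's private data
    [(np.array([0.0, 1.0, 1.0]), 1.0)],   # client 2's private data
]
local_models = [client_update(global_w, data) for data in clients]
global_w = server_aggregate(local_models)
```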

“One of the benefits of federated learning is that it can allow the overall AI system to improve its performance without compromising the privacy of the data used to train the system,” says Chau-Wai Wong, co-author of a paper on the new technique and an assistant professor of electrical and computer engineering at North Carolina State University. “For example, you could draw on privileged patient data from multiple hospitals in order to improve diagnostic AI tools, without the hospitals having access to each other’s patient data.”

Many tasks could be enhanced by drawing on data stored on users’ personal devices, such as smartphones, and federated learning would be a way to use that data without compromising anyone’s privacy. However, there is a stumbling block: federated learning requires a great deal of communication between the clients and the centralized server during training, as they exchange model updates. In areas with limited bandwidth or heavy data traffic, that communication can clog wireless connections, slowing the process.

“We were trying to think of a way to speed up wireless communication for federated learning, and we took inspiration from the decades of work that’s been done on video compression to develop a better way to compress data,” explains Wong.

Specifically, the researchers developed a technique that allows clients to compress their data into much smaller packets. The packets are condensed before being sent and then reconstructed by the centralized server. The process is made possible by a series of algorithms developed by the research team. Using the technique, the researchers were able to condense the amount of wireless data sent by clients by as much as 99%. Data sent from the server to the clients is not compressed.
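As an illustration of the general idea of shrinking what clients transmit, the sketch below sparsifies and quantizes a model update before sending it and reconstructs it on the server. The specific compression steps shown are assumptions for demonstration, not the paper's actual predictive-coding scheme (summarized in the abstract below).

```python
# Illustrative (hypothetical) client-side compression and server-side reconstruction.
import numpy as np

def compress_update(update, keep_ratio=0.01, num_levels=256):
    """Keep only the largest-magnitude entries and quantize them to 8 bits."""
    k = max(1, int(keep_ratio * update.size))
    idx = np.argsort(np.abs(update))[-k:]            # indices of the top-k entries
    vals = update[idx]
    scale = float(np.max(np.abs(vals))) or 1.0       # avoid dividing by zero
    q = np.round(vals / scale * (num_levels // 2 - 1)).astype(np.int8)
    return idx, q, scale                             # the small packet sent to the server

def decompress_update(idx, q, scale, size, num_levels=256):
    """Server-side reconstruction of the full-size (sparse) update."""
    update = np.zeros(size)
    update[idx] = q.astype(np.float32) / (num_levels // 2 - 1) * scale
    return update

update = np.random.randn(10_000)                     # a client's raw model update
packet = compress_update(update)
reconstructed = decompress_update(*packet, size=update.size)
```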

“Our technique makes federated learning viable for wireless devices where available bandwidth is limited,” says Kai Yue, lead author of the paper and Ph.D. student at NC State. “For example, it could be used to improve the performance of many AI programs that interface with users, such as voice-enabled virtual assistants.”

The paper, “Communication-Efficient Federated Learning via Predictive Coding,” is published in the IEEE Journal of Selected Topics in Signal Processing. The paper was co-authored by Huaiyu Dai, a professor of electrical and computer engineering at NC State, and by Richeng Jin, a postdoctoral researcher at NC State. The work was done with partial support from the National Science Foundation, under grant number 1824518.

-shipman-

Note to Editors: The study abstract follows.

“Communication-Efficient Federated Learning via Predictive Coding”

Authors: Kai Yue, Richeng Jin, Chau-Wai Wong and Huaiyu Dai, North Carolina State University

Published: Jan. 13 in the IEEE Journal of Selected Topics in Signal Processing

DOI: 10.1109/JSTSP.2022.3142678

Abstract: Federated learning can enable remote workers to collaboratively train a shared machine learning model while keeping the training data local. In the case of wireless mobile devices, the communication overhead is a critical bottleneck due to limited power and bandwidth. Prior work has used various data compression tools such as quantization and sparsification to reduce the overhead. In this paper, we propose a predictive coding based compression scheme for federated learning. The scheme has shared prediction functions among all devices and allows each worker to transmit a compressed residual vector derived from the reference. In each communication round, we select the predictor and quantizer based on the rate-distortion cost, and further reduce the redundancy with entropy coding. Extensive simulations show that the communication cost can be reduced by up to 99% with even better learning performance when compared with other baselines.
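For a concrete picture of the scheme described above, the following is a rough, hedged sketch of the predictive-coding idea: client and server share a small set of predictors, the client transmits only a quantized residual, and the predictor/quantizer pair is chosen by a rate-distortion cost. The predictor forms, step sizes, and rate estimate here are illustrative assumptions, not the paper's exact design.

```python
# Hypothetical sketch of predictive coding for model updates.
# Both client and server keep the same history of reconstructed updates,
# so both sides can form the same prediction of the next update.
import numpy as np

PREDICTORS = {
    "zero": lambda history: np.zeros_like(history[-1]),        # no prediction
    "previous": lambda history: history[-1],                     # repeat the last update
    "linear": lambda history: 2 * history[-1] - history[-2],     # linear extrapolation
}

def quantize(residual, step):
    """Uniform scalar quantization of the residual with the given step size."""
    return np.round(residual / step) * step

def encode_update(update, history, steps=(0.01, 0.05), rate_weight=1.0):
    """Pick the predictor/quantizer pair with the lowest rate-distortion cost."""
    best = None
    for name, predictor in PREDICTORS.items():
        prediction = predictor(history)
        for step in steps:
            residual_q = quantize(update - prediction, step)
            distortion = np.sum((update - (prediction + residual_q)) ** 2)
            rate = np.count_nonzero(residual_q)   # crude stand-in for entropy-coded bits
            cost = distortion + rate_weight * rate
            if best is None or cost < best[0]:
                best = (cost, name, step, residual_q)
    _, name, step, residual_q = best
    return name, step, residual_q                 # what the client transmits

def decode_update(name, step, residual_q, history):
    """Server reconstructs the update from the shared predictor and the residual."""
    return PREDICTORS[name](history) + residual_q

# Example usage: history holds the last two (or more) reconstructed updates.
history = [np.zeros(1000), np.random.randn(1000) * 0.01]
new_update = history[-1] + np.random.randn(1000) * 0.005
name, step, residual_q = encode_update(new_update, history)
recovered = decode_update(name, step, residual_q, history)
```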
