Project Name | Stars | Downloads | Most Recent Commit | Total Releases | Latest Release | Open Issues | License | Language | Description
---|---|---|---|---|---|---|---|---|---
Gnnpapers | 13,979 | | 5 months ago | | | 10 | | | Must-read papers on graph neural networks (GNN)
Graphsage | 2,903 | | 7 months ago | | | 111 | other | Python | Representation learning on large graphs using stochastic graph convolutions.
Deepwalk | 2,513 | 7 | 5 months ago | 4 | April 29, 2018 | 42 | other | Python | DeepWalk - Deep Learning for Graphs
Literaturedl4graph | 2,456 | | 3 years ago | | | 4 | mit | | A comprehensive collection of recent papers on graph deep learning
Node2vec | 2,377 | | a year ago | | | 95 | mit | Scala |
Awesome Network Embedding | 2,218 | | 2 years ago | | | 3 | | | A curated list of network embedding techniques.
Sg2im | 1,211 | | a year ago | | | 11 | apache-2.0 | Python | Code for "Image Generation from Scene Graphs", Johnson et al, CVPR 2018
Awesome Self Supervised Gnn | 1,116 | | 23 days ago | | | | | Python | Papers about pretraining and self-supervised learning on Graph Neural Networks (GNN).
Graph Fraud Detection Papers | 957 | | 19 days ago | | | 1 | | | A curated list of fraud detection papers using graph information or graph neural networks
Gnn4nlp Papers | 824 | | a month ago | | | | mit | | A list of recent papers about Graph Neural Network methods applied in NLP areas.
Statistics: 🔥 code is available & stars >= 100 | ⭐️ citation >= 50 | 🎓 Top-tier venue
Abbreviations: kg.: Knowledge Graph | data.: dataset | surv.: survey
This section draws in part on the DBLP search engine and on the repositories Awesome-Federated-Learning-on-Graph-and-GNN-papers and Awesome-Federated-Machine-Learning.
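Most of the GNN papers listed below share one skeleton: every client trains a graph model on its private (sub)graph, and a server periodically aggregates the parameters, FedAvg-style. For orientation, here is a minimal, framework-agnostic sketch of that loop; the function names and the fake local gradient are illustrative assumptions, not the API of any listed paper.

```python
import numpy as np

def local_update(weights, lr=0.01, steps=5):
    """Stand-in for a client's local GNN training on its private subgraph.
    A real client would take gradient steps on a GNN loss; we fake the
    gradient here so the sketch runs end to end."""
    for _ in range(steps):
        weights = {k: v - lr * (0.1 * v) for k, v in weights.items()}
    return weights

def fed_avg(client_weights, client_sizes):
    """Server step: average client models, weighted by local data size."""
    total = sum(client_sizes)
    return {
        k: sum(w[k] * (n / total) for w, n in zip(client_weights, client_sizes))
        for k in client_weights[0]
    }

global_w = {"gnn_layer": np.ones((4, 4)), "readout": np.ones(4)}
client_sizes = [100, 300]  # e.g., number of nodes each client holds
for _ in range(3):  # communication rounds
    local_models = [local_update(dict(global_w)) for _ in client_sizes]
    global_w = fed_avg(local_models, client_sizes)
```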
Title | Affiliation | Venue | Year | TL;DR | Materials |
---|---|---|---|---|---|
Personalized Subgraph Federated Learning | KAIST | ICML 🎓 | 2023 | FED-PUB[^FED-PUB] | [PDF] |
Semi-decentralized Federated Ego Graph Learning for Recommendation | SUST | WWW 🎓 | 2023 | [PUB] [PDF] |
Federated Graph Neural Network for Fast Anomaly Detection in Controller Area Networks | ECUST | IEEE Trans. Inf. Forensics Secur. 🎓 | 2023 | [PUB] | |
Federated Learning Over Coupled Graphs | XJTU | IEEE Trans. Parallel Distributed Syst. 🎓 | 2023 | [PUB] [PDF] | |
HetVis: A Visual Analysis Approach for Identifying Data Heterogeneity in Horizontal Federated Learning | Nankai University | IEEE Trans. Vis. Comput. Graph. 🎓 | 2023 | HetVis[^HetVis] | [PUB] [PDF] |
Federated Learning on Non-IID Graphs via Structural Knowledge Sharing | UTS | AAAI 🎓 | 2023 | FedStar[^FedStar] | [PDF] [CODE] |
FedGS: Federated Graph-based Sampling with Arbitrary Client Availability | XMU | AAAI 🎓 | 2023 | FedGS[^FedGS] | [PDF] [CODE] |
Short-Term Traffic Flow Prediction Based on Graph Convolutional Networks and Federated Learning | ZUEL | IEEE Trans. Intell. Transp. Syst. | 2023 | [PUB] | |
Hyper-Graph Attention Based Federated Learning Methods for Use in Mental Health Detection. | HVL | IEEE J. Biomed. Health Informatics | 2023 | [PUB] | |
Federated Learning-Based Cross-Enterprise Recommendation With Graph Neural Networks | IEEE Trans. Ind. Informatics | 2023 | FL-GMT[^FL-GMT] | [PUB] | |
FedGR: Federated Graph Neural Network for Recommendation System | CUPT | Axioms | 2023 | [PUB] | |
GDFed: Dynamic Federated Learning for Heterogenous Device Using Graph Neural Network | KHU | ICOIN | 2023 | [PUB] [CODE] | |
Coordinated Scheduling and Decentralized Federated Learning Using Conflict Clustering Graphs in Fog-Assisted IoD Networks | UBC | IEEE Trans. Veh. Technol. | 2023 | [PUB] | |
FedWalk: Communication Efficient Federated Unsupervised Node Embedding with Differential Privacy | SJTU | KDD 🎓 | 2022 | FedWalk[^FedWalk] | [PUB] [PDF] |
FederatedScope-GNN: Towards a Unified, Comprehensive and Efficient Platform for Federated Graph Learning 🔥 | Alibaba | KDD (Best Paper Award) 🎓 | 2022 | FederatedScope-GNN[^FederatedScope-GNN] | [PDF] [CODE] [PUB] |
Deep Neural Network Fusion via Graph Matching with Applications to Model Ensemble and Federated Learning | SJTU | ICML 🎓 | 2022 | GAMF[^GAMF] | [PUB] [CODE] |
Meta-Learning Based Knowledge Extrapolation for Knowledge Graphs in the Federated Setting kg. | ZJU | IJCAI 🎓 | 2022 | MaKEr[^MaKEr] | [PUB] [PDF] [CODE] |
Personalized Federated Learning With a Graph | UTS | IJCAI 🎓 | 2022 | SFL[^SFL] | [PUB] [PDF] [CODE] |
Vertically Federated Graph Neural Network for Privacy-Preserving Node Classification | ZJU | IJCAI 🎓 | 2022 | VFGNN[^VFGNN] | [PUB] [PDF] |
SpreadGNN: Decentralized Multi-Task Federated Learning for Graph Neural Networks on Molecular Data | USC | AAAI 🎓 | 2022 | SpreadGNN[^SpreadGNN] | [PUB] [PDF] [CODE] |
FedGraph: Federated Graph Learning with Intelligent Sampling | UoA | TPDS 🎓 | 2022 | FedGraph[^FedGraph] | [PUB] [CODE] |
Federated Graph Machine Learning: A Survey of Concepts, Techniques, and Applications surv. | University of Virginia | SIGKDD Explor. | 2022 | FGML[^FGML] | [PUB] [PDF] |
Semantic Vectorization: Text- and Graph-Based Models. | IBM Research | Federated Learning | 2022 | [PUB] | |
GraphFL: A Federated Learning Framework for Semi-Supervised Node Classification on Graphs | IIT | ICDM | 2022 | GraphFL[^GraphFL] | [PUB] [PDF] |
More is Better (Mostly): On the Backdoor Attacks in Federated Graph Neural Networks | TU Delft | ACSAC | 2022 | [PUB] [PDF] | |
FedNI: Federated Graph Learning with Network Inpainting for Population-Based Disease Prediction | UESTC | TMI | 2022 | FedNI[^FedNI] | [PUB] [PDF] |
SemiGraphFL: Semi-supervised Graph Federated Learning for Graph Classification. | PKU | PPSN | 2022 | SemiGraphFL[^SemiGraphFL] | [PUB] |
Federated Spatio-Temporal Traffic Flow Prediction Based on Graph Convolutional Network | TJU | WCSP | 2022 | [PUB] | |
A federated graph neural network framework for privacy-preserving personalization | THU | Nature Communications | 2022 | FedPerGNN[^FedPerGNN] | [PUB] [CODE] |
Malicious Transaction Identification in Digital Currency via Federated Graph Deep Learning | BIT | INFOCOM Workshops | 2022 | GraphSniffer[^GraphSniffer] | [PUB] |
Efficient Federated Learning on Knowledge Graphs via Privacy-preserving Relation Embedding Aggregation kg. | Lehigh University | EMNLP | 2022 | FedR[^FedR] | [PUB] [PDF] [CODE] |
Power Allocation for Wireless Federated Learning using Graph Neural Networks | Rice University | ICASSP | 2022 | wirelessfl-pdgnet[^wirelessfl-pdgnet] | [PUB] [PDF] [CODE] |
Privacy-Preserving Federated Multi-Task Linear Regression: A One-Shot Linear Mixing Approach Inspired By Graph Regularization | UC | ICASSP | 2022 | multitask-fusion[^multitask-fusion] | [PUB] [PDF] [CODE] |
Graph-regularized federated learning with shareable side information | NWPU | Knowl. Based Syst. | 2022 | [PUB] | |
Federated knowledge graph completion via embedding-contrastive learning kg. | ZJU | Knowl. Based Syst. | 2022 | FedEC[^FedEC] | [PUB] |
Federated Graph Learning with Periodic Neighbour Sampling | HKU | IWQoS | 2022 | PNS-FGL[^PNS-FGL] | [PUB] |
FedGSL: Federated Graph Structure Learning for Local Subgraph Augmentation. | Big Data | 2022 | [PUB] | ||
Domain-Aware Federated Social Bot Detection with Multi-Relational Graph Neural Networks. | UCAS; CAS | IJCNN | 2022 | DA-MRG[^DA-MRG] | [PUB] |
A Privacy-Preserving Subgraph-Level Federated Graph Neural Network via Differential Privacy | Ping An Technology | KSEM | 2022 | DP-FedRec[^DP-FedRec] | [PUB] [PDF] |
Clustered Graph Federated Personalized Learning. | NTNU | IEEECONF | 2022 | [PUB] | |
FedGCN: Convergence and Communication Tradeoffs in Federated Training of Graph Convolutional Networks | CMU | CIKM Workshop (Oral) | 2022 | FedGCN[^FedGCN] | [PDF] [CODE] |
Investigating the Predictive Reproducibility of Federated Graph Neural Networks using Medical Datasets. | MICCAI Workshop | 2022 | [PDF] [CODE] | ||
Peer-to-Peer Variational Federated Learning Over Arbitrary Graphs | UCSD | Int. J. Bio Inspired Comput. | 2022 | [PUB] | |
Federated Multi-task Graph Learning | ZJU | ACM Trans. Intell. Syst. Technol. | 2022 | [PUB] | |
Graph-Based Traffic Forecasting via Communication-Efficient Federated Learning | SUSTech | WCNC | 2022 | CTFL[^CTFL] | [PUB] |
Federated meta-learning for spatial-temporal prediction | NEU | Neural Comput. Appl. | 2022 | FML-ST[^FML-ST] | [PUB] [CODE] |
BiG-Fed: Bilevel Optimization Enhanced Graph-Aided Federated Learning | NTU | IEEE Transactions on Big Data | 2022 | BiG-Fed[^BiG-Fed] | [PUB] [PDF] |
Leveraging Spanning Tree to Detect Colluding Attackers in Federated Learning | Missouri S&T | INFCOM Workshops | 2022 | FL-ST[^FL-ST] | [PUB] |
Federated learning of molecular properties with graph neural networks in a heterogeneous setting | University of Rochester | Patterns | 2022 | FLIT+[^FLITplus] | [PUB] [PDF] [CODE] |
Graph Federated Learning for CIoT Devices in Smart Home Applications | University of Toronto | IEEE Internet Things J. | 2022 | [PUB] [PDF] [CODE] | |
Multi-Level Federated Graph Learning and Self-Attention Based Personalized Wi-Fi Indoor Fingerprint Localization | SYSU | IEEE Commun. Lett. | 2022 | ML-FGL[^ML-FGL] | [PUB] |
Graph-Assisted Communication-Efficient Ensemble Federated Learning | UC | EUSIPCO | 2022 | [PUB] [PDF] | |
Decentralized Graph Federated Multitask Learning for Streaming Data | NTNU | CISS | 2022 | PSO-GFML[^PSO-GFML] | [PUB] |
Neural graph collaborative filtering for privacy preservation based on federated transfer learning | Electron. Libr. | 2022 | FTL-NGCF[^FTL-NGCF] | [PUB] | |
Dynamic Neural Graphs Based Federated Reptile for Semi-Supervised Multi-Tasking in Healthcare Applications | Oxford | JBHI | 2022 | DNG-FR[^DNG-FR] | [PUB] |
FedGCN: Federated Learning-Based Graph Convolutional Networks for Non-Euclidean Spatial Data | NUIST | Mathematics | 2022 | FedGCN-NES[^FedGCN-NES] | [PUB] |
Federated Dynamic Graph Neural Networks with Secure Aggregation for Video-based Distributed Surveillance | ND | ACM Trans. Intell. Syst. Technol. | 2022 | Feddy[^Feddy] | [PUB] [PDF] |
Device Sampling for Heterogeneous Federated Learning: Theory, Algorithms, and Implementation. | Purdue | INFOCOM 🎓 | 2021 | D2D-FedL[^D2D-FedL] | [PUB] [PDF] |
Federated Graph Classification over Non-IID Graphs | Emory | NeurIPS 🎓 | 2021 | GCFL[^GCFL] | [PUB] [PDF] [CODE] |
Subgraph Federated Learning with Missing Neighbor Generation | Emory; UBC; Lehigh University | NeurIPS 🎓 | 2021 | FedSage[^FedSage] | [PUB] [PDF] |
Cross-Node Federated Graph Neural Network for Spatio-Temporal Data Modeling | USC | KDD 🎓 | 2021 | CNFGNN[^CNFGNN] | [PUB] [PDF] [CODE] |
Differentially Private Federated Knowledge Graphs Embedding kg. | BUAA | CIKM | 2021 | FKGE[^FKGE] | [PUB] [PDF] [CODE] |
Decentralized Federated Graph Neural Networks | Blue Elephant Tech | IJCAI Workshop | 2021 | D-FedGNN[^D-FedGNN] | [PDF] |
FedSGC: Federated Simple Graph Convolution for Node Classification | HKUST | IJCAI Workshop | 2021 | FedSGC[^FedSGC] | [PDF] |
FL-DISCO: Federated Generative Adversarial Network for Graph-based Molecule Drug Discovery: Special Session Paper | UNM | ICCAD | 2021 | FL-DISCO[^FL-DISCO] | [PUB] |
FASTGNN: A Topological Information Protected Federated Learning Approach for Traffic Speed Forecasting | UTS | IEEE Trans. Ind. Informatics | 2021 | FASTGNN[^FASTGNN] | [PUB] |
DAG-FL: Direct Acyclic Graph-based Blockchain Empowers On-Device Federated Learning | BUPT; UESTC | ICC | 2021 | DAG-FL[^DAG-FL] | [PUB] [PDF] |
FedE: Embedding Knowledge Graphs in Federated Setting kg. | ZJU | IJCKG | 2021 | FedE[^FedE] | [PUB] [PDF] [CODE] |
Federated Knowledge Graph Embeddings with Heterogeneous Data kg. | TJU | CCKS | 2021 | FKE[^FKE] | [PUB] |
A Graph Federated Architecture with Privacy Preserving Learning | EPFL | SPAWC | 2021 | GFL[^GFL] | [PUB] [PDF] |
Federated Social Recommendation with Graph Neural Network | UIC | ACM TIST | 2021 | FeSoG[^FeSoG] | [PUB] [PDF] [CODE] |
FedGraphNN: A Federated Learning System and Benchmark for Graph Neural Networks 🔥 surv. | USC | ICLR Workshop / MLSys Workshop | 2021 | FedGraphNN[^FedGraphNN] | [PDF] [CODE] |
A Federated Multigraph Integration Approach for Connectional Brain Template Learning | Istanbul Technical University | MICCAI Workshop | 2021 | Fed-CBT[^Fed-CBT] | [PUB] [CODE] |
Cluster-driven Graph Federated Learning over Multiple Domains | Politecnico di Torino | CVPR Workshop | 2021 | FedCG-MD[^FedCG-MD] | [PDF] |
FedGNN: Federated Graph Neural Network for Privacy-Preserving Recommendation | THU | ICML workshop | 2021 | FedGNN[^FedGNN] | [PDF] |
Decentralized federated learning of deep neural networks on non-iid data | RISE; Chalmers University of Technology | ICML workshop | 2021 | DFL-PENS[^DFL-PENS] | [PDF] [CODE] |
Glint: Decentralized Federated Graph Learning with Traffic Throttling and Flow Scheduling | The University of Aizu | IWQoS | 2021 | Glint[^Glint] | [PUB] |
Federated Graph Neural Network for Cross-graph Node Classification | BUPT | CCIS | 2021 | FGNN[^FGNN] | [PUB] |
GraFeHTy: Graph Neural Network using Federated Learning for Human Activity Recognition | Lead Data Scientist Ericsson Digital Services | ICMLA | 2021 | GraFeHTy[^GraFeHTy] | [PUB] |
Distributed Training of Graph Convolutional Networks | Sapienza University of Rome | TSIPN | 2021 | D-GCN[^D-GCN] | [PUB] [PDF] |
Decentralized federated learning for electronic health records | UMN | NeurIPS Workshop / CISS | 2020 | FL-DSGD[^FL-DSGD] | [PUB] [PDF] |
ASFGNN: Automated Separated-Federated Graph Neural Network | Ant Group | PPNA | 2020 | ASFGNN[^ASFGNN] | [PUB] [PDF] |
Decentralized federated learning via sgd over wireless d2d networks | SZU | SPAWC | 2020 | DSGD[^DSGD] | [PUB] [PDF] |
SGNN: A Graph Neural Network Based Federated Learning Approach by Hiding Structure | SDU | BigData | 2019 | SGNN[^SGNN] | [PUB] [PDF] |
Towards Federated Graph Learning for Collaborative Financial Crimes Detection | IBM | NeurIPS Workshop | 2019 | FGL-DFC[^FGL-DFC] | [PDF] |
Federated learning of predictive models from federated Electronic Health Records ⭐️ | BU | Int. J. Medical Informatics | 2018 | cPDS[^cPDS] | [PUB] |
GLASU: A Communication-Efficient Algorithm for Federated Learning with Vertically Distributed Graph Data | preprint | 2023 | [PDF] | ||
Vertical Federated Graph Neural Network for Recommender System | preprint | 2023 | [PDF] [CODE] | ||
Lumos: Heterogeneity-aware Federated Graph Learning over Decentralized Devices | preprint | 2023 | [PDF] | ||
Securing IoT Communication using Physical Sensor Data - Graph Layer Security with Federated Multi-Agent Deep Reinforcement Learning. | preprint | 2023 | [PDF] | ||
Heterogeneous Federated Knowledge Graph Embedding Learning and Unlearning. | preprint | 2023 | [PDF] | ||
Uplink Scheduling in Federated Learning: an Importance-Aware Approach via Graph Representation Learning | preprint | 2023 | [PDF] | ||
Graph Federated Learning with Hidden Representation Sharing | UCLA | preprint | 2022 | GFL-APPNP[^GFL-APPNP] | [PDF] |
FedRule: Federated Rule Recommendation System with Graph Neural Networks | CMU | preprint | 2022 | FedRule[^FedRule] | [PDF] |
M3FGM:a node masking and multi-granularity message passing-based federated graph model for spatial-temporal data prediction | Xidian University | preprint | 2022 | M3FGM[^M3FGM] | [PDF] |
Federated Graph-based Networks with Shared Embedding | BUCEA | preprint | 2022 | [PDF] | |
Privacy-preserving Decentralized Federated Learning over Time-varying Communication Graph | Lancaster University | preprint | 2022 | [PDF] | |
Heterogeneous Federated Learning on a Graph. | preprint | 2022 | [PDF] | ||
FedEgo: Privacy-preserving Personalized Federated Graph Learning with Ego-graphs | SYSU | preprint | 2022 | FedEgo[^FedEgo] | [PDF] [CODE] |
Federated Graph Contrastive Learning | UTS | preprint | 2022 | FGCL[^FGCL] | [PDF] |
FD-GATDR: A Federated-Decentralized-Learning Graph Attention Network for Doctor Recommendation Using EHR | preprint | 2022 | FD-GATDR[^FD-GATDR] | [PDF] | |
Privacy-preserving Graph Analytics: Secure Generation and Federated Learning | preprint | 2022 | [PDF] | ||
Federated Graph Attention Network for Rumor Detection | preprint | 2022 | [PDF] [CODE] | ||
FedRel: An Adaptive Federated Relevance Framework for Spatial Temporal Graph Learning | preprint | 2022 | [PDF] | ||
Privatized Graph Federated Learning | preprint | 2022 | [PDF] | ||
Federated Graph Neural Networks: Overview, Techniques and Challenges surv. | preprint | 2022 | [PDF] | |
Decentralized event-triggered federated learning with heterogeneous communication thresholds. | preprint | 2022 | EF-HC[^EF-HC] | [PDF] | |
Federated Learning with Heterogeneous Architectures using Graph HyperNetworks | preprint | 2022 | [PDF] | ||
STFL: A Temporal-Spatial Federated Learning Framework for Graph Neural Networks | preprint | 2021 | [PDF] [CODE] | ||
Graph-Fraudster: Adversarial Attacks on Graph Neural Network Based Vertical Federated Learning | preprint | 2021 | [PDF] [CODE] | ||
PPSGCN: A Privacy-Preserving Subgraph Sampling Based Distributed GCN Training Method | preprint | 2021 | PPSGCN[^PPSGCN] | [PDF] | |
Leveraging a Federation of Knowledge Graphs to Improve Faceted Search in Digital Libraries kg. | preprint | 2021 | [PDF] | |
Federated Myopic Community Detection with One-shot Communication | preprint | 2021 | [PDF] | ||
Federated Graph Learning -- A Position Paper surv. | preprint | 2021 | [PDF] | |
A Vertical Federated Learning Framework for Graph Convolutional Network | preprint | 2021 | FedVGCN[^FedVGCN] | [PDF] | |
FedGL: Federated Graph Learning Framework with Global Self-Supervision | preprint | 2021 | FedGL[^FedGL] | [PDF] | |
FL-AGCNS: Federated Learning Framework for Automatic Graph Convolutional Network Search | preprint | 2021 | FL-AGCNS[^FL-AGCNS] | [PDF] | |
Towards On-Device Federated Learning: A Direct Acyclic Graph-based Blockchain Approach | preprint | 2021 | [PDF] | ||
A New Look and Convergence Rate of Federated Multi-Task Learning with Laplacian Regularization | preprint | 2021 | dFedU[^dFedU] | [PDF] [CODE] | |
Improving Federated Relational Data Modeling via Basis Alignment and Weight Penalty kg. | preprint | 2020 | FedAlign-KG[^FedAlign-KG] | [PDF] |
GraphFederator: Federated Visual Analysis for Multi-party Graphs | preprint | 2020 | [PDF] | ||
Privacy-Preserving Graph Neural Network for Node Classification | preprint | 2020 | [PDF] | ||
Peer-to-peer federated learning on graphs | UC | preprint | 2019 | P2P-FLG[^P2P-FLG] | [PDF] |
This section draws on the DBLP search engine.
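A primitive that recurs in the tree-boosting papers below (notably the XGBoost-style systems such as SecureBoost and MP-FedXGB) is split finding over aggregated gradient histograms: each party buckets its local gradients per candidate split, only the histograms (in the real systems, encrypted or secret-shared) leave the party, and the coordinator scores candidate splits on their sum. Here is a toy plaintext sketch with a deliberately simplified gain formula; all names are illustrative assumptions, not any listed system's protocol.

```python
import numpy as np

def local_histogram(feature, grad, cuts):
    """Client side: sum local gradients per feature bucket. Only this
    histogram (not the raw rows) would be sent to the coordinator."""
    hist = np.zeros(len(cuts) + 1)
    for i, g in zip(np.digitize(feature, cuts), grad):
        hist[i] += g
    return hist

rng = np.random.default_rng(0)
cuts = np.linspace(0.1, 0.9, num=9)  # candidate split thresholds
clients = [(rng.random(50), rng.normal(size=50)) for _ in range(2)]

# Coordinator side: add up the client histograms, then score each candidate
# split with a simplified gain, (left gradient sum)^2 + (right gradient sum)^2.
total = sum(local_histogram(x, g, cuts) for x, g in clients)
left = np.cumsum(total)[:-1]
right = total.sum() - left
gain = left**2 + right**2
print(f"best split: feature <= {cuts[int(np.argmax(gain))]:.2f}")
```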
Title | Affiliation | Venue | Year | TL;DR | Materials |
---|---|---|---|---|---|
SGBoost: An Efficient and Privacy-Preserving Vertical Federated Tree Boosting Framework | Xidian University | IEEE Trans. Inf. Forensics Secur. 🎓 | 2023 | SGBoost[^SGBoost] | [PUB] [CODE] |
Incentive-boosted Federated Crowdsourcing | SDU | AAAI 🎓 | 2023 | iFedCrowd[^iFedCrowd] | [PDF] |
Explaining predictions and attacks in federated learning via random forests | Universitat Rovira i Virgili | Appl. Intell. | 2023 | [PUB] [CODE] | |
Boosting Accuracy of Differentially Private Federated Learning in Industrial IoT With Sparse Responses | IEEE Trans. Ind. Informatics | 2023 | [PUB] | ||
HT-Fed-GAN: Federated Generative Model for Decentralized Tabular Data Synthesis | HIT | Entropy | 2023 | [PUB] | |
Blockchain-Based Swarm Learning for the Mitigation of Gradient Leakage in Federated Learning | University of Udine | IEEE Access | 2023 | [PUB] | |
OpBoost: A Vertical Federated Tree Boosting Framework Based on Order-Preserving Desensitization | ZJU | Proc. VLDB Endow. 🎓 | 2022 | OpBoost[^OpBoost] | [PUB] [PDF] [CODE] |
RevFRF: Enabling Cross-Domain Random Forest Training With Revocable Federated Learning | Xidian University | IEEE Trans. Dependable Secur. Comput. 🎓 | 2022 | RevFRF[^RevFRF] | [PUB] [PDF] |
A Tree-based Model Averaging Approach for Personalized Treatment Effect Estimation from Heterogeneous Data Sources | University of Pittsburgh | ICML 🎓 | 2022 | [PUB] [PDF] [CODE] | |
Federated Boosted Decision Trees with Differential Privacy | University of Warwick | CCS 🎓 | 2022 | [PUB] [PDF] [CODE] | |
Federated Functional Gradient Boosting | University of Pennsylvania | AISTATS 🎓 | 2022 | FFGB[^FFGB] | [PUB] [PDF] [CODE] |
Tree-Based Models for Federated Learning Systems. | IBM Research | Federated Learning | 2022 | [PUB] | |
Federated Learning for Tabular Data using TabNet: A Vehicular Use-Case | ICCP | 2022 | [PUB] | ||
Federated Learning for Tabular Data: Exploring Potential Risk to Privacy | Newcastle University | ISSRE | 2022 | [PDF] | |
Federated Random Forests can improve local performance of predictive models for various healthcare applications | University of Marburg | Bioinform. | 2022 | FRF[^FRF] | [PUB] [CODE] |
Boosting the Federation: Cross-Silo Federated Learning without Gradient Descent. | unito | IJCNN | 2022 | federation-boosting[^federation-boosting] | [PUB] [CODE] |
Federated Forest | JD | TBD | 2022 | FF[^FF] | [PUB] [PDF] |
Neural gradient boosting in federated learning for hemodynamic instability prediction: towards a distributed and scalable deep learning-based solution. | AMIA | 2022 | [PUB] | ||
Fed-GBM: a cost-effective federated gradient boosting tree for non-intrusive load monitoring | The University of Sydney | e-Energy | 2022 | Fed-GBM[^Fed-GBM] | [PUB] |
Verifiable Privacy-Preserving Scheme Based on Vertical Federated Random Forest | NUST | IEEE Internet Things J. | 2022 | VPRF[^VPRF] | [PUB] |
Statistical Detection of Adversarial examples in Blockchain-based Federated Forest In-vehicle Network Intrusion Detection Systems | CNU | IEEE Access | 2022 | [PUB] [PDF] | |
BOFRF: A Novel Boosting-Based Federated Random Forest Algorithm on Horizontally Partitioned Data | METU | IEEE Access | 2022 | BOFRF[^BOFRF] | [PUB] |
eFL-Boost: Efficient Federated Learning for Gradient Boosting Decision Trees | kobe-u | IEEE Access | 2022 | eFL-Boost[^eFL-Boost] | [PUB] |
An Efficient Learning Framework for Federated XGBoost Using Secret Sharing and Distributed Optimization | TJU | ACM Trans. Intell. Syst. Technol. | 2022 | MP-FedXGB[^MP-FedXGB] | [PUB] [PDF] [CODE] |
An optional splitting extraction based gain-AUPRC balanced strategy in federated XGBoost for mitigating imbalanced credit card fraud detection | Swinburne University of Technology | Int. J. Bio Inspired Comput. | 2022 | [PUB] | |
Random Forest Based on Federated Learning for Intrusion Detection | Malardalen University | AIAI | 2022 | FL-RF[^FL-RF] | [PUB] |
Cross-silo federated learning based decision trees | ETH Zürich | SAC | 2022 | FL-DT[^FL-DT] | [PUB] |
Leveraging Spanning Tree to Detect Colluding Attackers in Federated Learning | Missouri S&T | INFCOM Workshops | 2022 | FL-ST[^FL-ST] | [PUB] |
VF2Boost: Very Fast Vertical Federated Gradient Boosting for Cross-Enterprise Learning | PKU | SIGMOD 🎓 | 2021 | VF2Boost[^VF2Boost] | [PUB] |
Boosting with Multiple Sources | NeurIPS 🎓 | 2021 | [PUB] | |
SecureBoost: A Lossless Federated Learning Framework 🔥 | UC | IEEE Intell. Syst. | 2021 | SecureBoost[^SecureBoost] | [PUB] [PDF] [SLIDE] [CODE] [UC] |
A Blockchain-Based Federated Forest for SDN-Enabled In-Vehicle Network Intrusion Detection System | CNU | IEEE Access | 2021 | BFF-IDS[^BFF-IDS] | [PUB] |
Research on privacy protection of multi source data based on improved gbdt federated ensemble method with different metrics | NCUT | Phys. Commun. | 2021 | I-GBDT[^I-GBDT] | [PUB] |
Fed-EINI: An Efficient and Interpretable Inference Framework for Decision Tree Ensembles in Vertical Federated Learning | UCAS; CAS | IEEE BigData | 2021 | Fed-EINI[^Fed-EINI] | [PUB] [PDF] |
Gradient Boosting Forest: a Two-Stage Ensemble Method Enabling Federated Learning of GBDTs | THU | ICONIP | 2021 | GBF-Cen[^GBF-Cen] | [PUB] |
A k-Anonymised Federated Learning Framework with Decision Trees | Umeå University | DPM/CBT @ESORICS | 2021 | KA-FL[^KA-FL] | [PUB] |
AF-DNDF: Asynchronous Federated Learning of Deep Neural Decision Forests | Chalmers | SEAA | 2021 | AF-DNDF[^AF-DNDF] | [PUB] |
Compression Boosts Differentially Private Federated Learning | Univ. Grenoble Alpes | EuroS&P | 2021 | CB-DP[^CB-DP] | [PUB] [PDF] |
Practical Federated Gradient Boosting Decision Trees | NUS; UWA | AAAI 🎓 | 2020 | SimFL[^SimFL] | [PUB] [PDF] [CODE] |
Privacy Preserving Vertical Federated Learning for Tree-based Models | NUS | VLDB 🎓 | 2020 | Pivot-DT[^Pivot-DT] | [PUB] [PDF] [VIDEO] [CODE] |
Boosting Privately: Federated Extreme Gradient Boosting for Mobile Crowdsensing | Xidian University | ICDCS | 2020 | FEDXGB[^FEDXGB] | [PUB] [PDF] |
FedCluster: Boosting the Convergence of Federated Learning via Cluster-Cycling | University of Utah | IEEE BigData | 2020 | FedCluster[^FedCluster] | [PUB] [PDF] |
New Approaches to Federated XGBoost Learning for Privacy-Preserving Data Analysis | kobe-u | ICONIP | 2020 | FL-XGBoost[^FL-XGBoost] | [PUB] |
Bandwidth Slicing to Boost Federated Learning Over Passive Optical Networks | Chalmers University of Technology | IEEE Communications Letters | 2020 | FL-PON[^FL-PON] | [PUB] |
DFedForest: Decentralized Federated Forest | UFRJ | Blockchain | 2020 | DFedForest[^DFedForest] | [PUB] |
Straggler Remission for Federated Learning via Decentralized Redundant Cayley Tree | Stevens Institute of Technology | LATINCOM | 2020 | DRC-tree[^DRC-tree] | [PUB] |
Federated Soft Gradient Boosting Machine for Streaming Data | Sinovation Ventures AI Institute | Federated Learning | 2020 | Fed-sGBM[^Fed-sGBM] | [PUB] |
Federated Learning of Deep Neural Decision Forests | Fraunhofer-Chalmers Centre | LOD | 2019 | FL-DNDF[^FL-DNDF] | [PUB] |
GTV: Generating Tabular Data via Vertical Federated Learning | preprint | 2023 | [PDF] | ||
Federated Survival Forests | preprint | 2023 | [PDF] | ||
Fed-TDA: Federated Tabular Data Augmentation on Non-IID Data | HIT | preprint | 2022 | Fed-TDA[^Fed-TDA] | [PDF] |
Data Leakage in Tabular Federated Learning | ETH Zurich | preprint | 2022 | TabLeak[^TabLeak] | [PDF] |
Boost Decentralized Federated Learning in Vehicular Networks by Diversifying Data Sources | preprint | 2022 | [PDF] | ||
Federated XGBoost on Sample-Wise Non-IID Data | preprint | 2022 | [PDF] | ||
Hercules: Boosting the Performance of Privacy-preserving Federated Learning | preprint | 2022 | Hercules[^Hercules] | [PDF] | |
FedGBF: An efficient vertical federated learning framework via gradient boosting and bagging | preprint | 2022 | FedGBF[^FedGBF] | [PDF] | |
A Fair and Efficient Hybrid Federated Learning Framework based on XGBoost for Distributed Power Prediction. | THU | preprint | 2022 | HFL-XGBoost[^HFL-XGBoost] | [PDF] |
An Efficient and Robust System for Vertically Federated Random Forest | preprint | 2022 | [PDF] | ||
Efficient Batch Homomorphic Encryption for Vertically Federated XGBoost. | BUAA | preprint | 2021 | EBHE-VFXGB[^EBHE-VFXGB] | [PDF] |
Guess what? You can boost Federated Learning for free | preprint | 2021 | [PDF] | ||
SecureBoost+ : A High Performance Gradient Boosting Tree Framework for Large Scale Vertical Federated Learning 🔥 | preprint | 2021 | SecureBoost+[^SecureBoostplus] | [PDF] [CODE] | |
Fed-TGAN: Federated Learning Framework for Synthesizing Tabular Data | preprint | 2021 | Fed-TGAN[^Fed-TGAN] | [PDF] | |
FedXGBoost: Privacy-Preserving XGBoost for Federated Learning | TUM | preprint | 2021 | FedXGBoost[^FedXGBoost] | [PDF] |
Adaptive Histogram-Based Gradient Boosted Trees for Federated Learning | preprint | 2020 | [PDF] | ||
FederBoost: Private Federated Learning for GBDT | ZJU | preprint | 2020 | FederBoost[^FederBoost] | [PDF] |
Privacy Preserving Text Recognition with Gradient-Boosting for Federated Learning | preprint | 2020 | [PDF] [CODE] | ||
Cloud-based Federated Boosting for Mobile Crowdsensing | preprint | 2020 | [ARXIV] | ||
Federated Extra-Trees with Privacy Preserving | preprint | 2020 | [PDF] | ||
Bandwidth Slicing to Boost Federated Learning in Edge Computing | preprint | 2019 | [PDF] | ||
Revocable Federated Learning: A Benchmark of Federated Forest | preprint | 2019 | [PDF] | ||
The Tradeoff Between Privacy and Accuracy in Anomaly Detection Using Federated XGBoost | CUHK | preprint | 2019 | F-XGBoost[^F-XGBoost] | [PDF] [CODE] |
This section lists federated learning papers published in Nature (and its sub-journals), Cell, Science (and Science Advances), and PNAS, collected via the WOS search engine.
Title | Affiliation | Venue | Year | TL;DR | Materials |
---|---|---|---|---|---|
Federated machine learning in data-protection-compliant research | University of Hamburg | Nat. Mach. Intell. (Comment) | 2023 | [PUB] |
Federated learning for predicting histological response to neoadjuvant chemotherapy in triple-negative breast cancer | Owkin | Nat. Med. | 2023 | [PUB] [CODE] | |
Federated learning enables big data for rare cancer boundary detection | University of Pennsylvania | Nat. Commun. | 2022 | [PUB] [PDF] [CODE] | |
Federated learning and Indigenous genomic data sovereignty | Hugging Face | Nat. Mach. Intell. (Comment) | 2022 | [PUB] | |
Federated disentangled representation learning for unsupervised brain anomaly detection | TUM | Nat. Mach. Intell. | 2022 | FedDis[^FedDis] | [PUB] [PDF] [CODE] |
Shifting machine learning for healthcare from development to deployment and from models to data | Nat. Biomed. Eng. (Review Article) | 2022 | FL-healthy[^FL-healthy] | [PUB] | |
A federated graph neural network framework for privacy-preserving personalization | THU | Nat. Commun. | 2022 | FedPerGNN[^FedPerGNN] | [PUB] [CODE] [] |
Communication-efficient federated learning via knowledge distillation | Nat. Commun. | 2022 | [PUB] [PDF] [CODE] | ||
Lead federated neuromorphic learning for wireless edge artificial intelligence | Nat. Commun. | 2022 | [PUB] [CODE] | |
Advancing COVID-19 diagnosis with privacy-preserving collaboration in artificial intelligence | Nat. Mach. Intell. | 2021 | [PUB] [PDF] [CODE] | ||
Federated learning for predicting clinical outcomes in patients with COVID-19 | Nat. Med. | 2021 | [PUB] [CODE] | ||
Adversarial interference and its mitigations in privacy-preserving collaborative machine learning | Nat. Mach. Intell. (Perspective) | 2021 | [PUB] | |
Swarm Learning for decentralized and confidential clinical machine learning ⭐️ | Nature 🎓 | 2021 | [PUB] [CODE] [SOFTWARE] | |
End-to-end privacy preserving deep learning on multi-institutional medical imaging | Nat. Mach. Intell. | 2021 | [PUB] [CODE] | |
Communication-efficient federated learning | PNAS | 2021 | [PUB] [CODE] | |
Breaking medical data sharing boundaries by using synthesized radiographs | Science Advances | 2020 | [PUB] [CODE] | |
Secure, privacy-preserving and federated machine learning in medical imaging ⭐️ | Nat. Mach. Intell. (Perspective) | 2020 | [PUB] |
In this section, we summarize federated learning papers accepted by top AI (Artificial Intelligence) conferences and journals, including IJCAI (International Joint Conference on Artificial Intelligence), AAAI (AAAI Conference on Artificial Intelligence), AISTATS (Artificial Intelligence and Statistics), and AI (Artificial Intelligence).
Title | Affiliation | Venue | Year | TL;DR | Materials |
---|---|---|---|---|---|
Federated Learning on Non-IID Graphs via Structural Knowledge Sharing | UTS | AAAI | 2023 | FedStar[^FedStar] | [PDF] [CODE] |
FedGS: Federated Graph-based Sampling with Arbitrary Client Availability | XMU | AAAI | 2023 | FedGS[^FedGS] | [PDF] [CODE] |
Incentive-boosted Federated Crowdsourcing | SDU | AAAI | 2023 | iFedCrowd[^iFedCrowd] | [PDF] |
Towards Understanding Biased Client Selection in Federated Learning. | CMU | AISTATS | 2022 | [PUB] [CODE] | |
FLIX: A Simple and Communication-Efficient Alternative to Local Methods in Federated Learning | KAUST | AISTATS | 2022 | FLIX[^FLIX] | [PUB] [PDF] [CODE] |
Sharp Bounds for Federated Averaging (Local SGD) and Continuous Perspective. | Stanford | AISTATS | 2022 | [PUB] [PDF] [CODE] | |
Federated Reinforcement Learning with Environment Heterogeneity. | PKU | AISTATS | 2022 | [PUB] [PDF] [CODE] | |
Federated Myopic Community Detection with One-shot Communication | Purdue | AISTATS | 2022 | [PUB] [PDF] | |
Asynchronous Upper Confidence Bound Algorithms for Federated Linear Bandits. | University of Virginia | AISTATS | 2022 | [PUB] [PDF] [CODE] | |
Towards Federated Bayesian Network Structure Learning with Continuous Optimization. | CMU | AISTATS | 2022 | [PUB] [PDF] [CODE] | |
Federated Learning with Buffered Asynchronous Aggregation | Meta AI | AISTATS | 2022 | [PUB] [PDF] [VIDEO] | |
Differentially Private Federated Learning on Heterogeneous Data. | Stanford | AISTATS | 2022 | DP-SCAFFOLD[^DP-SCAFFOLD] | [PUB] [PDF] [CODE] |
SparseFed: Mitigating Model Poisoning Attacks in Federated Learning with Sparsification | Princeton | AISTATS | 2022 | SparseFed[^SparseFed] | [PUB] [PDF] [CODE] [VIDEO] |
Basis Matters: Better Communication-Efficient Second Order Methods for Federated Learning | KAUST | AISTATS | 2022 | [PUB] [PDF] | |
Federated Functional Gradient Boosting. | University of Pennsylvania | AISTATS | 2022 | [PUB] [PDF] [CODE] | |
QLSD: Quantised Langevin Stochastic Dynamics for Bayesian Federated Learning. | Criteo AI Lab | AISTATS | 2022 | QLSD[^QLSD] | [PUB] [PDF] [CODE] [VIDEO] |
Meta-Learning Based Knowledge Extrapolation for Knowledge Graphs in the Federated Setting kg. | ZJU | IJCAI | 2022 | MaKEr[^MaKEr] | [PUB] [PDF] [CODE] |
Personalized Federated Learning With a Graph | UTS | IJCAI | 2022 | SFL[^SFL] | [PUB] [PDF] [CODE] |
Vertically Federated Graph Neural Network for Privacy-Preserving Node Classification | ZJU | IJCAI | 2022 | VFGNN[^VFGNN] | [PUB] [PDF] |
Adapt to Adaptation: Learning Personalization for Cross-Silo Federated Learning | IJCAI | 2022 | [PUB] [PDF] [CODE] | ||
Heterogeneous Ensemble Knowledge Transfer for Training Large Models in Federated Learning | IJCAI | 2022 | Fed-ET[^Fed-ET] | [PUB] [PDF] | |
Private Semi-Supervised Federated Learning. | IJCAI | 2022 | [PUB] | ||
Continual Federated Learning Based on Knowledge Distillation. | IJCAI | 2022 | [PUB] | ||
Federated Learning on Heterogeneous and Long-Tailed Data via Classifier Re-Training with Federated Features | IJCAI | 2022 | CReFF[^CReFF] | [PUB] [PDF] [CODE] | |
Federated Multi-Task Attention for Cross-Individual Human Activity Recognition | IJCAI | 2022 | [PUB] | ||
Personalized Federated Learning with Contextualized Generalization. | IJCAI | 2022 | [PUB] [PDF] | ||
Shielding Federated Learning: Robust Aggregation with Adaptive Client Selection. | IJCAI | 2022 | [PUB] [PDF] | ||
FedCG: Leverage Conditional GAN for Protecting Privacy and Maintaining Competitive Performance in Federated Learning | IJCAI | 2022 | FedCG[^FedCG] | [PUB] [PDF] [CODE] | |
FedDUAP: Federated Learning with Dynamic Update and Adaptive Pruning Using Shared Data on the Server. | IJCAI | 2022 | FedDUAP[^FedDUAP] | [PUB] [PDF] | |
Towards Verifiable Federated Learning surv. | IJCAI | 2022 | [PUB] [PDF] | |
HarmoFL: Harmonizing Local and Global Drifts in Federated Learning on Heterogeneous Medical Images | CUHK; BUAA | AAAI | 2022 | [PUB] [PDF] [CODE] | |
Federated Learning for Face Recognition with Gradient Correction | BUPT | AAAI | 2022 | [PUB] [PDF] | |
SpreadGNN: Decentralized Multi-Task Federated Learning for Graph Neural Networks on Molecular Data | USC | AAAI | 2022 | SpreadGNN[^SpreadGNN] | [PUB] [PDF] [CODE] |
SmartIdx: Reducing Communication Cost in Federated Learning by Exploiting the CNNs Structures | HIT; PCL | AAAI | 2022 | SmartIdx[^SmartIdx] | [PUB] [CODE] |
Bridging between Cognitive Processing Signals and Linguistic Features via a Unified Attentional Network | TJU | AAAI | 2022 | [PUB] [PDF] | |
Seizing Critical Learning Periods in Federated Learning | SUNY-Binghamton University | AAAI | 2022 | FedFIM[^FedFIM] | [PUB] [PDF] |
Coordinating Momenta for Cross-silo Federated Learning | University of Pittsburgh | AAAI | 2022 | [PUB] [PDF] | |
FedProto: Federated Prototype Learning over Heterogeneous Devices | UTS | AAAI | 2022 | FedProto[^FedProto] | [PUB] [PDF] [CODE] |
FedSoft: Soft Clustered Federated Learning with Proximal Local Updating | CMU | AAAI | 2022 | FedSoft[^FedSoft] | [PUB] [PDF] [CODE] |
Federated Dynamic Sparse Training: Computing Less, Communicating Less, Yet Learning Better | The University of Texas at Austin | AAAI | 2022 | [PUB] [PDF] [CODE] | |
FedFR: Joint Optimization Federated Framework for Generic and Personalized Face Recognition | National Taiwan University | AAAI | 2022 | FedFR[^FedFR] | [PUB] [PDF] [CODE] |
SplitFed: When Federated Learning Meets Split Learning | CSIRO | AAAI | 2022 | SplitFed[^SplitFed] | [PUB] [PDF] [CODE] |
Efficient Device Scheduling with Multi-Job Federated Learning | Soochow University | AAAI | 2022 | [PUB] [PDF] | |
Implicit Gradient Alignment in Distributed and Federated Learning | IIT Kanpur | AAAI | 2022 | [PUB] [PDF] | |
Federated Nearest Neighbor Classification with a Colony of Fruit-Flies | IBM Research | AAAI | 2022 | FlyNNFL[^FlyNNFL] | [PUB] [PDF] [CODE] |
Federated Learning with Sparsification-Amplified Privacy and Adaptive Optimization | IJCAI | 2021 | [PUB] [PDF] [VIDEO] | ||
Behavior Mimics Distribution: Combining Individual and Group Behaviors for Federated Learning | IJCAI | 2021 | [PUB] [PDF] | ||
FedSpeech: Federated Text-to-Speech with Continual Learning | IJCAI | 2021 | FedSpeech[^FedSpeech] | [PUB] [PDF] | |
Practical One-Shot Federated Learning for Cross-Silo Setting | IJCAI | 2021 | FedKT[^FedKT] | [PUB] [PDF] [CODE] | |
Federated Model Distillation with Noise-Free Differential Privacy | IJCAI | 2021 | FEDMD-NFDP[^FEDMD-NFDP] | [PUB] [PDF] [VIDEO] | |
LDP-FL: Practical Private Aggregation in Federated Learning with Local Differential Privacy | IJCAI | 2021 | LDP-FL[^LDP-FL] | [PUB] [PDF] | |
Federated Learning with Fair Averaging. 🔥 | IJCAI | 2021 | FedFV[^FedFV] | [PUB] [PDF] [CODE] | |
H-FL: A Hierarchical Communication-Efficient and Privacy-Protected Architecture for Federated Learning. | IJCAI | 2021 | H-FL[^H-FL] | [PUB] [PDF] | |
Communication-efficient and Scalable Decentralized Federated Edge Learning. | IJCAI | 2021 | [PUB] | ||
Secure Bilevel Asynchronous Vertical Federated Learning with Backward Updating | Xidian University; JD Tech | AAAI | 2021 | [PUB] [PDF] [VIDEO] | |
FedRec++: Lossless Federated Recommendation with Explicit Feedback | SZU | AAAI | 2021 | FedRec++[^FedRecplusplus] | [PUB] [VIDEO] |
Federated Multi-Armed Bandits | University of Virginia | AAAI | 2021 | [PUB] [PDF] [CODE] [VIDEO] | |
On the Convergence of Communication-Efficient Local SGD for Federated Learning | Temple University; University of Pittsburgh | AAAI | 2021 | [PUB] [VIDEO] | |
FLAME: Differentially Private Federated Learning in the Shuffle Model | Renmin University of China; Kyoto University | AAAI | 2021 | FLAME_D[^FLAME_D] | [PUB] [PDF] [VIDEO] [CODE] |
Toward Understanding the Influence of Individual Clients in Federated Learning | SJTU; The University of Texas at Dallas | AAAI | 2021 | [PUB] [PDF] [VIDEO] | |
Provably Secure Federated Learning against Malicious Clients | Duke University | AAAI | 2021 | [PUB] [PDF] [VIDEO] [SLIDE] | |
Personalized Cross-Silo Federated Learning on Non-IID Data | Simon Fraser University; McMaster University | AAAI | 2021 | FedAMP[^FedAMP] | [PUB] [PDF] [VIDEO] [UC.] |
Model-Sharing Games: Analyzing Federated Learning under Voluntary Participation | Cornell University | AAAI | 2021 | [PUB] [PDF] [CODE] [VIDEO] | |
Curse or Redemption? How Data Heterogeneity Affects the Robustness of Federated Learning | University of Nevada; IBM Research | AAAI | 2021 | [PUB] [PDF] [VIDEO] | |
Game of Gradients: Mitigating Irrelevant Clients in Federated Learning | IIT Bombay; IBM Research | AAAI | 2021 | [PUB] [PDF] [CODE] [VIDEO] [SUPPLEMENTARY] | |
Federated Block Coordinate Descent Scheme for Learning Global and Personalized Models | CUHK; Arizona State University | AAAI | 2021 | [PUB] [PDF] [VIDEO] [CODE] | |
Addressing Class Imbalance in Federated Learning | Northwestern University | AAAI | 2021 | [PUB] [PDF] [VIDEO] [CODE] | |
Defending against Backdoors in Federated Learning with Robust Learning Rate | The University of Texas at Dallas | AAAI | 2021 | [PUB] [PDF] [VIDEO] [CODE] | |
Free-rider Attacks on Model Aggregation in Federated Learning | Accenture Labs | AISTATS | 2021 | [PUB] [PDF] [CODE] [VIDEO] [SUPPLEMENTARY] | |
Federated f-differential privacy | University of Pennsylvania | AISTATS | 2021 | [PUB] [CODE] [VIDEO] [SUPPLEMENTARY] | |
Federated learning with compression: Unified analysis and sharp guarantees 🔥 | The Pennsylvania State University; The University of Texas at Austin | AISTATS | 2021 | [PUB] [PDF] [CODE] [VIDEO] [SUPPLEMENTARY] | |
Shuffled Model of Differential Privacy in Federated Learning | UCLA; Google | AISTATS | 2021 | [PUB] [VIDEO] [SUPPLEMENTARY] | |
Convergence and Accuracy Trade-Offs in Federated Learning and Meta-Learning | AISTATS | 2021 | [PUB] [PDF] [VIDEO] [SUPPLEMENTARY] | ||
Federated Multi-armed Bandits with Personalization | University of Virginia; The Pennsylvania State University | AISTATS | 2021 | [PUB] [PDF] [CODE] [VIDEO] [SUPPLEMENTARY] | |
Towards Flexible Device Participation in Federated Learning | CMU; SYSU | AISTATS | 2021 | [PUB] [PDF] [VIDEO] [SUPPLEMENTARY] | |
Federated Meta-Learning for Fraudulent Credit Card Detection | IJCAI | 2020 | [PUB] [VIDEO] | ||
A Multi-player Game for Studying Federated Learning Incentive Schemes | IJCAI | 2020 | FedGame[^FedGame] | [PUB] [CODE] | |
Practical Federated Gradient Boosting Decision Trees | NUS; UWA | AAAI | 2020 | SimFL[^SimFL] | [PUB] [PDF] [CODE] |
Federated Learning for Vision-and-Language Grounding Problems | PKU; Tencent | AAAI | 2020 | [PUB] | |
Federated Latent Dirichlet Allocation: A Local Differential Privacy Based Framework | BUAA | AAAI | 2020 | [PUB] | |
Federated Patient Hashing | Cornell University | AAAI | 2020 | [PUB] | |
Robust Federated Learning via Collaborative Machine Teaching | Symantec Research Labs; KAUST | AAAI | 2020 | [PUB] [PDF] | |
FedVision: An Online Visual Object Detection Platform Powered by Federated Learning | WeBank | AAAI | 2020 | [PUB] [PDF] [CODE] | |
FedPAQ: A Communication-Efficient Federated Learning Method with Periodic Averaging and Quantization | UC Santa Barbara; UT Austin | AISTATS | 2020 | [PUB] [PDF] [VIDEO] [SUPPLEMENTARY] | |
How To Backdoor Federated Learning 🔥 | Cornell Tech | AISTATS | 2020 | [PUB] [PDF] [VIDEO] [CODE] [SUPPLEMENTARY] | |
Federated Heavy Hitters Discovery with Differential Privacy | RPI; Google | AISTATS | 2020 | [PUB] [PDF] [VIDEO] [SUPPLEMENTARY] | |
Multi-Agent Visualization for Explaining Federated Learning | WeBank | IJCAI | 2019 | [PUB] [VIDEO] |
In this section, we summarize federated learning papers accepted by top ML (machine learning) conferences and journals, including NeurIPS (Annual Conference on Neural Information Processing Systems), ICML (International Conference on Machine Learning), ICLR (International Conference on Learning Representations), COLT (Annual Conference on Computational Learning Theory), UAI (Conference on Uncertainty in Artificial Intelligence), JMLR (Journal of Machine Learning Research), and TPAMI (IEEE Transactions on Pattern Analysis and Machine Intelligence).
Title | Affiliation | Venue | Year | TL;DR | Materials |
---|---|---|---|---|---|
FedIPR: Ownership Verification for Federated Deep Neural Network Models | SJTU | TPAMI | 2023 | [PUB] [PDF] [CODE] | |
Decentralized Federated Averaging | NUDT | TPAMI | 2023 | [PUB] [PDF] | |
Personalized Federated Learning with Feature Alignment and Classifier Collaboration | THU | ICLR | 2023 | [PUB] [CODE] | |
MocoSFL: enabling cross-client collaborative self-supervised learning | ASU | ICLR | 2023 | [PUB] [CODE] | |
Single-shot General Hyper-parameter Optimization for Federated Learning | IBM | ICLR | 2023 | [PUB] [PDF] [CODE] | |
Where to Begin? Exploring the Impact of Pre-Training and Initialization in Federated Learning | ICLR | 2023 | [PUB] [PDF] [CODE] | |
FedExP: Speeding up Federated Averaging via Extrapolation | CMU | ICLR | 2023 | [PUB] [PDF] [CODE] | |
Turning the Curse of Heterogeneity in Federated Learning into a Blessing for Out-of-Distribution Detection | MSU | ICLR | 2023 | [PUB] [CODE] | |
DASHA: Distributed Nonconvex Optimization with Communication Compression and Optimal Oracle Complexity | KAUST | ICLR | 2023 | [PUB] [PDF] [CODE] | |
Machine Unlearning of Federated Clusters | University of Illinois | ICLR | 2023 | [PUB] [PDF] [CODE] | |
Federated Neural Bandits | NUS | ICLR | 2023 | [PUB] [PDF] [CODE] | |
FedFA: Federated Feature Augmentation | ETH Zurich | ICLR | 2023 | [PUB] [PDF] [CODE] | |
Federated Learning as Variational Inference: A Scalable Expectation Propagation Approach | CMU | ICLR | 2023 | [PUB] [PDF] [CODE] | |
Better Generative Replay for Continual Federated Learning | University of Virginia | ICLR | 2023 | [PUB] [CODE] | |
Federated Learning from Small Datasets | IKIM | ICLR | 2023 | [PUB] [PDF] | |
Federated Nearest Neighbor Machine Translation | USTC | ICLR | 2023 | [PUB] [PDF] | |
Meta Knowledge Condensation for Federated Learning | A*STAR | ICLR | 2023 | [PUB] [PDF] | |
Test-Time Robust Personalization for Federated Learning | EPFL | ICLR | 2023 | [PUB] [PDF] [CODE] | |
DepthFL : Depthwise Federated Learning for Heterogeneous Clients | SNU | ICLR | 2023 | [PUB] | |
Towards Addressing Label Skews in One-Shot Federated Learning | NUS | ICLR | 2023 | [PUB] [CODE] | |
Towards Understanding and Mitigating Dimensional Collapse in Heterogeneous Federated Learning | NUS | ICLR | 2023 | [PUB] [PDF] [CODE] | |
Panning for Gold in Federated Learning: Targeted Text Extraction under Arbitrarily Large-Scale Aggregation | UMD | ICLR | 2023 | [PUB] [CODE] | |
SWIFT: Rapid Decentralized Federated Learning via Wait-Free Model Communication | UMD | ICLR | 2023 | [PUB] [PDF] [CODE] | |
Private Federated Learning Without a Trusted Server: Optimal Algorithms for Convex Losses | USC | ICLR | 2023 | [PUB] [PDF] [CODE] | |
Effective passive membership inference attacks in federated learning against overparameterized models | Purdue University | ICLR | 2023 | [PUB] | |
FiT: Parameter Efficient Few-shot Transfer Learning for Personalized and Federated Image Classification | University of Cambridge | ICLR | 2023 | [PUB] [PDF] [CODE] | |
Multimodal Federated Learning via Contrastive Representation Ensemble | THU | ICLR | 2023 | [PUB] [PDF] [CODE] | |
Faster federated optimization under second-order similarity | Princeton University | ICLR | 2023 | [PUB] [PDF] [CODE] | |
FedSpeed: Larger Local Interval, Less Communication Round, and Higher Generalization Accuracy | University of Sydney | ICLR | 2023 | [PUB] [CODE] | |
The Best of Both Worlds: Accurate Global and Personalized Models through Federated Learning with Data-Free Hyper-Knowledge Distillation | utexas | ICLR | 2023 | [PUB] [PDF] [CODE] | |
PerFedMask: Personalized Federated Learning with Optimized Masking Vectors | UBC | ICLR | 2023 | [PUB] [CODE] | |
EPISODE: Episodic Gradient Clipping with Periodic Resampled Corrections for Federated Learning with Heterogeneous Data | GMU | ICLR | 2023 | [PUB] [CODE] | |
FedDAR: Federated Domain-Aware Representation Learning | Harvard | ICLR | 2023 | [PUB] [PDF] [CODE] | |
Share Your Representation Only: Guaranteed Improvement of the Privacy-Utility Tradeoff in Federated Learning | upenn | ICLR | 2023 | [PUB] [CODE] | |
FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated Learning | Purdue University | ICLR | 2023 | [PUB] [PDF] [CODE] | |
Generalization Bounds for Federated Learning: Fast Rates, Unparticipating Clients and Unbounded Losses | RUC | ICLR | 2023 | [PUB] | |
Efficient Federated Domain Translation | Purdue University | ICLR | 2023 | [PUB] [CODE] | |
On the Importance and Applicability of Pre-Training for Federated Learning | OSU | ICLR | 2023 | [PUB] [PDF] [CODE] | |
Decepticons: Corrupted Transformers Breach Privacy in Federated Learning for Language Models | UMD | ICLR | 2023 | [PUB] [PDF] [CODE] | |
A Statistical Framework for Personalized Federated Learning and Estimation: Theory, Algorithms, and Privacy | UCLA | ICLR | 2023 | [PUB] [PDF] | |
Instance-wise Batch Label Restoration via Gradients in Federated Learning | BUAA | ICLR | 2023 | [PUB] [CODE] | |
Data-Free One-Shot Federated Learning Under Very High Statistical Heterogeneity | College of William and Mary | ICLR | 2023 | [PUB] | |
CANIFE: Crafting Canaries for Empirical Privacy Measurement in Federated Learning | University of Warwick | ICLR | 2023 | [PUB] [PDF] [CODE] | |
Sparse Random Networks for Communication-Efficient Federated Learning | Stanford | ICLR | 2023 | [PUB] [PDF] [CODE] | |
Combating Exacerbated Heterogeneity for Robust Decentralized Models | HKBU | ICLR | 2023 | [PUB] [CODE] | |
Hyperparameter Optimization through Neural Network Partitioning | University of Cambridge | ICLR | 2023 | [PUB] [PDF] | |
Does Decentralized Learning with Non-IID Unlabeled Data Benefit from Self Supervision? | MIT | ICLR | 2023 | [PUB] [PDF] [CODE] | |
Variance Reduction is an Antidote to Byzantines: Better Rates, Weaker Assumptions and Communication Compression as a Cherry on the Top | mbzuai | ICLR | 2023 | [PUB] [PDF] [CODE] | |
Dual Diffusion Implicit Bridges for Image-to-Image Translation | Stanford | ICLR | 2023 | [PUB] [PDF] [CODE] | |
Federated online clustering of bandits. | CUHK | UAI | 2022 | [PUB] [PDF] [CODE] | |
Privacy-aware compression for federated data analysis. | Meta AI | UAI | 2022 | [PUB] [PDF] [CODE] | |
Faster non-convex federated learning via global and local momentum. | UTEXAS | UAI | 2022 | [PUB] [PDF] | |
Fedvarp: Tackling the variance due to partial client participation in federated learning. | CMU | UAI | 2022 | [PUB] [PDF] | |
SASH: Efficient secure aggregation based on SHPRG for federated learning | CAS; CASTEST | UAI | 2022 | [PUB] [PDF] | |
Bayesian federated estimation of causal effects from observational data | NUS | UAI | 2022 | [PUB] [PDF] | |
Communication-Efficient Randomized Algorithm for Multi-Kernel Online Federated Learning | Hanyang University | TPAMI | 2022 | [PUB] | |
Lazily Aggregated Quantized Gradient Innovation for Communication-Efficient Federated Learning | ZJU | TPAMI | 2022 | TPAMI-LAQ[^TPAMI-LAQ] | [PUB] [CODE] |
Communication Acceleration of Local Gradient Methods via an Accelerated Primal-Dual Algorithm with an Inexact Prox | Moscow Institute of Physics and Technology | NeurIPS | 2022 | [PUB] [PDF] | |
LAMP: Extracting Text from Gradients with Language Model Priors | ETHZ | NeurIPS | 2022 | [PUB] [CODE] | |
FedAvg with Fine Tuning: Local Updates Lead to Representation Learning | utexas | NeurIPS | 2022 | [PUB] [PDF] | |
On Convergence of FedProx: Local Dissimilarity Invariant Bounds, Non-smoothness and Beyond | NUIST | NeurIPS | 2022 | [PUB] [PDF] | |
Improved Differential Privacy for SGD via Optimal Private Linear Operators on Adaptive Streams | WISC | NeurIPS | 2022 | [PUB] [CODE] | |
Decentralized Gossip-Based Stochastic Bilevel Optimization over Communication Networks | Columbia University | NeurIPS | 2022 | [PUB] [PDF] | |
Asymptotic Behaviors of Projected Stochastic Approximation: A Jump Diffusion Perspective | PKU | NeurIPS | 2022 | [PUB] | |
Subspace Recovery from Heterogeneous Data with Non-isotropic Noise | Stanford | NeurIPS | 2022 | [PUB] [PDF] | |
EF-BV: A Unified Theory of Error Feedback and Variance Reduction Mechanisms for Biased and Unbiased Compression in Distributed Optimization | KAUST | NeurIPS | 2022 | [PUB] [PDF] | |
On-Demand Sampling: Learning Optimally from Multiple Distributions | UC Berkeley | NeurIPS | 2022 | [PUB] [CODE] | |
Improved Utility Analysis of Private CountSketch | ITU | NeurIPS | 2022 | [PUB] [PDF] [CODE] | |
Rate-Distortion Theoretic Bounds on Generalization Error for Distributed Learning | HUAWEI | NeurIPS | 2022 | [PUB] [CODE] | |
Decentralized Local Stochastic Extra-Gradient for Variational Inequalities | phystech | NeurIPS | 2022 | [PUB] [PDF] | |
BEER: Fast O(1/T) Rate for Decentralized Nonconvex Optimization with Communication Compression | Princeton | NeurIPS | 2022 | [PUB] [PDF] [CODE] | |
Escaping Saddle Points with Bias-Variance Reduced Local Perturbed SGD for Communication Efficient Nonconvex Distributed Learning | The University of Tokyo | NeurIPS | 2022 | [PUB] [PDF] | |
Near-Optimal Collaborative Learning in Bandits | INRIA; Inserm | NeurIPS | 2022 | [PUB] [PDF] [CODE] | |
Distributed Methods with Compressed Communication for Solving Variational Inequalities, with Theoretical Guarantees | phystech | NeurIPS | 2022 | [PUB] [PDF] | |
Towards Optimal Communication Complexity in Distributed Non-Convex Optimization | TTIC | NeurIPS | 2022 | [PUB] [CODE] | |
FedPop: A Bayesian Approach for Personalised Federated Learning | Skoltech | NeurIPS | 2022 | FedPop[^FedPop] | [PUB] [PDF] |
Fairness in Federated Learning via Core-Stability | UIUC | NeurIPS | 2022 | CoreFed[^CoreFed] | [PUB] [CODE] |
SecureFedYJ: a safe feature Gaussianization protocol for Federated Learning | Sorbonne Université | NeurIPS | 2022 | SecureFedYJ[^SecureFedYJ] | [PUB] [PDF] |
FedRolex: Model-Heterogeneous Federated Learning with Rolling Submodel Extraction | MSU | NeurIPS | 2022 | FedRolex[^FedRolex] | [PUB] [CODE] |
On Sample Optimality in Personalized Collaborative and Federated Learning | INRIA | NeurIPS | 2022 | [PUB] | |
DReS-FL: Dropout-Resilient Secure Federated Learning for Non-IID Clients via Secret Data Sharing | HKUST | NeurIPS | 2022 | DReS-FL[^DReS-FL] | [PUB] [PDF] |
FairVFL: A Fair Vertical Federated Learning Framework with Contrastive Adversarial Learning | THU | NeurIPS | 2022 | FairVFL[^FairVFL] | [PUB] |
Variance Reduced ProxSkip: Algorithm, Theory and Application to Federated Learning | KAUST | NeurIPS | 2022 | VR-ProxSkip[^VR-ProxSkip] | [PUB] [PDF] |
VF-PS: How to Select Important Participants in Vertical Federated Learning, Efficiently and Securely? | WHU | NeurIPS | 2022 | VF-PS[^VF-PS] | [PUB] [CODE] |
DENSE: Data-Free One-Shot Federated Learning | ZJU | NeurIPS | 2022 | DENSE[^DENSE] | [PUB] [PDF] |
CalFAT: Calibrated Federated Adversarial Training with Label Skewness | ZJU | NeurIPS | 2022 | CalFAT[^CalFAT] | [PUB] [PDF] |
SAGDA: Achieving O(ε⁻²) Communication Complexity in Federated Min-Max Learning | OSU | NeurIPS | 2022 | SAGDA[^SAGDA] | [PUB] [PDF] |
Taming Fat-Tailed (Heavier-Tailed with Potentially Infinite Variance) Noise in Federated Learning | OSU | NeurIPS | 2022 | FAT-Clipping[^FAT-Clipping] | [PUB] [PDF] |
Personalized Federated Learning towards Communication Efficiency, Robustness and Fairness | PKU | NeurIPS | 2022 | [PUB] | |
Federated Submodel Optimization for Hot and Cold Data Features | SJTU | NeurIPS | 2022 | FedSubAvg[^FedSubAvg] | [PUB] |
BooNTK: Convexifying Federated Learning using Bootstrapped Neural Tangent Kernels | UC Berkeley | NeurIPS | 2022 | BooNTK[^BooNTK] | [PUB] [PDF] |
Byzantine-tolerant federated Gaussian process regression for streaming data | PSU | NeurIPS | 2022 | [PUB] [CODE] | |
SoteriaFL: A Unified Framework for Private Federated Learning with Communication Compression | CMU | NeurIPS | 2022 | SoteriaFL[^SoteriaFL] | [PUB] [PDF] |
Coresets for Vertical Federated Learning: Regularized Linear Regression and K-Means Clustering | Yale | NeurIPS | 2022 | [PUB] [PDF] [CODE] | |
Communication Efficient Federated Learning for Generalized Linear Bandits | University of Virginia | NeurIPS | 2022 | [PUB] [CODE] | |
Recovering Private Text in Federated Learning of Language Models | Princeton | NeurIPS | 2022 | FILM[^FILM] | [PUB] [PDF] [CODE] |
Federated Learning from Pre-Trained Models: A Contrastive Learning Approach | UTS | NeurIPS | 2022 | FedPCL[^FedPCL] | [PUB] [PDF] |
Global Convergence of Federated Learning for Mixed Regression | Northeastern University | NeurIPS | 2022 | [PUB] [PDF] | |
Resource-Adaptive Federated Learning with All-In-One Neural Composition | JHU | NeurIPS | 2022 | FLANC[^FLANC] | [PUB] |
Self-Aware Personalized Federated Learning | Amazon | NeurIPS | 2022 | Self-FL[^Self-FL] | [PUB] [PDF] |
A Communication-efficient Algorithm with Linear Convergence for Federated Minimax Learning | Northeastern University | NeurIPS | 2022 | FedGDA-GT[^FedGDA-GT] | [PUB] [PDF] |
An Adaptive Kernel Approach to Federated Learning of Heterogeneous Causal Effects | NUS | NeurIPS | 2022 | [PUB] | |
Sharper Convergence Guarantees for Asynchronous SGD for Distributed and Federated Learning | EPFL | NeurIPS | 2022 | [PUB] [PDF] | |
Personalized Online Federated Multi-Kernel Learning | UCI | NeurIPS | 2022 | [PUB] | |
SemiFL: Semi-Supervised Federated Learning for Unlabeled Clients with Alternate Training | Duke University | NeurIPS | 2022 | SemiFL[^SemiFL] | [PUB] [PDF] [CODE] |
A Unified Analysis of Federated Learning with Arbitrary Client Participation | IBM | NeurIPS | 2022 | [PUB] [PDF] | |
Preservation of the Global Knowledge by Not-True Distillation in Federated Learning | KAIST | NeurIPS | 2022 | FedNTD[^FedNTD] | [PUB] [PDF] [CODE] |
FedSR: A Simple and Effective Domain Generalization Method for Federated Learning | University of Oxford | NeurIPS | 2022 | FedSR[^FedSR] | [PUB] [CODE] |
Factorized-FL: Personalized Federated Learning with Parameter Factorization & Similarity Matching | KAIST | NeurIPS | 2022 | Factorized-FL[^Factorized-FL] | [PUB] [PDF] [CODE] |
A Simple and Provably Efficient Algorithm for Asynchronous Federated Contextual Linear Bandits | UC | NeurIPS | 2022 | FedLinUCB[^FedLinUCB] | [PUB] [PDF] |
Learning to Attack Federated Learning: A Model-based Reinforcement Learning Attack Framework | Tulane University | NeurIPS | 2022 | [PUB] | |
On Privacy and Personalization in Cross-Silo Federated Learning | CMU | NeurIPS | 2022 | [PUB] [PDF] | |
A Coupled Design of Exploiting Record Similarity for Practical Vertical Federated Learning | NUS | NeurIPS | 2022 | FedSim[^FedSim] | [PUB] [PDF] [CODE] |
FLamby: Datasets and Benchmarks for Cross-Silo Federated Learning in Realistic Healthcare Settings | Owkin | NeurIPS Datasets and Benchmarks | 2022 | [PUB] [CODE] | |
A Tree-based Model Averaging Approach for Personalized Treatment Effect Estimation from Heterogeneous Data Sources | University of Pittsburgh | ICML | 2022 | [PUB] [PDF] [CODE] | |
Fast Composite Optimization and Statistical Recovery in Federated Learning | SJTU | ICML | 2022 | [PUB] [PDF] [CODE] | |
Personalization Improves Privacy-Accuracy Tradeoffs in Federated Learning | NYU | ICML | 2022 | PPSGD[^PPSGD] | [PUB] [PDF] [CODE] |
The Fundamental Price of Secure Aggregation in Differentially Private Federated Learning 🔥 | Stanford; Google Research | ICML | 2022 | [PUB] [PDF] [CODE] [SLIDE] | |
The Poisson Binomial Mechanism for Unbiased Federated Learning with Secure Aggregation | Stanford; Google Research | ICML | 2022 | PBM[^PBM] | [PUB] [PDF] [CODE] |
DisPFL: Towards Communication-Efficient Personalized Federated Learning via Decentralized Sparse Training | USTC | ICML | 2022 | DisPFL[^DisPFL] | [PUB] [PDF] [CODE] |
FedNew: A Communication-Efficient and Privacy-Preserving Newton-Type Method for Federated Learning | University of Oulu | ICML | 2022 | FedNew[^FedNew] | [PUB] [PDF] [CODE] |
DAdaQuant: Doubly-adaptive quantization for communication-efficient Federated Learning | University of Cambridge | ICML | 2022 | DAdaQuant[^DAdaQuant] | [PUB] [PDF] [SLIDE] [CODE] |
Accelerated Federated Learning with Decoupled Adaptive Optimization | Auburn University | ICML | 2022 | [PUB] [PDF] | |
Federated Reinforcement Learning: Linear Speedup Under Markovian Sampling | Georgia Tech | ICML | 2022 | [PUB] [PDF] | |
Multi-Level Branched Regularization for Federated Learning | Seoul National University | ICML | 2022 | FedMLB[^FedMLB] | [PUB] [PDF] [CODE] [PAGE] |
FedScale: Benchmarking Model and System Performance of Federated Learning at Scale 🔥 | University of Michigan | ICML | 2022 | FedScale[^FedScale] | [PUB] [PDF] [CODE] |
Federated Learning with Positive and Unlabeled Data | XJTU | ICML | 2022 | FedPU[^FedPU] | [PUB] [PDF] [CODE] |
Deep Neural Network Fusion via Graph Matching with Applications to Model Ensemble and Federated Learning | SJTU | ICML | 2022 | [PUB] [CODE] | |
Orchestra: Unsupervised Federated Learning via Globally Consistent Clustering | University of Michigan | ICML | 2022 | Orchestra[^Orchestra] | [PUB] [PDF] [CODE] |
Disentangled Federated Learning for Tackling Attributes Skew via Invariant Aggregation and Diversity Transferring | USTC | ICML | 2022 | DFL[^DFL] | [PUB] [PDF] [CODE] [SLIDE] |
Architecture Agnostic Federated Learning for Neural Networks | The University of Texas at Austin | ICML | 2022 | FedHeNN[^FedHeNN] | [PUB] [PDF] [SLIDE] |
Personalized Federated Learning through Local Memorization | Inria | ICML | 2022 | KNN-PER[^KNN-PER] | [PUB] [PDF] [CODE] |
Proximal and Federated Random Reshuffling | KAUST | ICML | 2022 | ProxRR[^ProxRR] | [PUB] [PDF] [CODE] |
Federated Learning with Partial Model Personalization | University of Washington | ICML | 2022 | [PUB] [PDF] [CODE] | |
Generalized Federated Learning via Sharpness Aware Minimization | University of South Florida | ICML | 2022 | [PUB] [PDF] | |
FedNL: Making Newton-Type Methods Applicable to Federated Learning | KAUST | ICML | 2022 | FedNL[^FedNL] | [PUB] [PDF] [VIDEO] [SLIDE] |
Federated Minimax Optimization: Improved Convergence Analyses and Algorithms | CMU | ICML | 2022 | [PUB] [PDF] [SLIDE] | |
Virtual Homogeneity Learning: Defending against Data Heterogeneity in Federated Learning | Hong Kong Baptist University | ICML | 2022 | VFL[^VFL] | [PUB] [PDF] [CODE] |
FedNest: Federated Bilevel, Minimax, and Compositional Optimization | University of Michigan | ICML | 2022 | FedNest[^FedNest] | [PUB] [PDF] [CODE] |
EDEN: Communication-Efficient and Robust Distributed Mean Estimation for Federated Learning | VMware Research | ICML | 2022 | EDEN[^EDEN] | [PUB] [PDF] [CODE] |
Communication-Efficient Adaptive Federated Learning | Pennsylvania State University | ICML | 2022 | [PUB] [PDF] | |
ProgFed: Effective, Communication, and Computation Efficient Federated Learning by Progressive Training | CISPA Helmholtz Center for Information Security | ICML | 2022 | ProgFed[^ProgFed] | [PUB] [PDF] [SLIDE] [CODE] |
Fishing for User Data in Large-Batch Federated Learning via Gradient Magnification 🔥 | University of Maryland | ICML | 2022 | breaching[^breaching] | [PUB] [PDF] [CODE] |
Anarchic Federated Learning | The Ohio State University | ICML | 2022 | [PUB] [PDF] | |
QSFL: A Two-Level Uplink Communication Optimization Framework for Federated Learning | Nankai University | ICML | 2022 | QSFL[^QSFL] | [PUB] [CODE] |
Bitwidth Heterogeneous Federated Learning with Progressive Weight Dequantization | KAIST | ICML | 2022 | [PUB] [PDF] | |
Neural Tangent Kernel Empowered Federated Learning | NC State University | ICML | 2022 | [PUB] [PDF] [CODE] | |
Understanding Clipping for Federated Learning: Convergence and Client-Level Differential Privacy | UMN | ICML | 2022 | [PUB] [PDF] | |
Personalized Federated Learning via Variational Bayesian Inference | CAS | ICML | 2022 | [PUB] [PDF] [SLIDE] [UC.] | |
Federated Learning with Label Distribution Skew via Logits Calibration | ZJU | ICML | 2022 | [PUB] | |
Neurotoxin: Durable Backdoors in Federated Learning | Southeast University; Princeton | ICML | 2022 | Neurotoxin[^Neurotoxin] | [PUB] [PDF] [CODE] |
Resilient and Communication Efficient Learning for Heterogeneous Federated Systems | Michigan State University | ICML | 2022 | [PUB] | |
Minibatch vs Local SGD with Shuffling: Tight Convergence Bounds and Beyond | KAIST; MIT | ICLR (oral) | 2022 | [PUB] [PDF] [CODE] |
Bayesian Framework for Gradient Leakage | ETH Zurich | ICLR | 2022 | [PUB] [PDF] [CODE] | |
Federated Learning from only unlabeled data with class-conditional-sharing clients | The University of Tokyo; CUHK | ICLR | 2022 | FedUL[^FedUL] | [PUB] [CODE] |
FedChain: Chained Algorithms for Near-Optimal Communication Cost in Federated Learning | CMU; University of Illinois at Urbana-Champaign; University of Washington | ICLR | 2022 | FedChain[^FedChain] | [PUB] [PDF] |
Acceleration of Federated Learning with Alleviated Forgetting in Local Training | THU | ICLR | 2022 | FedReg[^FedReg] | [PUB] [PDF] [CODE] |
FedPara: Low-rank Hadamard Product for Communication-Efficient Federated Learning | POSTECH | ICLR | 2022 | [PUB] [PDF] [CODE] |
An Agnostic Approach to Federated Learning with Class Imbalance | University of Pennsylvania | ICLR | 2022 | [PUB] [CODE] | |
Efficient Split-Mix Federated Learning for On-Demand and In-Situ Customization | Michigan State University; The University of Texas at Austin | ICLR | 2022 | [PUB] [PDF] [CODE] | |
Robbing the Fed: Directly Obtaining Private Data in Federated Learning with Modified Models 🔥 | University of Maryland; NYU | ICLR | 2022 | [PUB] [PDF] [CODE] | |
ZeroFL: Efficient On-Device Training for Federated Learning with Local Sparsity | University of Cambridge; University of Oxford | ICLR | 2022 | [PUB] [PDF] | |
Diverse Client Selection for Federated Learning via Submodular Maximization | Intel; CMU | ICLR | 2022 | [PUB] [CODE] | |
Recycling Model Updates in Federated Learning: Are Gradient Subspaces Low-Rank? | Purdue | ICLR | 2022 | [PUB] [PDF] [CODE] | |
Diurnal or Nocturnal? Federated Learning of Multi-branch Networks from Periodically Shifting Distributions 🔥 | University of Maryland; Google | ICLR | 2022 | [PUB] [CODE] | |
Towards Model Agnostic Federated Learning Using Knowledge Distillation | EPFL | ICLR | 2022 | [PUB] [PDF] [CODE] | |
Divergence-aware Federated Self-Supervised Learning | NTU; SenseTime | ICLR | 2022 | [PUB] [PDF] [CODE] | |
What Do We Mean by Generalization in Federated Learning? 🔥 | Stanford; Google | ICLR | 2022 | [PUB] [PDF] [CODE] | |
FedBABU: Toward Enhanced Representation for Federated Image Classification | KAIST | ICLR | 2022 | [PUB] [PDF] [CODE] | |
Byzantine-Robust Learning on Heterogeneous Datasets via Bucketing | EPFL | ICLR | 2022 | [PUB] [PDF] [CODE] | |
Improving Federated Learning Face Recognition via Privacy-Agnostic Clusters | Aibee | ICLR Spotlight | 2022 | [PUB] [PDF] [PAGE] |
Hybrid Local SGD for Federated Learning with Heterogeneous Communications | University of Texas; Pennsylvania State University | ICLR | 2022 | [PUB] | |
On Bridging Generic and Personalized Federated Learning for Image Classification | The Ohio State University | ICLR | 2022 | Fed-RoD[^Fed-RoD] | [PUB] [PDF] [CODE] |
One-Shot Federated Learning: Theoretical Limits and Algorithms to Achieve Them. | JMLR | 2021 | [PUB] [CODE] | ||
Constrained differentially private federated learning for low-bandwidth devices | UAI | 2021 | [PUB] [PDF] | ||
Federated stochastic gradient Langevin dynamics | UAI | 2021 | [PUB] [PDF] | ||
Federated Learning Based on Dynamic Regularization | BU; ARM | ICLR | 2021 | [PUB] [PDF] [CODE] | |
Achieving Linear Speedup with Partial Worker Participation in Non-IID Federated Learning | The Ohio State University | ICLR | 2021 | [PUB] [PDF] | |
HeteroFL: Computation and Communication Efficient Federated Learning for Heterogeneous Clients | Duke University | ICLR | 2021 | HeteroFL[^HeteroFL] | [PUB] [PDF] [CODE] |
FedMix: Approximation of Mixup under Mean Augmented Federated Learning | KAIST | ICLR | 2021 | FedMix[^FedMix] | [PUB] [PDF] |
Federated Learning via Posterior Averaging: A New Perspective and Practical Algorithms 🔥 | CMU; Google | ICLR | 2021 | [PUB] [PDF] [CODE] | |
Adaptive Federated Optimization 🔥 | ICLR | 2021 | [PUB] [PDF] [CODE] | ||
Personalized Federated Learning with First Order Model Optimization | Stanford; NVIDIA | ICLR | 2021 | FedFomo[^FedFomo] | [PUB] [PDF] [CODE] [UC.] |
FedBN: Federated Learning on Non-IID Features via Local Batch Normalization 🔥 | Princeton | ICLR | 2021 | FedBN[^FedBN] | [PUB] [PDF] [CODE] |
FedBE: Making Bayesian Model Ensemble Applicable to Federated Learning | The Ohio State University | ICLR | 2021 | FedBE[^FedBE] | [PUB] [PDF] [CODE] |
Federated Semi-Supervised Learning with Inter-Client Consistency & Disjoint Learning | KAIST | ICLR | 2021 | [PUB] [PDF] [CODE] | |
KD3A: Unsupervised Multi-Source Decentralized Domain Adaptation via Knowledge Distillation | ZJU | ICML | 2021 | [PUB] [PDF] [CODE] |
Gradient Disaggregation: Breaking Privacy in Federated Learning by Reconstructing the User Participant Matrix | Harvard University | ICML | 2021 | [PUB] [PDF] [VIDEO] [CODE] | |
FL-NTK: A Neural Tangent Kernel-based Framework for Federated Learning Analysis | PKU; Princeton | ICML | 2021 | FL-NTK[^FL-NTK] | [PUB] [PDF] [VIDEO] |
Personalized Federated Learning using Hypernetworks 🔥 | Bar-Ilan University; NVIDIA | ICML | 2021 | [PUB] [PDF] [CODE] [PAGE] [VIDEO] |
Federated Composite Optimization | Stanford; Google | ICML | 2021 | [PUB] [PDF] [CODE] [VIDEO] [SLIDE] | |
Exploiting Shared Representations for Personalized Federated Learning | University of Texas at Austin; University of Pennsylvania | ICML | 2021 | [PUB] [PDF] [CODE] [VIDEO] | |
Data-Free Knowledge Distillation for Heterogeneous Federated Learning 🔥 | Michigan State University | ICML | 2021 | [PUB] [PDF] [CODE] [VIDEO] | |
Federated Continual Learning with Weighted Inter-client Transfer | KAIST | ICML | 2021 | [PUB] [PDF] [CODE] [VIDEO] | |
Federated Deep AUC Maximization for Heterogeneous Data with a Constant Communication Complexity | The University of Iowa | ICML | 2021 | [PUB] [PDF] [CODE] [VIDEO] |
Bias-Variance Reduced Local SGD for Less Heterogeneous Federated Learning | The University of Tokyo | ICML | 2021 | [PUB] [PDF] [VIDEO] | |
Federated Learning of User Verification Models Without Sharing Embeddings | Qualcomm | ICML | 2021 | [PUB] [PDF] [VIDEO] | |
Clustered Sampling: Low-Variance and Improved Representativity for Clients Selection in Federated Learning | Accenture | ICML | 2021 | [PUB] [PDF] [CODE] [VIDEO] | |
Ditto: Fair and Robust Federated Learning Through Personalization | CMU; Facebook AI | ICML | 2021 | [PUB] [PDF] [CODE] [VIDEO] | |
Heterogeneity for the Win: One-Shot Federated Clustering | CMU | ICML | 2021 | [PUB] [PDF] [VIDEO] | |
The Distributed Discrete Gaussian Mechanism for Federated Learning with Secure Aggregation 🔥 | ICML | 2021 | [PUB] [PDF] [CODE] [VIDEO] | ||
Debiasing Model Updates for Improving Personalized Federated Training | BU; Arm | ICML | 2021 | [PUB] [CODE] [VIDEO] | |
One for One, or All for All: Equilibria and Optimality of Collaboration in Federated Learning | Toyota; Berkeley; Cornell University | ICML | 2021 | [PUB] [PDF] [CODE] [VIDEO] | |
CRFL: Certifiably Robust Federated Learning against Backdoor Attacks | UIUC; IBM | ICML | 2021 | [PUB] [PDF] [CODE] [VIDEO] | |
Federated Learning under Arbitrary Communication Patterns | Indiana University; Amazon | ICML | 2021 | [PUB] [VIDEO] | |
CANITA: Faster Rates for Distributed Convex Optimization with Communication Compression | CMU | NeurIPS | 2021 | [PUB] [PDF] | |
Boosting with Multiple Sources | NeurIPS | 2021 | [PUB] | ||
DRIVE: One-bit Distributed Mean Estimation | VMware | NeurIPS | 2021 | [PUB] [CODE] | |
Gradient Driven Rewards to Guarantee Fairness in Collaborative Machine Learning | NUS | NeurIPS | 2021 | [PUB] [CODE] | |
Gradient Inversion with Generative Image Prior | POSTECH | NeurIPS | 2021 | [PUB] [PDF] [CODE] | |
Distributed Machine Learning with Sparse Heterogeneous Data | University of Oxford | NeurIPS | 2021 | [PUB] [PDF] | |
Renyi Differential Privacy of The Subsampled Shuffle Model In Distributed Learning | UCLA | NeurIPS | 2021 | [PUB] [PDF] | |
Sageflow: Robust Federated Learning against Both Stragglers and Adversaries | KAIST | NeurIPS | 2021 | Sageflow[^Sageflow] | [PUB] |
CAFE: Catastrophic Data Leakage in Vertical Federated Learning | Rensselaer Polytechnic Institute; IBM Research | NeurIPS | 2021 | CAFE[^CAFE] | [PUB] [CODE] |
Fault-Tolerant Federated Reinforcement Learning with Theoretical Guarantee | NUS | NeurIPS | 2021 | [PUB] [PDF] [CODE] | |
Optimality and Stability in Federated Learning: A Game-theoretic Approach | Cornell University | NeurIPS | 2021 | [PUB] [PDF] [CODE] | |
QuPeD: Quantized Personalization via Distillation with Applications to Federated Learning | UCLA | NeurIPS | 2021 | QuPeD[^QuPeD] | [PUB] [PDF] [CODE] |
The Skellam Mechanism for Differentially Private Federated Learning 🔥 | Google Research; CMU | NeurIPS | 2021 | [PUB] [PDF] [CODE] | |
No Fear of Heterogeneity: Classifier Calibration for Federated Learning with Non-IID Data | NUS; Huawei | NeurIPS | 2021 | [PUB] [PDF] | |
STEM: A Stochastic Two-Sided Momentum Algorithm Achieving Near-Optimal Sample and Communication Complexities for Federated Learning | UMN | NeurIPS | 2021 | [PUB] [PDF] | |
Subgraph Federated Learning with Missing Neighbor Generation | Emory; UBC; Lehigh University | NeurIPS | 2021 | FedSage[^FedSage] | [PUB] [PDF] [CODE] |
Evaluating Gradient Inversion Attacks and Defenses in Federated Learning 🔥 | Princeton | NeurIPS | 2021 | GradAttack[^GradAttack] | [PUB] [PDF] [CODE] |
Personalized Federated Learning With Gaussian Processes | Bar-Ilan University | NeurIPS | 2021 | [PUB] [PDF] [CODE] | |
Differentially Private Federated Bayesian Optimization with Distributed Exploration | MIT; NUS | NeurIPS | 2021 | [PUB] [PDF] [CODE] | |
Parameterized Knowledge Transfer for Personalized Federated Learning | PolyU | NeurIPS | 2021 | KT-pFL[^KT-pFL] | [PUB] [PDF] [CODE] |
Federated Reconstruction: Partially Local Federated Learning 🔥 | Google Research | NeurIPS | 2021 | [PUB] [PDF] [CODE] [UC.] | |
Fast Federated Learning in the Presence of Arbitrary Device Unavailability | THU; Princeton; MIT | NeurIPS | 2021 | [PUB] [PDF] [CODE] | |
FL-WBC: Enhancing Robustness against Model Poisoning Attacks in Federated Learning from a Client Perspective | Duke University; Accenture Labs | NeurIPS | 2021 | FL-WBC[^FL-WBC] | [PUB] [PDF] [CODE] |
FjORD: Fair and Accurate Federated Learning under heterogeneous targets with Ordered Dropout | KAUST; Samsung AI Center | NeurIPS | 2021 | FjORD[^FjORD] | [PUB] [PDF] |
Linear Convergence in Federated Learning: Tackling Client Heterogeneity and Sparse Gradients | University of Pennsylvania | NeurIPS | 2021 | [PUB] [PDF] [VIDEO] | |
Federated Multi-Task Learning under a Mixture of Distributions | INRIA; Accenture Labs | NeurIPS | 2021 | [PUB] [PDF] [CODE] | |
Federated Graph Classification over Non-IID Graphs | Emory | NeurIPS | 2021 | GCFL[^GCFL] | [PUB] [PDF] [CODE] |
Federated Hyperparameter Tuning: Challenges, Baselines, and Connections to Weight-Sharing | CMU; Hewlett Packard Enterprise | NeurIPS | 2021 | FedEx[^FedEx] | [PUB] [PDF] [CODE] |
On Large-Cohort Training for Federated Learning 🔥 | Google; CMU | NeurIPS | 2021 | Large-Cohort[^Large-Cohort] | [PUB] [PDF] [CODE] |
DeepReduce: A Sparse-tensor Communication Framework for Federated Deep Learning | KAUST; Columbia University; University of Central Florida | NeurIPS | 2021 | DeepReduce[^DeepReduce] | [PUB] [PDF] [CODE] |
PartialFed: Cross-Domain Personalized Federated Learning via Partial Initialization | Huawei | NeurIPS | 2021 | PartialFed[^PartialFed] | [PUB] [VIDEO] |
Federated Split Task-Agnostic Vision Transformer for COVID-19 CXR Diagnosis | KAIST | NeurIPS | 2021 | [PUB] [PDF] | |
Addressing Algorithmic Disparity and Performance Inconsistency in Federated Learning | THU; Alibaba; Weill Cornell Medicine | NeurIPS | 2021 | FCFL[^FCFL] | [PUB] [PDF] [CODE] |
Federated Linear Contextual Bandits | The Pennsylvania State University; Facebook; University of Virginia | NeurIPS | 2021 | [PUB] [PDF] [CODE] | |
Few-Round Learning for Federated Learning | KAIST | NeurIPS | 2021 | [PUB] | |
Breaking the centralized barrier for cross-device federated learning | EPFL; Google Research | NeurIPS | 2021 | [PUB] [CODE] [VIDEO] | |
Federated-EM with heterogeneity mitigation and variance reduction | École Polytechnique; Google Research | NeurIPS | 2021 | Federated-EM[^Federated-EM] | [PUB] [PDF] |
Delayed Gradient Averaging: Tolerate the Communication Latency for Federated Learning | MIT; Amazon; Google | NeurIPS | 2021 | [PUB] [PAGE] [SLIDE] | |
FedDR - Randomized Douglas-Rachford Splitting Algorithms for Nonconvex Federated Composite Optimization | University of North Carolina at Chapel Hill; IBM Research | NeurIPS | 2021 | FedDR[^FedDR] | [PUB] [PDF] [CODE] |
Federated Adversarial Domain Adaptation | BU; Columbia University; Rutgers University | ICLR | 2020 | [PUB] [PDF] [CODE] | |
DBA: Distributed Backdoor Attacks against Federated Learning | ZJU; IBM Research | ICLR | 2020 | [PUB] [CODE] | |
Fair Resource Allocation in Federated Learning 🔥 | CMU; Facebook AI | ICLR | 2020 | fair-flearn[^fair-flearn] | [PUB] [PDF] [CODE] |
Federated Learning with Matched Averaging 🔥 | University of Wisconsin-Madison; IBM Research | ICLR | 2020 | FedMA[^FedMA] | [PUB] [PDF] [CODE] |
Differentially Private Meta-Learning | CMU | ICLR | 2020 | [PUB] [PDF] | |
Generative Models for Effective ML on Private, Decentralized Datasets 🔥 | ICLR | 2020 | [PUB] [PDF] [CODE] | ||
On the Convergence of FedAvg on Non-IID Data 🔥 | PKU | ICLR | 2020 | FedAvg analysis (see the sketch after this table) | [PUB] [PDF] [CODE] |
FedBoost: A Communication-Efficient Algorithm for Federated Learning | ICML | 2020 | FedBoost[^FedBoost] | [PUB] [VIDEO] | |
FetchSGD: Communication-Efficient Federated Learning with Sketching | UC Berkeley; Johns Hopkins University; Amazon | ICML | 2020 | FetchSGD[^FetchSGD] | [PUB] [PDF] [VIDEO] [CODE] |
SCAFFOLD: Stochastic Controlled Averaging for Federated Learning | EPFL; Google | ICML | 2020 | SCAFFOLD[^SCAFFOLD] | [PUB] [PDF] [VIDEO] [UC.] |
Federated Learning with Only Positive Labels | ICML | 2020 | [PUB] [PDF] [VIDEO] | ||
From Local SGD to Local Fixed-Point Methods for Federated Learning | Moscow Institute of Physics and Technology; KAUST | ICML | 2020 | [PUB] [PDF] [SLIDE] [VIDEO] | |
Acceleration for Compressed Gradient Descent in Distributed and Federated Optimization | KAUST | ICML | 2020 | [PUB] [PDF] [SLIDE] [VIDEO] | |
Differentially-Private Federated Linear Bandits | MIT | NeurIPS | 2020 | [PUB] [PDF] [CODE] | |
Federated Principal Component Analysis | University of Cambridge; Quine Technologies | NeurIPS | 2020 | [PUB] [PDF] [CODE] | |
FedSplit: an algorithmic framework for fast federated optimization | UC Berkeley | NeurIPS | 2020 | FedSplit[^FedSplit] | [PUB] [PDF] |
Federated Bayesian Optimization via Thompson Sampling | NUS; MIT | NeurIPS | 2020 | fbo[^fbo] | [PUB] [PDF] [CODE] |
Lower Bounds and Optimal Algorithms for Personalized Federated Learning | KAUST | NeurIPS | 2020 | [PUB] [PDF] | |
Robust Federated Learning: The Case of Affine Distribution Shifts | UC Santa Barbara; MIT | NeurIPS | 2020 | RobustFL[^RobustFL] | [PUB] [PDF] [CODE] |
An Efficient Framework for Clustered Federated Learning | UC Berkeley; DeepMind | NeurIPS | 2020 | ifca[^ifca] | [PUB] [PDF] [CODE] |
Distributionally Robust Federated Averaging 🔥 | Pennsylvania State University | NeurIPS | 2020 | DRFA[^DRFA] | [PUB] [PDF] [CODE] |
Personalized Federated Learning with Moreau Envelopes 🔥 | The University of Sydney | NeurIPS | 2020 | [PUB] [PDF] [CODE] | |
Personalized Federated Learning with Theoretical Guarantees: A Model-Agnostic Meta-Learning Approach | MIT; UT Austin | NeurIPS | 2020 | Per-FedAvg[^Per-FedAvg] | [PUB] [PDF] [UC.] |
Group Knowledge Transfer: Federated Learning of Large CNNs at the Edge | USC | NeurIPS | 2020 | FedGKT[^FedGKT] | [PUB] [PDF] [CODE] |
Tackling the Objective Inconsistency Problem in Heterogeneous Federated Optimization 🔥 | CMU; Princeton | NeurIPS | 2020 | FedNova[^FedNova] | [PUB] [PDF] [CODE] [UC.] |
Attack of the Tails: Yes, You Really Can Backdoor Federated Learning | University of Wisconsin-Madison | NeurIPS | 2020 | [PUB] [PDF] | |
Federated Accelerated Stochastic Gradient Descent | Stanford | NeurIPS | 2020 | FedAc[^FedAc] | [PUB] [PDF] [CODE] [VIDEO] |
Inverting Gradients - How easy is it to break privacy in federated learning? 🔥 | University of Siegen | NeurIPS | 2020 | [PUB] [PDF] [CODE] | |
Ensemble Distillation for Robust Model Fusion in Federated Learning | EPFL | NeurIPS | 2020 | FedDF[^FedDF] | [PUB] [PDF] [CODE] |
Throughput-Optimal Topology Design for Cross-Silo Federated Learning | INRIA | NeurIPS | 2020 | [PUB] [PDF] [CODE] | |
Bayesian Nonparametric Federated Learning of Neural Networks 🔥 | IBM | ICML | 2019 | [PUB] [PDF] [CODE] | |
Analyzing Federated Learning through an Adversarial Lens 🔥 | Princeton; IBM | ICML | 2019 | [PUB] [PDF] [CODE] | |
Agnostic Federated Learning | ICML | 2019 | [PUB] [PDF] | ||
cpSGD: Communication-efficient and differentially-private distributed SGD | Princeton; Google | NeurIPS | 2018 | [PUB] [PDF] | |
Federated Multi-Task Learning 🔥 | Stanford; USC; CMU | NeurIPS | 2017 | [PUB] [PDF] [CODE] |
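Many of the optimization papers above (e.g., *On the Convergence of FedAvg on Non-IID Data*, SCAFFOLD, FedNova) analyze or correct the same basic FedAvg loop: broadcast the global model, run a few local SGD steps on each client, then average the returned models weighted by local dataset size. As a reading aid, here is a minimal, self-contained sketch of that loop on toy non-IID linear-regression clients; the function names and toy data are our own illustration, not code from any listed paper.

```python
import numpy as np

def local_update(weights, data, lr=0.01, epochs=1):
    """Toy local training: full-batch gradient steps on a linear model."""
    X, y = data
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # MSE gradient
        w -= lr * grad
    return w

def fedavg_round(global_w, client_data):
    """One FedAvg round: broadcast, local train, size-weighted average."""
    sizes, locals_ = [], []
    for data in client_data:
        locals_.append(local_update(global_w, data))
        sizes.append(len(data[1]))
    total = sum(sizes)
    return sum((n / total) * w for n, w in zip(sizes, locals_))

# toy simulation: 3 clients with heterogeneous (non-IID) linear data
rng = np.random.default_rng(0)
clients = []
for shift in (0.5, 1.0, 2.0):  # per-client drift in the data distribution
    X = rng.normal(size=(32, 2))
    y = (X @ np.array([1.0, -1.0])) * shift + 0.1 * rng.normal(size=32)
    clients.append((X, y))

w = np.zeros(2)
for _ in range(20):
    w = fedavg_round(w, clients)
print("final global weights:", w)
```

The per-client `shift` factors above create exactly the client drift that methods such as SCAFFOLD (control variates) and FedNova (update normalization) are designed to counteract.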
In this section, we summarize Federated Learning papers accepted by top DM (Data Mining) conferences and journals, including KDD (ACM SIGKDD Conference on Knowledge Discovery and Data Mining) and WSDM (ACM International Conference on Web Search and Data Mining).
Title | Affiliation | Venue | Year | TL;DR | Materials |
---|---|---|---|---|---|
Federated Unlearning for On-Device Recommendation | UQ | WSDM | 2023 | [PUB] [PDF] | |
Collaboration Equilibrium in Federated Learning | THU | KDD | 2022 | CE[^CE] | [PUB] [PDF] [CODE] |
Connected Low-Loss Subspace Learning for a Personalization in Federated Learning | Ulsan National Institute of Science and Technology | KDD | 2022 | SuPerFed[^SuPerFed] | [PUB] [PDF] [CODE] |
FedMSplit: Correlation-Adaptive Federated Multi-Task Learning across Multimodal Split Networks | University of Virginia | KDD | 2022 | FedMSplit[^FedMSplit] | [PUB] |
Communication-Efficient Robust Federated Learning with Noisy Labels | University of Pittsburgh | KDD | 2022 | Comm-FedBiO[^Comm-FedBiO] | [PUB] [PDF] |
FLDetector: Detecting Malicious Clients in Federated Learning via Checking Model-Updates Consistency | USTC | KDD | 2022 | FLDetector[^FLDetector] | [PUB] [PDF] [CODE] |
Practical Lossless Federated Singular Vector Decomposition Over Billion-Scale Data | HKUST | KDD | 2022 | FedSVD[^FedSVD] | [PUB] [PDF] [CODE] |
FedWalk: Communication Efficient Federated Unsupervised Node Embedding with Differential Privacy | SJTU | KDD | 2022 | FedWalk[^FedWalk] | [PUB] [PDF] |
FederatedScope-GNN: Towards a Unified, Comprehensive and Efficient Platform for Federated Graph Learning 🔥 | Alibaba | KDD (Best Paper Award) | 2022 | FederatedScope-GNN[^FederatedScope-GNN] | [PUB] [PDF] [CODE] |
Fed-LTD: Towards Cross-Platform Ride Hailing via Federated Learning to Dispatch | BUAA | KDD | 2022 | Fed-LTD[^Fed-LTD] | [PUB] [PDF] |
Felicitas: Federated Learning in Distributed Cross Device Collaborative Frameworks | USTC | KDD | 2022 | Felicitas[^Felicitas] | [PUB] [PDF] |
No One Left Behind: Inclusive Federated Learning over Heterogeneous Devices | Renmin University of China | KDD | 2022 | InclusiveFL[^InclusiveFL] | [PUB] [PDF] |
FedAttack: Effective and Covert Poisoning Attack on Federated Recommendation via Hard Sampling | THU | KDD | 2022 | FedAttack[^FedAttack] | [PUB] [PDF] [CODE] |
PipAttack: Poisoning Federated Recommender Systems for Manipulating Item Promotion | The University of Queensland | WSDM | 2022 | PipAttack[^PipAttack] | [PUB] [PDF] |
Fed2: Feature-Aligned Federated Learning | George Mason University; Microsoft; University of Maryland | KDD | 2021 | Fed2[^Fed2] | [PUB] [PDF] |
FedRS: Federated Learning with Restricted Softmax for Label Distribution Non-IID Data | Nanjing University | KDD | 2021 | FedRS[^FedRS] | [PUB] [CODE] |
Federated Adversarial Debiasing for Fair and Transferable Representations | Michigan State University | KDD | 2021 | FADE[^FADE] | [PUB] [PAGE] [CODE] [SLIDE] |
Cross-Node Federated Graph Neural Network for Spatio-Temporal Data Modeling | USC | KDD | 2021 | CNFGNN[^CNFGNN] | [PUB] [CODE] |
AsySQN: Faster Vertical Federated Learning Algorithms with Better Computation Resource Utilization | Xidian University; JD Tech | KDD | 2021 | AsySQN[^AsySQN] | [PUB] [PDF] |
FLOP: Federated Learning on Medical Datasets using Partial Networks | Duke University | KDD | 2021 | FLOP[^FLOP] | [PUB] [PDF] [CODE] |
A Practical Federated Learning Framework for Small Number of Stakeholders | ETH Zürich | WSDM | 2021 | Federated-Learning-source[^Federated-Learning-source] | [PUB] [CODE] |
Federated Deep Knowledge Tracing | USTC | WSDM | 2021 | FDKT[^FDKT] | [PUB] [CODE] |
FedFast: Going Beyond Average for Faster Training of Federated Recommender Systems | University College Dublin | KDD | 2020 | FedFast[^FedFast] | [PUB] [VIDEO] |
Federated Doubly Stochastic Kernel Learning for Vertically Partitioned Data | JD Tech | KDD | 2020 | FDSKL[^FDSKL] | [PUB] [PDF] [VIDEO] |
Federated Online Learning to Rank with Evolution Strategies | Facebook AI Research | WSDM | 2019 | FOLtR-ES[^FOLtR-ES] | [PUB] [CODE] |
In this section, we summarize Federated Learning papers accepted by top security conferences and journals, including S&P (IEEE Symposium on Security and Privacy), CCS (Conference on Computer and Communications Security), USENIX Security (USENIX Security Symposium), and NDSS (Network and Distributed System Security Symposium).
Title | Affiliation | Venue | Year | TL;DR | Materials |
---|---|---|---|---|---|
Securing Federated Sensitive Topic Classification against Poisoning Attacks | IMDEA Networks Institute | NDSS | 2023 | [PUB] [PDF] [CODE] | |
PPA: Preference Profiling Attack Against Federated Learning | NJUST | NDSS | 2023 | [PUB] [PDF] | |
CERBERUS: Exploring Federated Prediction of Security Events | UCL London | CCS | 2022 | [PUB] [PDF] | |
EIFFeL: Ensuring Integrity for Federated Learning | UW-Madison | CCS | 2022 | [PUB] [PDF] | |
Eluding Secure Aggregation in Federated Learning via Model Inconsistency | SPRING Lab; EPFL | CCS | 2022 | [PUB] [PDF] [CODE] | |
Federated Boosted Decision Trees with Differential Privacy | University of Warwick | CCS | 2022 | [PUB] [PDF] [CODE] | |
FedRecover: Recovering from Poisoning Attacks in Federated Learning using Historical Information | Duke University | S&P | 2023 | FedRecover[^FedRecover] | [PUB] [PDF] |
Private, Efficient, and Accurate: Protecting Models Trained by Multi-party Learning with Differential Privacy | Fudan University | S&P | 2023 | PEA[^PEA] | [PUB] [PDF] |
Back to the Drawing Board: A Critical Evaluation of Poisoning Attacks on Production Federated Learning | University of Massachusetts | S&P | 2022 | [PUB] [VIDEO] | |
SIMC: ML Inference Secure Against Malicious Clients at Semi-Honest Cost | Microsoft Research | USENIX Security | 2022 | SIMC[^SIMC] | [PUB] [PDF] [CODE] [VIDEO] [SUPP] |
Efficient Differentially Private Secure Aggregation for Federated Learning via Hardness of Learning with Errors | University of Vermont | USENIX Security | 2022 | [PUB] [SLIDE] [VIDEO] | |
Label Inference Attacks Against Vertical Federated Learning | ZJU | USENIX Security | 2022 | [PUB] [SLIDE] [CODE] [VIDEO] | |
FLAME: Taming Backdoors in Federated Learning | Technical University of Darmstadt | USENIX Security | 2022 | FLAME[^FLAME] | [PUB] [SLIDE] [PDF] [VIDEO] |
Local and Central Differential Privacy for Robustness and Privacy in Federated Learning | University at Buffalo, SUNY | NDSS | 2022 | [PUB] [PDF] [VIDEO] [UC.] | |
Interpretable Federated Transformer Log Learning for Cloud Threat Forensics | University of the Incarnate Word | NDSS | 2022 | [PUB] [VIDEO] [UC.] | |
FedCRI: Federated Mobile Cyber-Risk Intelligence | Technical University of Darmstadt | NDSS | 2022 | FedCRI[^FedCRI] | [PUB] [VIDEO] |
DeepSight: Mitigating Backdoor Attacks in Federated Learning Through Deep Model Inspection | Technical University of Darmstadt | NDSS | 2022 | DeepSight[^DeepSight] | [PUB] [PDF] [VIDEO] |
Private Hierarchical Clustering in Federated Networks | NUS | CCS | 2021 | [PUB] [PDF] | |
FLTrust: Byzantine-robust Federated Learning via Trust Bootstrapping | Duke University | NDSS | 2021 | [PUB] [PDF] [CODE] [VIDEO] [SLIDE] | |
POSEIDON: Privacy-Preserving Federated Neural Network Learning | EPFL | NDSS | 2021 | [PUB] [VIDEO] | |
Manipulating the Byzantine: Optimizing Model Poisoning Attacks and Defenses for Federated Learning | University of Massachusetts Amherst | NDSS | 2021 | [PUB] [CODE] [VIDEO] | |
Local Model Poisoning Attacks to Byzantine-Robust Federated Learning | The Ohio State University | USENIX Security | 2020 | [PUB] [PDF] [CODE] [VIDEO] [SLIDE] | |
A Reliable and Accountable Privacy-Preserving Federated Learning Framework using the Blockchain | University of Kansas | CCS (Poster) | 2019 | [PUB] | |
IOTFLA : A Secured and Privacy-Preserving Smart Home Architecture Implementing Federated Learning | Université du Québec à Montréal | S&P Workshop | 2019 | [PUB] |
Comprehensive Privacy Analysis of Deep Learning: Passive and Active White-box Inference Attacks against Centralized and Federated Learning 🔥 | University of Massachusetts Amherst | S&P | 2019 | [PUB] [VIDEO] [SLIDE] [CODE] | |
Practical Secure Aggregation for Privacy Preserving Machine Learning | | CCS | 2017 | SecAgg (see the masking sketch after this table) | [PUB] [PDF] [UC.] [UC] |
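The last row above, *Practical Secure Aggregation for Privacy Preserving Machine Learning*, is the protocol many of the attacks and defenses in this table target or build on. Its core idea is pairwise additive masking: each pair of clients derives a shared random mask that one adds and the other subtracts, so every individual upload looks random to the server while the masks cancel in the sum. The toy sketch below shows only that cancellation step, assuming pairwise seeds are already shared; the real protocol additionally needs key agreement, secret sharing, and dropout recovery, and all names here are our own.

```python
import numpy as np

DIM, MOD = 4, 2**31 - 1  # toy vector size and modulus for masking

def pairwise_masks(n_clients, seeds):
    """Derive cancelling pairwise masks: client i adds m_ij and client j
    subtracts it, so all masks vanish when the server sums the uploads."""
    masks = [np.zeros(DIM, dtype=np.int64) for _ in range(n_clients)]
    for i in range(n_clients):
        for j in range(i + 1, n_clients):
            rng = np.random.default_rng(seeds[i][j])  # seed shared by pair (i, j)
            m = rng.integers(0, MOD, size=DIM)
            masks[i] = (masks[i] + m) % MOD
            masks[j] = (masks[j] - m) % MOD
    return masks

# toy integer (fixed-point) client updates
updates = [np.array([1, 2, 3, 4]), np.array([10, 0, 5, 1]), np.array([7, 7, 7, 7])]
n = len(updates)
seeds = [[100 * i + j for j in range(n)] for i in range(n)]  # stand-in for key agreement

masked = [(u + m) % MOD for u, m in zip(updates, pairwise_masks(n, seeds))]
aggregate = sum(masked) % MOD  # masks cancel; only the sum is revealed
assert np.array_equal(aggregate, sum(updates) % MOD)
print("aggregate:", aggregate)
```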
In this section, we summarize Federated Learning papers accepted by top CV (computer vision) conferences and journals, including CVPR (IEEE/CVF Conference on Computer Vision and Pattern Recognition), ICCV (IEEE International Conference on Computer Vision), ECCV (European Conference on Computer Vision), MM (ACM International Conference on Multimedia), and IJCV (International Journal of Computer Vision).
Title | Affiliation | Venue | Year | TL;DR | Materials |
---|---|---|---|---|---|
Rethinking Federated Learning With Domain Shift: A Prototype View | WHU | CVPR | 2023 | [PUB] [CODE] | |
Class Balanced Adaptive Pseudo Labeling for Federated Semi-Supervised Learning | ECNU | CVPR | 2023 | [PUB] [CODE] | |
DaFKD: Domain-Aware Federated Knowledge Distillation | HUST | CVPR | 2023 | [PUB] [CODE] | |
The Resource Problem of Using Linear Layer Leakage Attack in Federated Learning | Purdue University | CVPR | 2023 | [PUB] [PDF] | |
FedSeg: Class-Heterogeneous Federated Learning for Semantic Segmentation | ZJU | CVPR | 2023 | [PUB] | |
On the Effectiveness of Partial Variance Reduction in Federated Learning With Heterogeneous Data | DTU | CVPR | 2023 | [PUB] [PDF] | |
Elastic Aggregation for Federated Optimization | Meituan | CVPR | 2023 | [PUB] | |
FedDM: Iterative Distribution Matching for Communication-Efficient Federated Learning | UCLA | CVPR | 2023 | [PUB] [PDF] | |
Adaptive Channel Sparsity for Federated Learning Under System Heterogeneity | UM | CVPR | 2023 | [PUB] | |
ScaleFL: Resource-Adaptive Federated Learning With Heterogeneous Clients | GaTech | CVPR | 2023 | [PUB] [CODE] | |
Reliable and Interpretable Personalized Federated Learning | TJU | CVPR | 2023 | [PUB] | |
Federated Domain Generalization With Generalization Adjustment | SJTU | CVPR | 2023 | [PUB] [CODE] | |
Make Landscape Flatter in Differentially Private Federated Learning | THU | CVPR | 2023 | [PUB] [PDF] [CODE] | |
Confidence-Aware Personalized Federated Learning via Variational Expectation Maximization | KU Leuven | CVPR | 2023 | [PUB] [PDF] [CODE] | |
STDLens: Model Hijacking-Resilient Federated Learning for Object Detection | GaTech | CVPR | 2023 | [PUB] [PDF] [CODE] | |
Re-Thinking Federated Active Learning Based on Inter-Class Diversity | KAIST | CVPR | 2023 | [PUB] [PDF] [CODE] | |
Learning Federated Visual Prompt in Null Space for MRI Reconstruction | A*STAR | CVPR | 2023 | [PUB] [PDF] [CODE] | |
Fair Federated Medical Image Segmentation via Client Contribution Estimation | CUHK | CVPR | 2023 | [PUB] [PDF] [CODE] | |
Federated Learning With Data-Agnostic Distribution Fusion | NJU | CVPR | 2023 | [PUB] [CODE] | |
How To Prevent the Poor Performance Clients for Personalized Federated Learning? | CSU | CVPR | 2023 | [PUB] | |
GradMA: A Gradient-Memory-Based Accelerated Federated Learning With Alleviated Catastrophic Forgetting | ECNU | CVPR | 2023 | [PUB] [PDF] [CODE] | |
Bias-Eliminating Augmentation Learning for Debiased Federated Learning | NTU | CVPR | 2023 | [PUB] | |
Federated Incremental Semantic Segmentation | CAS; UCAS | CVPR | 2023 | [PUB] [PDF] [CODE] | |
Confederated Learning: Going Beyond Centralization | CAS; UCAS | MM | 2022 | [PUB] | |
Few-Shot Model Agnostic Federated Learning | WHU | MM | 2022 | FSMAFL[^FSMAFL] | [PUB] [CODE] |
Feeling Without Sharing: A Federated Video Emotion Recognition Framework Via Privacy-Agnostic Hybrid Aggregation | TJUT | MM | 2022 | EmoFed[^EmoFed] | [PUB] |
FedLTN: Federated Learning for Sparse and Personalized Lottery Ticket Networks | ECCV | 2022 | [PUB] [SUPP] | ||
Auto-FedRL: Federated Hyperparameter Optimization for Multi-Institutional Medical Image Segmentation | ECCV | 2022 | [PUB] [SUPP] [PDF] [CODE] | ||
Improving Generalization in Federated Learning by Seeking Flat Minima | Politecnico di Torino | ECCV | 2022 | FedSAM[^FedSAM] | [PUB] [SUPP] [PDF] [CODE] |
AdaBest: Minimizing Client Drift in Federated Learning via Adaptive Bias Estimation | ECCV | 2022 | [PUB] [SUPP] [PDF] [CODE] [PAGE] | ||
SphereFed: Hyperspherical Federated Learning | ECCV | 2022 | [PUB] [SUPP] [PDF] | ||
Federated Self-Supervised Learning for Video Understanding | ECCV | 2022 | [PUB] [PDF] [CODE] | ||
FedVLN: Privacy-Preserving Federated Vision-and-Language Navigation | ECCV | 2022 | [PUB] [SUPP] [PDF] [CODE] | ||
Addressing Heterogeneity in Federated Learning via Distributional Transformation | ECCV | 2022 | [PUB] [CODE] | ||
FedX: Unsupervised Federated Learning with Cross Knowledge Distillation | KAIST | ECCV | 2022 | FedX[^FedX] | [PUB] [SUPP] [PDF] [CODE] |
Personalizing Federated Medical Image Segmentation via Local Calibration | Xiamen University | ECCV | 2022 | LC-Fed[^LC-Fed] | [PUB] [SUPP] [PDF] [CODE] |
ATPFL: Automatic Trajectory Prediction Model Design Under Federated Learning Framework | HIT | CVPR | 2022 | ATPFL[^ATPFL] | [PUB] |
Rethinking Architecture Design for Tackling Data Heterogeneity in Federated Learning | Stanford | CVPR | 2022 | ViT-FL[^ViT-FL] | [PUB] [SUPP] [PDF] [CODE] [VIDEO] |
FedCorr: Multi-Stage Federated Learning for Label Noise Correction | Singapore University of Technology and Design | CVPR | 2022 | FedCorr[^FedCorr] | [PUB] [SUPP] [PDF] [CODE] [VIDEO] |
FedCor: Correlation-Based Active Client Selection Strategy for Heterogeneous Federated Learning | Duke University | CVPR | 2022 | FedCor[^FedCor] | [PUB] [SUPP] [PDF] |
Layer-Wised Model Aggregation for Personalized Federated Learning | PolyU | CVPR | 2022 | pFedLA[^pFedLA] | [PUB] [SUPP] [PDF] |
Local Learning Matters: Rethinking Data Heterogeneity in Federated Learning | University of Central Florida | CVPR | 2022 | FedAlign[^FedAlign] | [PUB] [SUPP] [PDF] [CODE] |
Federated Learning With Position-Aware Neurons | Nanjing University | CVPR | 2022 | PANs[^PANs] | [PUB] [SUPP] [PDF] |
RSCFed: Random Sampling Consensus Federated Semi-Supervised Learning | HKUST | CVPR | 2022 | RSCFed[^RSCFed] | [PUB] [SUPP] [PDF] [CODE] |
Learn From Others and Be Yourself in Heterogeneous Federated Learning | Wuhan University | CVPR | 2022 | FCCL[^FCCL] | [PUB] [CODE] [VIDEO] |
Robust Federated Learning With Noisy and Heterogeneous Clients | Wuhan University | CVPR | 2022 | RHFL[^RHFL] | [PUB] [SUPP] [CODE] |
ResSFL: A Resistance Transfer Framework for Defending Model Inversion Attack in Split Federated Learning | Arizona State University | CVPR | 2022 | ResSFL[^ResSFL] | [PUB] [SUPP] [PDF] [CODE] |
FedDC: Federated Learning With Non-IID Data via Local Drift Decoupling and Correction | National University of Defense Technology | CVPR | 2022 | FedDC[^FedDC] | [PUB] [PDF] [CODE] |
Federated Class-Incremental Learning | CAS; Northwestern University; UTS | CVPR | 2022 | GLFC[^GLFC] | [PUB] [PDF] [CODE] |
Fine-Tuning Global Model via Data-Free Knowledge Distillation for Non-IID Federated Learning | PKU; JD Explore Academy; The University of Sydney | CVPR | 2022 | FedFTG[^FedFTG] | [PUB] [PDF] |
Differentially Private Federated Learning With Local Regularization and Sparsification | CAS | CVPR | 2022 | DP-FedAvg+BLUR+LUS[^DP-FedAvgplusBLURplusLUS] | [PUB] [PDF] |
Auditing Privacy Defenses in Federated Learning via Generative Gradient Leakage | University of Tennessee; Oak Ridge National Laboratory; Google Research | CVPR | 2022 | GGL[^GGL] | [PUB] [PDF] [CODE] [VIDEO] |
CD2-pFed: Cyclic Distillation-Guided Channel Decoupling for Model Personalization in Federated Learning | SJTU | CVPR | 2022 | CD2-pFed[^CD2-pFed] | [PUB] [PDF] |
Closing the Generalization Gap of Cross-Silo Federated Medical Image Segmentation | Univ. of Pittsburgh; NVIDIA | CVPR | 2022 | FedSM[^FedSM] | [PUB] [PDF] |
Multi-Institutional Collaborations for Improving Deep Learning-Based Magnetic Resonance Image Reconstruction Using Federated Learning | Johns Hopkins University | CVPR | 2021 | FL-MRCM[^FL-MRCM] | [PUB] [PDF] [CODE] |
Model-Contrastive Federated Learning 🔥 | NUS; UC Berkeley | CVPR | 2021 | MOON[^MOON] (see the loss sketch after this table) | [PUB] [PDF] [CODE] |
FedDG: Federated Domain Generalization on Medical Image Segmentation via Episodic Learning in Continuous Frequency Space 🔥 | CUHK | CVPR | 2021 | FedDG-ELCFS[^FedDG-ELCFS] | [PUB] [PDF] [CODE] |
Soteria: Provable Defense Against Privacy Leakage in Federated Learning From Representation Perspective | Duke University | CVPR | 2021 | Soteria[^Soteria] | [PUB] [PDF] [CODE] |
Federated Learning for Non-IID Data via Unified Feature Learning and Optimization Objective Alignment | PKU | ICCV | 2021 | FedUFO[^FedUFO] | [PUB] |
Ensemble Attention Distillation for Privacy-Preserving Federated Learning | University at Buffalo | ICCV | 2021 | FedAD[^FedAD] | [PUB] [PDF] |
Collaborative Unsupervised Visual Representation Learning from Decentralized Data | NTU; SenseTime | ICCV | 2021 | FedU[^FedU] | [PUB] [PDF] |
Joint Optimization in Edge-Cloud Continuum for Federated Unsupervised Person Re-identification | NTU | MM | 2021 | FedUReID[^FedUReID] | [PUB] [PDF] |
Federated Visual Classification with Real-World Data Distribution | MIT; Google | ECCV | 2020 | FedVC+FedIR[^FedVCplusFedIR] | [PUB] [PDF] [VIDEO] |
InvisibleFL: Federated Learning over Non-Informative Intermediate Updates against Multimedia Privacy Leakages | MM | 2020 | InvisibleFL[^InvisibleFL] | [PUB] | |
Performance Optimization of Federated Person Re-identification via Benchmark Analysis `data.` | NTU | MM | 2020 | FedReID[^FedReID] | [PUB] [PDF] [CODE] |
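MOON (*Model-Contrastive Federated Learning*, CVPR 2021 above) addresses non-IID drift at the representation level: for each input, the local model's feature is pulled toward the current global model's feature and pushed away from the previous local model's feature via a contrastive loss. Below is a minimal sketch of that loss as we read the paper's formulation; the toy vectors, helper names, and temperature value are our own.

```python
import numpy as np

def cos(a, b):
    """Cosine similarity between two representation vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def model_contrastive_loss(z_local, z_global, z_prev, tau=0.5):
    """MOON-style loss: attract the local representation to the global
    model's and repel it from the previous local model's."""
    pos = np.exp(cos(z_local, z_global) / tau)
    neg = np.exp(cos(z_local, z_prev) / tau)
    return -np.log(pos / (pos + neg))

# toy representations of one input under the three models
rng = np.random.default_rng(0)
z_g = rng.normal(size=16)                      # current global model's feature
z_p = rng.normal(size=16)                      # previous local model's feature
z_l = 0.9 * z_g + 0.1 * rng.normal(size=16)    # local feature, close to global
print("contrastive loss:", model_contrastive_loss(z_l, z_g, z_p))
```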
In this section, we summarize Federated Learning papers accepted by top AI and NLP conferences and journals, including ACL (Annual Meeting of the Association for Computational Linguistics), NAACL (North American Chapter of the Association for Computational Linguistics), EMNLP (Conference on Empirical Methods in Natural Language Processing), and COLING (International Conference on Computational Linguistics).
Title | Affiliation | Venue | Year | TL;DR | Materials |
---|---|---|---|---|---|
Dim-Krum: Backdoor-Resistant Federated Learning for NLP with Dimension-wise Krum-Based Aggregation | PKU | EMNLP | 2022 | [PUB] [PDF] | |
Efficient Federated Learning on Knowledge Graphs via Privacy-preserving Relation Embedding Aggregation `kg.` | Lehigh University | EMNLP | 2022 | FedR[^FedR] | [PUB] [PDF] [CODE] |
Federated Continual Learning for Text Classification via Selective Inter-client Transfer | DRIMCo GmbH; LMU | EMNLP | 2022 | [PUB] [PDF] [CODE] | |
Backdoor Attacks in Federated Learning by Rare Embeddings and Gradient Ensembling | SNU | EMNLP | 2022 | [PUB] [PDF] | |
A Federated Approach to Predicting Emojis in Hindi Tweets | University of Alberta | EMNLP | 2022 | [PUB] [PDF] [CODE] | |
Federated Model Decomposition with Private Vocabulary for Text Classification | HIT; Peng Cheng Lab | EMNLP | 2022 | [PUB] [CODE] | |
Federated Meta-Learning for Emotion and Sentiment Aware Multi-modal Complaint Identification | EMNLP | 2022 | [PUB] | ||
Fair NLP Models with Differentially Private Text Encoders | EMNLP | 2022 | [PUB] [PDF] [CODE] | ||
Scaling Language Model Size in Cross-Device Federated Learning | ACL workshop | 2022 | SLM-FL[^SLM-FL] | [PUB] [PDF] | |
Intrinsic Gradient Compression for Scalable and Efficient Federated Learning | Oxford | ACL workshop | 2022 | IGC-FL[^IGC-FL] | [PUB] [PDF] |
ActPerFL: Active Personalized Federated Learning | Amazon | ACL workshop | 2022 | ActPerFL[^ActPerFL] | [PUB] [PAGE] |
FedNLP: Benchmarking Federated Learning Methods for Natural Language Processing Tasks 🔥 | USC | NAACL | 2022 | FedNLP[^FedNLP] | [PUB] [PDF] [CODE] |
Federated Learning with Noisy User Feedback | USC; Amazon | NAACL | 2022 | FedNoisy[^FedNoisy] | [PUB] [PDF] |
Training Mixed-Domain Translation Models via Federated Learning | Amazon | NAACL | 2022 | FedMDT[^FedMDT] | [PUB] [PAGE] [PDF] |
Pretrained Models for Multilingual Federated Learning | Johns Hopkins University | NAACL | 2022 | [PUB] [PDF] [CODE] | |
Training Mixed-Domain Translation Models via Federated Learning | Amazon | NAACL | 2022 | [PUB] [PAGE] [PDF] | |
Federated Chinese Word Segmentation with Global Character Associations | University of Washington | ACL workshop | 2021 | [PUB] [CODE] | |
Efficient-FedRec: Efficient Federated Learning Framework for Privacy-Preserving News Recommendation | USTC | EMNLP | 2021 | Efficient-FedRec[^Efficient-FedRec] | [PUB] [PDF] [CODE] [VIDEO] |
Improving Federated Learning for Aspect-based Sentiment Analysis via Topic Memories | CUHK (Shenzhen) | EMNLP | 2021 | [PUB] [CODE] [VIDEO] | |
A Secure and Efficient Federated Learning Framework for NLP | University of Connecticut | EMNLP | 2021 | [PUB] [PDF] [VIDEO] | |
Distantly Supervised Relation Extraction in Federated Settings | UCAS | EMNLP workshop | 2021 | [PUB] [PDF] [CODE] | |
Federated Learning with Noisy User Feedback | USC; Amazon | NAACL workshop | 2021 | [PUB] [PDF] | |
An Investigation towards Differentially Private Sequence Tagging in a Federated Framework | Universität Hamburg | NAACL workshop | 2021 | [PUB] |
Understanding Unintended Memorization in Language Models Under Federated Learning | NAACL workshop | 2021 | [PUB] [PDF] | ||
FedED: Federated Learning via Ensemble Distillation for Medical Relation Extraction | CAS | EMNLP | 2020 | [PUB] [VIDEO] |
Empirical Studies of Institutional Federated Learning For Natural Language Processing | Ping An Technology | EMNLP workshop | 2020 | [PUB] | |
Federated Learning for Spoken Language Understanding | PKU | COLING | 2020 | [PUB] | |
Two-stage Federated Phenotyping and Patient Representation Learning | Boston Children's Hospital; Harvard Medical School | ACL workshop | 2019 | [PUB] [PDF] [CODE] [UC.] |
In this section, we summarize Federated Learning papers accepted by top Information Retrieval conferences and journals, including SIGIR (Annual International ACM SIGIR Conference on Research and Development in Information Retrieval).
Title | Affiliation | Venue | Year | TL;DR | Materials |
---|---|---|---|---|---|
Is Non-IID Data a Threat in Federated Online Learning to Rank? | The University of Queensland | SIGIR | 2022 | noniid-foltr[^noniid-foltr] | [PUB] [CODE] |
FedCT: Federated Collaborative Transfer for Recommendation | Rutgers University | SIGIR | 2021 | FedCT[^FedCT] | [PUB] [PDF] [CODE] |
On the Privacy of Federated Pipelines | Technical University of Munich | SIGIR | 2021 | FedGWAS[^FedGWAS] | [PUB] |
FedCMR: Federated Cross-Modal Retrieval. | Dalian University of Technology | SIGIR | 2021 | FedCMR[^FedCMR] | [PUB] [CODE] |
Meta Matrix Factorization for Federated Rating Predictions. | SDU | SIGIR | 2020 | MetaMF[^MetaMF] | [PUB] [PDF] |
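A recurring design in the federated recommendation rows above (e.g., MetaMF, FedCT) is to keep user-specific parameters on-device and exchange only shared parameters with the server. The sketch below illustrates that split for plain matrix factorization: user embeddings are updated privately on each client, while only item-embedding gradients are aggregated. This is a generic illustration of the pattern under our own toy setup, not the algorithm of any specific paper listed here.

```python
import numpy as np

rng = np.random.default_rng(0)
N_USERS, N_ITEMS, DIM, LR = 3, 5, 4, 0.05

items = rng.normal(scale=0.1, size=(N_ITEMS, DIM))  # server state: shared item embeddings
users = rng.normal(scale=0.1, size=(N_USERS, DIM))  # client state: private user embeddings
# toy ratings: ratings[u] is the list of (item, rating) pairs held by user u
ratings = [[(0, 5.0), (1, 3.0)], [(1, 4.0), (2, 1.0)], [(3, 5.0), (4, 2.0)]]

for _ in range(200):
    item_grad = np.zeros_like(items)
    for u in range(N_USERS):                  # "client" update
        for i, r in ratings[u]:
            err = users[u] @ items[i] - r
            item_grad[i] += err * users[u]    # shared part: sent to the server
            users[u] -= LR * err * items[i]   # private part: stays on-device
    items -= LR * item_grad                   # server aggregates item gradients

mse = np.mean([(users[u] @ items[i] - r) ** 2
               for u in range(N_USERS) for i, r in ratings[u]])
print(f"training MSE after 200 rounds: {mse:.4f}")
```

Note that even the item-side gradients can leak information about private ratings, which is precisely what privacy-guarantee papers such as *Federated Matrix Factorization with Privacy Guarantee* in the next section study.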
In this section, we summarize Federated Learning papers accepted by top Database conferences and journals, including SIGMOD (ACM SIGMOD Conference), ICDE (IEEE International Conference on Data Engineering), and VLDB (Very Large Data Bases Conference).
Title | Affiliation | Venue | Year | TL;DR | Materials |
---|---|---|---|---|---|
Differentially Private Vertical Federated Clustering. | Purdue University | VLDB | 2023 | [PUB] [PDF] [CODE] | |
FederatedScope: A Flexible Federated Learning Platform for Heterogeneity. 🔥 | Alibaba | VLDB | 2023 | [PUB] [PDF] [CODE] | |
Secure Shapley Value for Cross-Silo Federated Learning. | Kyoto University | VLDB | 2023 | [PUB] [PDF] [CODE] | |
OpBoost: A Vertical Federated Tree Boosting Framework Based on Order-Preserving Desensitization | ZJU | VLDB | 2022 | OpBoost[^OpBoost] | [PUB] [PDF] [CODE] |
Skellam Mixture Mechanism: a Novel Approach to Federated Learning with Differential Privacy. | NUS | VLDB | 2022 | SMM[^SMM] | [PUB] [CODE] |
Towards Communication-efficient Vertical Federated Learning Training via Cache-enabled Local Update | PKU | VLDB | 2022 | CELU-VFL[^CELU-VFL] | [PUB] [PDF] [CODE] |
FedTSC: A Secure Federated Learning System for Interpretable Time Series Classification. | HIT | VLDB | 2022 | FedTSC[^FedTSC] | [PUB] [CODE] |
Improving Fairness for Data Valuation in Horizontal Federated Learning | The UBC | ICDE | 2022 | CSFV[^CSFV] | [PUB] [PDF] |
FedADMM: A Robust Federated Deep Learning Framework with Adaptivity to System Heterogeneity | USTC | ICDE | 2022 | FedADMM[^FedADMM] | [PUB] [PDF] [CODE] |
FedMP: Federated Learning through Adaptive Model Pruning in Heterogeneous Edge Computing. | USTC | ICDE | 2022 | FedMP[^FedMP] | [PUB] |
Federated Learning on Non-IID Data Silos: An Experimental Study. 🔥 | NUS | ICDE | 2022 | ESND[^ESND] | [PUB] [PDF] [CODE] |
Enhancing Federated Learning with Intelligent Model Migration in Heterogeneous Edge Computing | USTC | ICDE | 2022 | FedMigr[^FedMigr] | [PUB] |
Samba: A System for Secure Federated Multi-Armed Bandits | Univ. Clermont Auvergne | ICDE | 2022 | Samba[^Samba] | [PUB] [CODE] |
FedRecAttack: Model Poisoning Attack to Federated Recommendation | ZJU | ICDE | 2022 | FedRecAttack[^FedRecAttack] | [PUB] [PDF] [CODE] |
Enhancing Federated Learning with In-Cloud Unlabeled Data | USTC | ICDE | 2022 | Ada-FedSemi[^Ada-FedSemi] | [PUB] |
Efficient Participant Contribution Evaluation for Horizontal and Vertical Federated Learning | USTC | ICDE | 2022 | DIG-FL[^DIG-FL] | [PUB] |
An Introduction to Federated Computation | University of Warwick; Facebook | SIGMOD Tutorial | 2022 | FCT[^FCT] | [PUB] |
BlindFL: Vertical Federated Machine Learning without Peeking into Your Data | PKU; Tencent | SIGMOD | 2022 | BlindFL[^BlindFL] | [PUB] [PDF] |
An Efficient Approach for Cross-Silo Federated Learning to Rank | BUAA | ICDE | 2021 | CS-F-LTR[^CS-F-LTR] | [PUB] [RELATED PAPER(ZH)] |
Feature Inference Attack on Model Predictions in Vertical Federated Learning | NUS | ICDE | 2021 | FIA[^FIA] | [PUB] [PDF] [CODE] |
Efficient Federated-Learning Model Debugging | USTC | ICDE | 2021 | FLDebugger[^FLDebugger] | [PUB] |
Federated Matrix Factorization with Privacy Guarantee | Purdue | VLDB | 2021 | FMFPG[^FMFPG] | [PUB] |
Projected Federated Averaging with Heterogeneous Differential Privacy. | Renmin University of China | VLDB | 2021 | PFA-DB[^PFA-DB] | [PUB] [CODE] |
Enabling SQL-based Training Data Debugging for Federated Learning | Simon Fraser University | VLDB | 2021 | FedRain-and-Frog[^FedRain-and-Frog] | [PUB] [PDF] [CODE] |
Refiner: A Reliable Incentive-Driven Federated Learning System Powered by Blockchain | ZJU | VLDB | 2021 | Refiner[^Refiner] | [PUB] |
Tanium Reveal: A Federated Search Engine for Querying Unstructured File Data on Large Enterprise Networks | Tanium Inc. | VLDB | 2021 | TaniumReveal[^TaniumReveal] | [PUB] [VIDEO] |
VF2Boost: Very Fast Vertical Federated Gradient Boosting for Cross-Enterprise Learning | PKU | SIGMOD | 2021 | VF2Boost[^VF2Boost] | [PUB] |
ExDRa: Exploratory Data Science on Federated Raw Data | SIEMENS | SIGMOD | 2021 | ExDRa[^ExDRa] | [PUB] |
Joint blockchain and federated learning-based offloading in harsh edge computing environments | TJU | SIGMOD workshop | 2021 | FLoffloading[^FLoffloading] | [PUB] |
Privacy Preserving Vertical Federated Learning for Tree-based Models | NUS | VLDB | 2020 | Pivot-DT[^Pivot-DT] | [PUB] [PDF] [VIDEO] [CODE] |
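Several rows above (e.g., *Skellam Mixture Mechanism*, *Projected Federated Averaging with Heterogeneous Differential Privacy*) pair federated aggregation with differential privacy. The common building block is to clip each client update to a norm bound and add noise calibrated to that bound before aggregation. The sketch below shows a generic Gaussian-mechanism version of that step with an illustrative noise multiplier; it is not the specific mechanism of any paper listed here (the Skellam work, for instance, uses discrete noise instead).

```python
import numpy as np

def clip_and_noise(update, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """Clip a client update to `clip_norm`, then add Gaussian noise scaled
    to the clipping bound (the usual Gaussian-mechanism recipe)."""
    if rng is None:
        rng = np.random.default_rng()
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / norm)
    return clipped + rng.normal(scale=noise_multiplier * clip_norm,
                                size=update.shape)

rng = np.random.default_rng(0)
raw_updates = [rng.normal(size=8) for _ in range(10)]        # toy client updates
private = [clip_and_noise(u, rng=rng) for u in raw_updates]  # per-client DP step
print("noisy aggregate:", np.mean(private, axis=0))
```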
In this section, we summarize Federated Learning papers accepted by top networking and Web conferences and journals, including SIGCOMM (Conference on Applications, Technologies, Architectures, and Protocols for Computer Communication), INFOCOM (IEEE Conference on Computer Communications), MobiCom (ACM/IEEE International Conference on Mobile Computing and Networking), NSDI (Symposium on Networked Systems Design and Implementation), and WWW (The Web Conference).
Title | Affiliation | Venue | Year | TL;DR | Materials |
---|---|---|---|---|---|
FLASH: Towards a High-performance Hardware Acceleration Architecture for Cross-silo Federated Learning | HKUST; Clustar | NSDI | 2023 | [PUB] [SLIDE] [VIDEO] | |
To Store or Not? Online Data Selection for Federated Learning with Limited Storage. | SJTU | WWW | 2023 | [PUB] [PDF] | |
pFedPrompt: Learning Personalized Prompt for Vision-Language Models in Federated Learning. | PolyU | WWW | 2023 | [PUB] | |
Quantifying and Defending against Privacy Threats on Federated Knowledge Graph Embedding. | ZJU; HIC-ZJU | WWW | 2023 | [PUB] [PDF] | |
Vertical Federated Knowledge Transfer via Representation Distillation for Healthcare Collaboration Networks | PKU | WWW | 2023 | [PUB] [PDF] [CODE] | |
Semi-decentralized Federated Ego Graph Learning for Recommendation | SUST | WWW | 2023 | [PUB] [PDF] | |
FlexiFed: Personalized Federated Learning for Edge Clients with Heterogeneous Model Architectures. | Swinburne | WWW | 2023 | [PUB] [CODE] | |
FedEdge: Accelerating Edge-Assisted Federated Learning. | Swinburne | WWW | 2023 | [PUB] | |
Federated Node Classification over Graphs with Latent Link-type Heterogeneity. | Emory University | WWW | 2023 | [PUB] [CODE] | |
FedACK: Federated Adversarial Contrastive Knowledge Distillation for Cross-Lingual and Cross-Model Social Bot Detection. | USTC | WWW | 2023 | [PUB] [PDF] [CODE] | |
Interaction-level Membership Inference Attack Against Federated Recommender Systems. | UQ | WWW | 2023 | [PUB] [PDF] | |
AgrEvader: Poisoning Membership Inference against Byzantine-robust Federated Learning. | Deakin University | WWW | 2023 | [PUB] | |
Heterogeneous Federated Knowledge Graph Embedding Learning and Unlearning. | NJU | WWW | 2023 | [PUB] [PDF] [CODE] | |
Federated Learning for Metaverse: A Survey. | JNU | WWW (Companion Volume) | 2023 | [PUB] [PDF] | |
Understanding the Impact of Label Skewness and Optimization on Federated Learning for Text Classification | KU Leuven | WWW (Companion Volume) | 2023 | [PUB] | |
Privacy-Preserving Online Content Moderation: A Federated Learning Use Case. | CUT | WWW (Companion Volume) | 2023 | [PUB] [PDF] | |
Privacy-Preserving Online Content Moderation with Federated Learning. | CUT | WWW (Companion Volume) | 2023 | [PUB] | |
A Federated Learning Benchmark for Drug-Target Interaction. | University of Turin | WWW (Companion Volume) | 2023 | [PUB] [PDF] [CODE] | |
Towards a Decentralized Data Hub and Query System for Federated Dynamic Data Spaces. | TU Berlin | WWW (Companion Volume) | 2023 | [PUB] | |
1st Workshop on Federated Learning Technologies | University of Turin | WWW (Companion Volume) | 2023 | | [PUB] |
A Survey of Trustworthy Federated Learning with Perspectives on Security, Robustness and Privacy | CUHK | WWW (Companion Volume) | 2023 | [PUB] [PDF] | |
A Hierarchical Knowledge Transfer Framework for Heterogeneous Federated Learning | THU | INFOCOM | 2023 | ||
A Reinforcement Learning Approach for Minimizing Job Completion Time in Clustered Federated Learning | Southeast University | INFOCOM | 2023 | ||
Adaptive Configuration for Heterogeneous Participants in Decentralized Federated Learning | USTC | INFOCOM | 2023 | FedHP[^FedHP] | [PDF] |
AnycostFL: Efficient On-Demand Federated Learning over Heterogeneous Edge Devices | Guangdong University of Technology | INFOCOM | 2023 | AnycostFL[^AnycostFL] | [PDF] |
AOCC-FL: Federated Learning with Aligned Overlapping via Calibrated Compensation | HUST | INFOCOM | 2023 | AOCC-FL[^AOCC-FL] | |
Asynchronous Federated Unlearning | University of Toronto | INFOCOM | 2023 | KNOT[^KNOT] | [PDF] |
Communication-Efficient Federated Learning for Heterogeneous Edge Devices Based on Adaptive Gradient Quantization | PSU | INFOCOM | 2023 | [PDF] | |
Enabling Communication-Efficient Federated Learning via Distributed Compressed Sensing | Beihang University | INFOCOM | 2023 | ||
Federated Learning under Heterogeneous and Correlated Client Availability | Inria | INFOCOM | 2023 | CA-Fed[^CA-Fed] | [PDF] [CODE] |
Federated Learning with Flexible Control | IBM | INFOCOM | 2023 | FlexFL[^FlexFL] | [PDF] |
Federated PCA on Grassmann Manifold for Anomaly Detection in IoT Networks | The University of Sydney | INFOCOM | 2023 | [PDF] | |
FedMoS: Taming Client Drift in Federated Learning with Double Momentum and Adaptive Selection | HUST | INFOCOM | 2023 | FedMoS[^FedMoS] | [PDF] |
FedSDG-FS: Efficient and Secure Feature Selection for Vertical Federated Learning | NTU | INFOCOM | 2023 | FedSDG-FS[^FedSDG-FS] | |
Heterogeneity-Aware Federated Learning with Adaptive Client Selection and Gradient Compression | USTC | INFOCOM | 2023 | ||
Joint Edge Aggregation and Association for Cost-Efficient Multi-Cell Federated Learning | NUDT | INFOCOM | 2023 | ||
Joint Participation Incentive and Network Pricing Design for Federated Learning | Northwestern University | INFOCOM | 2023 | ||
More than Enough is Too Much: Adaptive Defenses against Gradient Leakage in Production Federated Learning | University of Toronto | INFOCOM | 2023 | OUTPOST[^OUTPOST] | [PDF] |
Network Adaptive Federated Learning: Congestion and Lossy Compression | UTAustin | INFOCOM | 2023 | NAC-FL[^NAC-FL] | [PDF] |
OBLIVION: Poisoning Federated Learning by Inducing Catastrophic Forgetting | The Hang Seng University of Hong Kong | INFOCOM | 2023 | OBLIVION[^OBLIVION] | |
Privacy as a Resource in Differentially Private Federated Learning | BUPT | INFOCOM | 2023 | ||
SplitGP: Achieving Both Generalization and Personalization in Federated Learning | KAIST | INFOCOM | 2023 | SplitGP[^SplitGP] | [PDF] |
SVDFed: Enabling Communication-Efficient Federated Learning via Singular-Value-Decomposition | Beihang University | INFOCOM | 2023 | SVDFed[^SVDFed] | |
Tackling System Induced Bias in Federated Learning: Stratification and Convergence Analysis | Southern University of Science and Technology | INFOCOM | 2023 | [PDF] | |
Toward Sustainable AI: Federated Learning Demand Response in Cloud-Edge Systems via Auctions | BUPT | INFOCOM | 2023 | [PDF] | |
Truthful Incentive Mechanism for Federated Learning with Crowdsourced Data Labeling | Auburn University | INFOCOM | 2023 | [PDF] | |
TVFL: Tunable Vertical Federated Learning towards Communication-Efficient Model Serving | USTC | INFOCOM | 2023 | TVFL[^TVFL] | |
PyramidFL: Fine-grained Data and System Heterogeneity-aware Client Selection for Efficient Federated Learning | MSU | MobiCom | 2022 | PyramidFL[^PyramidFL] | [PUB] [PDF] [CODE] |
NestFL: efficient federated learning through progressive model pruning in heterogeneous edge computing | pmlabs | MobiCom(Poster) | 2022 | [PUB] | |
Federated learning-based air quality prediction for smart cities using BGRU model | IITM | MobiCom(Poster) | 2022 | [PUB] | |
FedHD: federated learning with hyperdimensional computing | UCSD | MobiCom(Demo) | 2022 | [PUB] [CODE] | |
Joint Superposition Coding and Training for Federated Learning over Multi-Width Neural Networks | Korea University | INFOCOM | 2022 | SlimFL[^SlimFL] | [PUB] |
Towards Optimal Multi-Modal Federated Learning on Non-IID Data with Hierarchical Gradient Blending | University of Toronto | INFOCOM | 2022 | HGBFL[^HGBFL] | [PUB] |
Optimal Rate Adaption in Federated Learning with Compressed Communications | SZU | INFOCOM | 2022 | ORAFL[^ORAFL] | [PUB] [PDF] |
The Right to be Forgotten in Federated Learning: An Efficient Realization with Rapid Retraining. | CityU | INFOCOM | 2022 | RFFL[^RFFL] | [PUB] [PDF] |
Tackling System and Statistical Heterogeneity for Federated Learning with Adaptive Client Sampling. | CUHK; AIRS; Yale University | INFOCOM | 2022 | FLACS[^FLACS] | [PUB] [PDF] |
Communication-Efficient Device Scheduling for Federated Learning Using Stochastic Optimization | Army Research Laboratory, Adelphi | INFOCOM | 2022 | CEDSFL[^CEDSFL] | [PUB] [PDF] |
FLASH: Federated Learning for Automated Selection of High-band mmWave Sectors | NEU | INFOCOM | 2022 | FLASH[^FLASH] | [PUB] [CODE] |
A Profit-Maximizing Model Marketplace with Differentially Private Federated Learning | CUHK; AIRS | INFOCOM | 2022 | PMDPFL[^PMDPFL] | [PUB] |
Protect Privacy from Gradient Leakage Attack in Federated Learning | PolyU | INFOCOM | 2022 | PPGLFL[^PPGLFL] | [PUB] [SLIDE] |
FedFPM: A Unified Federated Analytics Framework for Collaborative Frequent Pattern Mining. | SJTU | INFOCOM | 2022 | FedFPM[^FedFPM] | [PUB] [CODE] |
An Accuracy-Lossless Perturbation Method for Defending Privacy Attacks in Federated Learning | SWJTU;THU | WWW | 2022 | PBPFL[^PBPFL] | [PUB] [PDF] [CODE] |
LocFedMix-SL: Localize, Federate, and Mix for Improved Scalability, Convergence, and Latency in Split Learning | Yonsei University | WWW | 2022 | LocFedMix-SL[^LocFedMix-SL] | [PUB] |
Federated Unlearning via Class-Discriminative Pruning | PolyU | WWW | 2022 | [PUB] [PDF] [CODE] | |
FedKC: Federated Knowledge Composition for Multilingual Natural Language Understanding | Purdue | WWW | 2022 | FedKC[^FedKC] | [PUB] |
Powering Multi-Task Federated Learning with Competitive GPU Resource Sharing. | WWW (Companion Volume) | 2022 | |||
Federated Bandit: A Gossiping Approach | University of California | SIGMETRICS | 2021 | Federated-Bandit[^Federated-Bandit] | [PUB] [PDF] |
Hermes: an efficient federated learning framework for heterogeneous mobile clients | Duke University | MobiCom | 2021 | Hermes[^Hermes] | [PUB] |
Federated mobile sensing for activity recognition | Samsung AI Center | MobiCom | 2021 | [PUB] [PAGE] [TALKS] [VIDEO] | |
Learning for Learning: Predictive Online Control of Federated Learning with Edge Provisioning. | Nanjing University | INFOCOM | 2021 | [PUB] | |
Device Sampling for Heterogeneous Federated Learning: Theory, Algorithms, and Implementation. | Purdue | INFOCOM | 2021 | D2D-FedL[^D2D-FedL] | [PUB] [PDF] |
FAIR: Quality-Aware Federated Learning with Precise User Incentive and Model Aggregation | THU | INFOCOM | 2021 | FAIR[^FAIR] | [PUB] |
Sample-level Data Selection for Federated Learning | USTC | INFOCOM | 2021 | [PUB] | |
To Talk or to Work: Flexible Communication Compression for Energy Efficient Federated Learning over Heterogeneous Mobile Edge Devices | Xidian University; CAS | INFOCOM | 2021 | [PUB] [PDF] | |
Cost-Effective Federated Learning Design | CUHK; AIRS; Yale University | INFOCOM | 2021 | [PUB] [PDF] | |
An Incentive Mechanism for Cross-Silo Federated Learning: A Public Goods Perspective | The UBC | INFOCOM | 2021 | [PUB] | |
Resource-Efficient Federated Learning with Hierarchical Aggregation in Edge Computing | USTC | INFOCOM | 2021 | [PUB] | |
FedServing: A Federated Prediction Serving Framework Based on Incentive Mechanism. | Jinan University; CityU | INFOCOM | 2021 | FedServing[^FedServing] | [PUB] [PDF] |
Federated Learning over Wireless Networks: A Band-limited Coordinated Descent Approach | Arizona State University | INFOCOM | 2021 | [PUB] [PDF] | |
Dual Attention-Based Federated Learning for Wireless Traffic Prediction | King Abdullah University of Science and Technology | INFOCOM | 2021 | FedDA[^FedDA] | [PUB] [PDF] [CODE] |
FedSens: A Federated Learning Approach for Smart Health Sensing with Class Imbalance in Resource Constrained Edge Computing | University of Notre Dame | INFOCOM | 2021 | FedSens[^FedSens] | [PUB] |
P-FedAvg: Parallelizing Federated Learning with Theoretical Guarantees | SYSU; Guangdong Key Laboratory of Big Data Analysis and Processing | INFOCOM | 2021 | P-FedAvg[^P-FedAvg] | [PUB] |
Meta-HAR: Federated Representation Learning for Human Activity Recognition. | University of Alberta | WWW | 2021 | Meta-HAR[^Meta-HAR] | [PUB] [PDF] [CODE] |
PFA: Privacy-preserving Federated Adaptation for Effective Model Personalization | PKU | WWW | 2021 | PFA[^PFA] | [PUB] [PDF] [CODE] |
Communication Efficient Federated Generalized Tensor Factorization for Collaborative Health Data Analytics | Emory | WWW | 2021 | FedGTF-EF-PC[^FedGTF-EF-PC] | [PUB] [CODE] |
Hierarchical Personalized Federated Learning for User Modeling | USTC | WWW | 2021 | [PUB] | |
Characterizing Impacts of Heterogeneity in Federated Learning upon Large-Scale Smartphone Data | PKU | WWW | 2021 | Heter-aware[^Heter-aware] | [PUB] [PDF] [SLIDE] [CODE] |
Incentive Mechanism for Horizontal Federated Learning Based on Reputation and Reverse Auction | SYSU | WWW | 2021 | [PUB] | |
Physical-Layer Arithmetic for Federated Learning in Uplink MU-MIMO Enabled Wireless Networks. | Nanjing University | INFOCOM | 2020 | [PUB] | |
Optimizing Federated Learning on Non-IID Data with Reinforcement Learning 🔥 | University of Toronto | INFOCOM | 2020 | [PUB] [SLIDE] [CODE] | |
Enabling Execution Assurance of Federated Learning at Untrusted Participants | THU | INFOCOM | 2020 | [PUB] [CODE] | |
Billion-scale federated learning on mobile clients: a submodel design with tunable privacy | SJTU | MobiCom | 2020 | [PUB] | |
Federated Learning over Wireless Networks: Optimization Model Design and Analysis | The University of Sydney | INFOCOM | 2019 | [PUB] [CODE] | |
Beyond Inferring Class Representatives: User-Level Privacy Leakage From Federated Learning | Wuhan University | INFOCOM | 2019 | [PUB] [PDF] [UC.] | |
InPrivate Digging: Enabling Tree-based Distributed Data Mining with Differential Privacy | Collaborative Innovation Center of Geospatial Technology | INFOCOM | 2018 | TFL[^TFL] | [PUB] |
In this section, we will summarize Federated Learning papers accepted by top systems conferences and journals, including OSDI(USENIX Symposium on Operating Systems Design and Implementation), SOSP(Symposium on Operating Systems Principles), ISCA(International Symposium on Computer Architecture), MLSys(Conference on Machine Learning and Systems), TPDS(IEEE Transactions on Parallel and Distributed Systems), DAC(Design Automation Conference), TOCS(ACM Transactions on Computer Systems), TOS(ACM Transactions on Storage), TCAD(IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems), and TC(IEEE Transactions on Computers).
Title | Affiliation | Venue | Year | TL;DR | Materials |
---|---|---|---|---|---|
Optimizing Training Efficiency and Cost of Hierarchical Federated Learning in Heterogeneous Mobile-Edge Cloud Computing | ECNU | TCAD | 2023 | [PUB] | |
Type-Aware Federated Scheduling for Typed DAG Tasks on Heterogeneous Multicore Platforms | TU Dortmund University | TC | 2023 | [PUB] [CODE] | |
Sandbox Computing: A Data Privacy Trusted Sharing Paradigm Via Blockchain and Federated Learning. | BUPT | TC | 2023 | [PUB] | |
Incentive Mechanism Design for Joint Resource Allocation in Blockchain-Based Federated Learning. | IUPUI | TPDS | 2023 | [PUB] [PDF] | |
HiFlash: Communication-Efficient Hierarchical Federated Learning With Adaptive Staleness Control and Heterogeneity-Aware Client-Edge Association. | TPDS | 2023 | [PUB] [PDF] | ||
From Deterioration to Acceleration: A Calibration Approach to Rehabilitating Step Asynchronism in Federated Optimization. | TPDS | 2023 | [PUB] [PDF] [CODE] | ||
Federated Learning Over Coupled Graphs | XJTU | TPDS | 2023 | [PUB] [PDF] | |
Privacy vs. Efficiency: Achieving Both Through Adaptive Hierarchical Federated Learning | NUDT | TPDS | 2023 | [PUB] | |
On Model Transmission Strategies in Federated Learning With Lossy Communications | SZU | TPDS | 2023 | [PUB] | |
Scheduling Algorithms for Federated Learning With Minimal Energy Consumption | University of Bordeaux | TPDS | 2023 | [PUB] [PDF] [CODE] | |
Auction-Based Cluster Federated Learning in Mobile Edge Computing Systems | HIT | TPDS | 2023 | [PUB] [PDF] | |
Personalized Edge Intelligence via Federated Self-Knowledge Distillation. | HUST | TPDS | 2023 | [PUB] [CODE] | |
Design of a Quantization-Based DNN Delta Compression Framework for Model Snapshots and Federated Learning. | HIT | TPDS | 2023 | [PUB] | |
Multi-Job Intelligent Scheduling With Cross-Device Federated Learning. | Baidu | TPDS | 2023 | [PUB] [PDF] | |
Data-Centric Client Selection for Federated Learning Over Distributed Edge Networks. | IIT | TPDS | 2023 | [PUB] | |
GossipFL: A Decentralized Federated Learning Framework With Sparsified and Adaptive Communication. | HKBU | TPDS | 2023 | [PUB] | |
FedMDS: An Efficient Model Discrepancy-Aware Semi-Asynchronous Clustered Federated Learning Framework. | CQU | TPDS | 2023 | [PUB] | |
HierFedML: Aggregator Placement and UE Assignment for Hierarchical Federated Learning in Mobile Edge Computing. | DUT | TPDS | 2023 | [PUB] | |
BAFL: A Blockchain-Based Asynchronous Federated Learning Framework | TC | 2022 | [PUB] [CODE] | ||
L4L: Experience-Driven Computational Resource Control in Federated Learning | TC | 2022 | [PUB] | ||
Adaptive Federated Learning on Non-IID Data With Resource Constraint | TC | 2022 | [PUB] | ||
Locking Protocols for Parallel Real-Time Tasks With Semaphores Under Federated Scheduling. | TCAD | 2022 | [PUB] | ||
Client Scheduling and Resource Management for Efficient Training in Heterogeneous IoT-Edge Federated Learning | ECNU | TCAD | 2022 | [PUB] | |
PervasiveFL: Pervasive Federated Learning for Heterogeneous IoT Systems. | ECNU | TCAD | 2022 | PervasiveFL[^PervasiveFL] | [PUB] |
FHDnn: communication efficient and robust federated learning for AIoT networks | UC San Diego | DAC | 2022 | FHDnn[^FHDnn] | [PUB] |
A Decentralized Federated Learning Framework via Committee Mechanism With Convergence Guarantee | SYSU | TPDS | 2022 | [PUB] [PDF] | |
Improving Federated Learning With Quality-Aware User Incentive and Auto-Weighted Model Aggregation | THU | TPDS | 2022 | [PUB] | |
funcX: Federated Function as a Service for Science. | SUST | TPDS | 2022 | [PUB] [PDF] |
Blockchain Assisted Decentralized Federated Learning (BLADE-FL): Performance Analysis and Resource Allocation | NUST | TPDS | 2022 | [PUB] [PDF] [CODE] | |
Adaptive Federated Deep Reinforcement Learning for Proactive Content Caching in Edge Computing. | CQU | TPDS | 2022 | [PUB] | |
TDFL: Truth Discovery Based Byzantine Robust Federated Learning | BIT | TPDS | 2022 | [PUB] | |
Federated Learning With Nesterov Accelerated Gradient | The University of Sydney | TPDS | 2022 | [PUB] [PDF] | |
FedGraph: Federated Graph Learning with Intelligent Sampling | UoA | TPDS | 2022 | FedGraph[^FedGraph] | [PUB] [CODE] |
AUCTION: Automated and Quality-Aware Client Selection Framework for Efficient Federated Learning. | THU | TPDS | 2022 | AUCTION[^AUCTION] | [PUB] |
DONE: Distributed Approximate Newton-type Method for Federated Edge Learning. | University of Sydney | TPDS | 2022 | DONE[^DONE] | [PUB] [PDF] [CODE] |
Flexible Clustered Federated Learning for Client-Level Data Distribution Shift. | CQU | TPDS | 2022 | FlexCFL[^FlexCFL] | [PUB] [PDF] [CODE] |
Min-Max Cost Optimization for Efficient Hierarchical Federated Learning in Wireless Edge Networks. | Xidian University | TPDS | 2022 | [PUB] | |
LightFed: An Efficient and Secure Federated Edge Learning System on Model Splitting. | CSU | TPDS | 2022 | LightFed[^LightFed] | [PUB] |
On the Benefits of Multiple Gossip Steps in Communication-Constrained Decentralized Federated Learning. | Purdue | TPDS | 2022 | Deli-CoCo[^Deli-CoCo] | [PUB] [PDF] [CODE] |
Incentive-Aware Autonomous Client Participation in Federated Learning. | Sun Yat-sen University | TPDS | 2022 | [PUB] | |
Communicational and Computational Efficient Federated Domain Adaptation. | HKUST | TPDS | 2022 | [PUB] | |
Decentralized Edge Intelligence: A Dynamic Resource Allocation Framework for Hierarchical Federated Learning. | NTU | TPDS | 2022 | [PUB] | |
Differentially Private Byzantine-Robust Federated Learning. | Qufu Normal University | TPDS | 2022 | DPBFL[^DPBFL] | [PUB] |
Multi-Task Federated Learning for Personalised Deep Neural Networks in Edge Computing. | University of Exeter | TPDS | 2022 | [PUB] [PDF] [CODE] | |
Reputation-Aware Hedonic Coalition Formation for Efficient Serverless Hierarchical Federated Learning. | BUAA | TPDS | 2022 | SHFL[^SHFL] | [PUB] |
Differentially Private Federated Temporal Difference Learning. | Stony Brook University | TPDS | 2022 | [PUB] | |
Towards Efficient and Stable K-Asynchronous Federated Learning With Unbounded Stale Gradients on Non-IID Data. | XJTU | TPDS | 2022 | WKAFL[^WKAFL] | [PUB] [PDF] |
Communication-Efficient Federated Learning With Compensated Overlap-FedAvg. | SCU | TPDS | 2022 | Overlap-FedAvg[^Overlap-FedAvg] | [PUB] [PDF] [CODE] |
PAPAYA: Practical, Private, and Scalable Federated Learning. | Meta AI | MLSys | 2022 | PAPAYA[^PAPAYA] | [PDF] [PUB] |
LightSecAgg: a Lightweight and Versatile Design for Secure Aggregation in Federated Learning | USC | MLSys | 2022 | LightSecAgg[^LightSecAgg] | [PDF] [PUB] [CODE] |
SAFA: A Semi-Asynchronous Protocol for Fast Federated Learning With Low Overhead | University of Warwick | TC | 2021 | SAFA[^SAFA] | [PDF] [PUB] [CODE] |
Efficient Federated Learning for Cloud-Based AIoT Applications | ECNU | TCAD | 2021 | [PUB] | |
HADFL: Heterogeneity-aware Decentralized Federated Learning Framework | USTC | DAC | 2021 | HADFL[^HADFL] | [PDF] [PUB] |
Helios: Heterogeneity-Aware Federated Learning with Dynamically Balanced Collaboration. | GMU | DAC | 2021 | Helios[^Helios] | [PDF] [PUB] |
FedLight: Federated Reinforcement Learning for Autonomous Multi-Intersection Traffic Signal Control. | ECNU | DAC | 2021 | FedLight[^FedLight] | [PUB] |
Oort: Efficient Federated Learning via Guided Participant Selection | University of Michigan | OSDI | 2021 | Oort[^Oort] | [PUB] [PDF] [CODE] [SLIDES] [VIDEO] |
Towards Efficient Scheduling of Federated Mobile Devices Under Computational and Statistical Heterogeneity. | Old Dominion University | TPDS | 2021 | [PUB] [PDF] | |
Self-Balancing Federated Learning With Global Imbalanced Data in Mobile Systems. | CQU | TPDS | 2021 | Astraea[^Astraea] | [PUB] [CODE] |
An Efficiency-Boosting Client Selection Scheme for Federated Learning With Fairness Guarantee | SCUT | TPDS | 2021 | RBCS-F[^RBCS-F] | [PUB] [PDF] |
Proof of Federated Learning: A Novel Energy-Recycling Consensus Algorithm. | Beijing Normal University | TPDS | 2021 | PoFL[^PoFL] | [PUB] [PDF] |
Biscotti: A Blockchain System for Private and Secure Federated Learning. | UBC | TPDS | 2021 | Biscotti[^Biscotti] | [PUB] |
Mutual Information Driven Federated Learning. | Deakin University | TPDS | 2021 | [PUB] | |
Accelerating Federated Learning Over Reliability-Agnostic Clients in Mobile Edge Computing Systems. | University of Warwick | TPDS | 2021 | [PUB] [PDF] | |
FedSCR: Structure-Based Communication Reduction for Federated Learning. | HKU | TPDS | 2021 | FedSCR[^FedSCR] | [PUB] |
FedScale: Benchmarking Model and System Performance of Federated Learning 🔥 | University of Michigan | SOSP workshop / ICML 2022 | 2021 | FedScale[^FedScale] | [PUB] [PDF] [CODE] |
Redundancy in cost functions for Byzantine fault-tolerant federated learning | SOSP workshop | 2021 | [PUB] | ||
Towards an Efficient System for Differentially-private, Cross-device Federated Learning | SOSP workshop | 2021 | [PUB] | ||
GradSec: a TEE-based Scheme Against Federated Learning Inference Attacks | SOSP workshop | 2021 | [PUB] | ||
Community-Structured Decentralized Learning for Resilient EI. | SOSP workshop | 2021 | [PUB] | ||
Separation of Powers in Federated Learning (Poster Paper) | IBM Research | SOSP workshop | 2021 | TRUDA[^TRUDA] | [PUB] [PDF] |
Accelerating Federated Learning via Momentum Gradient Descent. | USTC | TPDS | 2020 | MFL[^MFL] | [PUB] [PDF] |
Towards Fair and Privacy-Preserving Federated Deep Models. | NUS | TPDS | 2020 | FPPDL[^FPPDL] | [PUB] [PDF] [CODE] |
Federated Optimization in Heterogeneous Networks 🔥 | CMU | MLSys | 2020 | FedProx[^FedProx] | [PUB] [PDF] [CODE] |
Towards Federated Learning at Scale: System Design | MLSys | 2019 | System_Design[^System_Design] | [PUB] [PDF] |
In this section, we will summarize Federated Learning papers accepted by top conferences and journals in other fields, including ICSE(International Conference on Software Engineering).
Title | Affiliation | Venue | Year | TL;DR | Materials |
---|---|---|---|---|---|
Note: SG means Support for Graph data and algorithms, ST means Support for Tabular data and algorithms.
Here is a great benchmark for federated learning open-source frameworks 👍: the UniFed leaderboard, which presents both qualitative and quantitative evaluation results of existing popular open-sourced FL frameworks, from the perspectives of functionality, usability, and system performance.
For more results, please refer to Framework Functionality Support
This section partially refers to the repositories Federated-Learning and FederatedAI research. The surveys are arranged in reverse chronological order according to the time of first submission (the latest being placed at the top).
[NeurIPS 2020] Federated Learning Tutorial [Web] [Slides] [Video]
Federated Learning on MNIST using a CNN, AI6101, 2020 (Demo Video)
[AAAI 2019] Federated Learning: User Privacy, Data Security and Confidentiality in Machine Learning
Private Image Analysis with MPC
Private Deep Learning with MPC
This section partially refers to The Federated Learning Portal.
More items will be added to the repository. Please feel free to suggest other key resources by opening an issue report, submitting a pull request, or dropping me an email ([email protected]). Enjoy reading!
Many thanks ❤️ to the other awesome lists:
Federated Learning
Other fields
```
@misc{awesomeflGTD,
  title = {Awesome-Federated-Learning-on-Graph-and-Tabular-Data},
  author = {Yuwen Yang and Bingjie Yan and Xuefeng Jiang and Hongcheng Li and Jian Wang and Jiao Chen and Xiangmou Qu and Chang Liu and others},
  year = {2022},
  howpublished = {\url{https://github.com/youngfish42/Awesome-Federated-Learning-on-Graph-and-Tabular-Data}}
}
```
[^HetVis]: A visual analytics tool, HetVis, for participating clients to explore data heterogeneity. We identify data heterogeneity by comparing the prediction behaviors of the global federated model and the stand-alone model trained with local data. Then, a context-aware clustering of the inconsistent records provides a summary of data heterogeneity. Combined with the proposed comparison techniques, we develop a novel set of visualizations to identify heterogeneity issues in HFL (horizontal federated learning).
[^FedStar]: From real-world graph datasets, we observe that some structural properties are shared by various domains, presenting great potential for sharing structural knowledge in FGL. Inspired by this, we propose FedStar, an FGL framework that extracts and shares the common underlying structure information for inter-graph federated learning tasks. To explicitly extract the structure information rather than encoding it along with the node features, we define structure embeddings and encode them with an independent structure encoder. Then, the structure encoder is shared across clients while the feature-based knowledge is learned in a personalized way, making FedStar capable of capturing more structure-based domain-invariant information and avoiding feature misalignment issues. We perform extensive experiments over both cross-dataset and cross-domain non-IID FGL settings.
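
A minimal sketch of FedStar's share-structure / personalize-features split, assuming flattened parameter vectors and plain weighted averaging (the field names `struct_params` and `feat_params` are hypothetical, not from the paper's code):

```python
import numpy as np

def fedstar_round(clients, weights):
    """One FedStar-style round (sketch): average only the shared
    structure-encoder parameters; feature encoders stay personalized."""
    total = sum(weights)
    avg_struct = sum(w * c["struct_params"] for c, w in zip(clients, weights)) / total
    for c in clients:
        c["struct_params"] = avg_struct.copy()  # broadcast the shared part
        # c["feat_params"] is intentionally untouched: personalized knowledge.
    return clients

# Toy usage: three clients, each with 4-dim structure and feature encoders.
rng = np.random.default_rng(0)
clients = [{"struct_params": rng.normal(size=4),
            "feat_params": rng.normal(size=4)} for _ in range(3)]
fedstar_round(clients, weights=[1.0, 1.0, 2.0])
```

Only the structure encoder ever crosses the network, which is the mechanism FedStar credits for capturing domain-invariant information while sidestepping feature misalignment.
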
[^FedGS]: Federated Graph-based Sampling (FedGS) stabilizes the global model update and mitigates long-term bias under arbitrary client availability. First, we model the data correlations of clients with a Data-Distribution-Dependency Graph (3DG) that helps keep the sampled clients' data apart from each other, which is theoretically shown to improve the approximation to the optimal model update. Second, constrained by the far distance in data distribution of the sampled clients, we further minimize the variance of the number of times each client is sampled, to mitigate long-term bias.
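
A hedged, greedy stand-in for FedGS-style selection (the real method solves a constrained optimization over the 3DG; the `bias_penalty` knob is an assumption made purely for illustration):

```python
import numpy as np

def fedgs_sample(dist, counts, k, bias_penalty=0.1):
    """Greedily pick k clients that are far apart in data distribution
    (dist[i][j] = pairwise distance) while down-weighting clients that
    were sampled often (counts), to curb long-term bias."""
    counts = np.asarray(counts, dtype=float)
    chosen = [int(np.argmin(counts))]  # seed with the least-sampled client
    while len(chosen) < k:
        score = dist[:, chosen].min(axis=1) - bias_penalty * counts
        score[chosen] = -np.inf        # never pick the same client twice
        chosen.append(int(np.argmax(score)))
    return chosen

rng = np.random.default_rng(1)
D = rng.random((6, 6)); D = (D + D.T) / 2; np.fill_diagonal(D, 0.0)
print(fedgs_sample(D, counts=[0, 2, 0, 1, 3, 0], k=3))
```
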
[^FL-GMT]: TBC
[^FedWalk]: FedWalk, a random-walk-based unsupervised node embedding algorithm that operates in a node-level visibility graph with raw graph information remaining local.
[^FederatedScope-GNN]: FederatedScope-GNN presents an easy-to-use FGL (federated graph learning) package.
[^GAMF]: GAMF formulates the model fusion problem as a graph matching task, considering the second-order similarity of model weights, whereas previous work merely formulated model fusion as a linear assignment problem. To address the rising problem scale and multi-model consistency issues, GAMF proposes an efficient graduated-assignment-based model fusion method, which iteratively updates the matchings in a consistency-maintaining manner.
[^MaKEr]: We study the knowledge extrapolation problem to embed new components (i.e., entities and relations) that come with emerging knowledge graphs (KGs) in the federated setting. In this problem, a model trained on an existing KG needs to embed an emerging KG with unseen entities and relations. To solve this problem, we introduce the meta-learning setting, where a set of tasks are sampled on the existing KG to mimic the link prediction task on the emerging KG. Based on sampled tasks, we meta-train a graph neural network framework that can construct features for unseen components based on structural information and output embeddings for them.
[^SFL]: A novel structured federated learning (SFL) framework to enhance the knowledge-sharing process in PFL by leveraging the graph-based structural information among clients, learning both the global and personalized models simultaneously using client-wise relation graphs and clients' private data. We cast SFL with a graph into a novel optimization problem that can model the client-wise complex relations and graph-based structural topology in a unified framework. Moreover, in addition to using an existing relation graph, SFL can be expanded to learn the hidden relations among clients.
[^VFGNN]: VFGNN, a federated GNN learning paradigm for the privacy-preserving node classification task under the data vertically partitioned setting, which can be generalized to existing GNN models. Specifically, we split the computation graph into two parts. We leave the private data (i.e., features, edges, and labels) related computations on data holders, and delegate the rest of the computations to a semi-honest server. We also propose to apply differential privacy to prevent potential information leakage from the server.
[^SpreadGNN]: SpreadGNN, a novel multi-task federated training framework capable of operating in the presence of partial labels and the absence of a central server, for the first time in the literature. We provide convergence guarantees and empirically demonstrate the efficacy of our framework on a variety of non-I.I.D. distributed graph-level molecular property prediction datasets with partial labels.
[^FedGraph]: FedGraph for federated graph learning among multiple computing clients, each of which holds a subgraph. FedGraph provides strong graph learning capability across clients by addressing two unique challenges. First, traditional GCN training needs feature data sharing among clients, leading to a risk of privacy leakage. FedGraph solves this issue using a novel cross-client convolution operation. The second challenge is the high GCN training overhead incurred by large graph size. We propose an intelligent graph sampling algorithm based on deep reinforcement learning, which can automatically converge to the optimal sampling policies that balance training speed and accuracy.
[^FGML]: FGML is a comprehensive review of the literature in Federated Graph Machine Learning.
[^GraphFL]: TBC
[^FedNI]: FedNI leverages network inpainting and inter-institutional data via FL. Specifically, we first federatively train a missing node and edge predictor using a graph generative adversarial network (GAN) to complete the missing information of local networks. Then we train a global GCN node classifier across institutions using a federated graph learning platform. The novel design enables us to build more accurate machine learning models by leveraging both federated learning and graph learning approaches.
[^SemiGraphFL]: This work focuses on the graph classification task with partially labeled data. (1) Enhancing the collaboration process: we propose a new personalized FL framework to deal with non-IID data, where clients with more similar data have greater mutual influence and the similarities can be evaluated via unlabeled data. (2) Enhancing the local training process: we introduce an auxiliary loss for unlabeled data that restricts the training process, and propose a new pseudo-label strategy for our SemiGraphFL framework to make more effective predictions.
[^FedPerGNN]: FedPerGNN, a federated GNN framework for both effective and privacy-preserving personalization. Through a privacy-preserving model update method, we can collaboratively train GNN models based on decentralized graphs inferred from local data. To further exploit graph information beyond local interactions, we introduce a privacy-preserving graph expansion protocol to incorporate high-order information under privacy protection.
[^GraphSniffer]: A graph neural network model based on federated learning, named GraphSniffer, to identify malicious transactions in the digital currency market. GraphSniffer leverages federated learning and graph neural networks to model graph-structured Bitcoin transaction data distributed at different worker nodes, and transmits the gradients of the local model to the server node for aggregation to update the parameters of the global model.
[^FedR]: In this paper, we first develop a novel attack that aims to recover the original data based on embedding information, which is further used to evaluate the vulnerabilities of FedE. Furthermore, we propose a Federated learning paradigm with privacy-preserving Relation embedding aggregation (FedR) to tackle the privacy issue in FedE. Compared to entity embedding sharing, the relation embedding sharing policy can significantly reduce the communication cost due to the smaller size of queries.
[^wirelessfl-pdgnet]: A data-driven approach for power allocation in the context of federated learning (FL) over interference-limited wireless networks. The power policy is designed to maximize the transmitted information during the FL process under communication constraints, with the ultimate objective of improving the accuracy and efficiency of the global FL model being trained. The proposed power allocation policy is parameterized using a graph convolutional network, and the associated constrained optimization problem is solved through a primal-dual algorithm.
[^multitask-fusion]: We investigate multi-task learning (MTL), where multiple learning tasks are performed jointly rather than separately to leverage their similarities and improve performance. We focus on the federated multi-task linear regression setting, where each machine possesses its own data for individual tasks and sharing the full local data between machines is prohibited. Motivated by graph regularization, we propose a novel fusion framework that only requires a one-shot communication of local estimates. Our method linearly combines the local estimates to produce an improved estimate for each task, and we show that the ideal mixing weight for fusion is a function of task similarity and task difficulty.
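
A hedged sketch of the one-shot fusion step, under the simplifying assumption that mixing weights come from task similarity alone (the paper's ideal weights also involve task difficulty):

```python
import numpy as np

def fuse_estimates(local_estimates, similarity, task):
    """Linearly combine all locally trained estimates into an improved
    estimate for one task, weighting by (normalized) task similarity."""
    w = np.asarray(similarity[task], dtype=float)
    w = w / w.sum()  # normalize the mixing weights
    return sum(wi * est for wi, est in zip(w, local_estimates))

ests = [np.array([1.0, 0.0]), np.array([1.2, 0.1]), np.array([-3.0, 5.0])]
sim = [[1.0, 0.9, 0.05], [0.9, 1.0, 0.05], [0.05, 0.05, 1.0]]
print(fuse_estimates(ests, sim, task=0))  # dominated by similar tasks 0 and 1
```

Only the local estimates are communicated, once, which is what makes the scheme one-shot.
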
[^FedEC]: In the FedEC framework, a local training procedure is responsible for learning knowledge graph embeddings on each client based on a specific embedding learner. We apply embedding-contrastive learning to limit the embedding update for tackling data heterogeneity. Moreover, a global update procedure is used for sharing and averaging entity embeddings on the master server.
[^PNS-FGL]: Existing FL paradigms are inefficient for geo-distributed GCN training since neighbour sampling across geo-locations will soon dominate the whole training process and consume large WAN bandwidth. We derive a practical federated graph learning algorithm, carefully striking the trade-off among GCN convergence error, wall-clock runtime, and neighbour sampling interval. Our analysis is divided into two cases according to the budget for neighbour sampling. In the unconstrained case, we obtain the optimal neighbour sampling interval that achieves the best trade-off between convergence and runtime; in the constrained case, we show that determining the optimal sampling interval is actually an online problem, and we propose a novel online algorithm with bounded competitive ratio to solve it. Combining the two cases, we propose a unified algorithm to decide the neighbour sampling interval in federated graph learning, and demonstrate its effectiveness with extensive simulation over graph datasets.
[^DA-MRG]: Social bot detection is essential for social network security. Existing methods almost ignore the differences in bot behaviors across multiple domains. Thus, we first propose a Domain-Aware detection method with Multi-Relational Graph neural networks (DA-MRG) to improve detection performance. Specifically, DA-MRG constructs multi-relational graphs with users' features and relationships, obtains user representations with graph embedding, and distinguishes bots from humans with domain-aware classifiers. Meanwhile, considering the similarity between bot behaviors in different social networks, we believe that sharing data among them could boost detection performance. However, the data privacy of users needs to be strictly protected. To overcome this problem, we implement a study of a federated learning framework for DA-MRG to achieve data sharing between different social networks while protecting data privacy.
[^DP-FedRec]: The DP-based federated GNN has not been well investigated, especially in the sub-graph-level setting, such as the scenario of recommendation systems. DP-FedRec, a DP-based federated GNN, fills the gap. Private Set Intersection (PSI) is leveraged to extend the local graph for each client, and thus solve the non-IID problem. Most importantly, DP (differential privacy) is applied not only to the weights but also to the edges of the intersection graph from PSI to fully protect the privacy of clients.
[^FedGCN]: TBC
[^CTFL]: Clustering-based hierarchical and Two-step-optimized FL (CTFL) employs a divide-and-conquer strategy, clustering clients based on the closeness of their local model parameters. Furthermore, we incorporate the particle swarm optimization algorithm in CTFL, which employs a two-step strategy for optimizing local models. This technique enables the central server to upload only one representative local model update from each cluster, thus reducing the communication overhead associated with model update transmission in FL.
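
A toy version of the clustering-then-representative idea, assuming flattened parameter vectors and a fixed distance threshold (CTFL's particle-swarm optimization of local models is omitted):

```python
import numpy as np

def cluster_representatives(client_params, threshold):
    """Greedily group clients whose local model parameters are close
    (L2 distance below threshold); only the first member of each cluster
    uploads its update, cutting communication roughly by the cluster size."""
    clusters = []
    for i, p in enumerate(client_params):
        for cluster in clusters:
            if np.linalg.norm(p - client_params[cluster[0]]) < threshold:
                cluster.append(i)
                break
        else:                      # no nearby cluster found: open a new one
            clusters.append([i])
    return [c[0] for c in clusters], clusters

rng = np.random.default_rng(2)
params = [rng.normal(size=5) for _ in range(8)]
reps, clusters = cluster_representatives(params, threshold=2.5)
print(reps, clusters)
```
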
[^FML-ST]: A privacy-preserving spatial-temporal prediction technique via federated learning (FL). Due to the inherent non-independent identically distributed (non-IID) characteristic of spatial-temporal data, the basic FL-based method cannot handle this data heterogeneity well by sharing a global model, so we propose personalized federated learning methods based on meta-learning. We automatically construct a global spatial-temporal pattern graph under a data federation. This global pattern graph incorporates and memorizes the locally learned patterns of all clients, and each client leverages those global patterns to customize its own model by evaluating the difference between the global and local pattern graphs. Each client then uses these customized parameters as its model initialization for spatial-temporal prediction tasks.
[^BiG-Fed]: We investigate FL scenarios in which data owners are related by a network topology (e.g., traffic prediction based on sensor networks). Existing personalized FL approaches cannot take this information into account. To address this limitation, we propose the Bilevel Optimization enhanced Graph-aided Federated Learning (BiG-Fed) approach. The inner weights enable local tasks to evolve towards personalization, and the outer shared weights on the server side target the non-i.i.d. problem, enabling individual tasks to evolve towards a global constraint space. To the best of our knowledge, BiG-Fed is the first bilevel optimization technique to enable FL approaches to cope with two nested optimization tasks at the FL server and FL clients simultaneously.
[^FL-ST]: We explore the threat of collusion attacks from multiple malicious clients who pose targeted attacks (e.g., label flipping) in a federated learning configuration. By leveraging client weights and the correlation among them, we develop a graph-based algorithm to detect malicious clients.
[^FLITplus]: Federated learning allows end users to build a global model collaboratively while keeping their training data isolated. We first simulate a heterogeneous federated-learning benchmark (FedChem) by jointly performing scaffold splitting and latent Dirichlet allocation on existing datasets. Our results on FedChem show that significant learning challenges arise when working with heterogeneous molecules across clients. We then propose a method to alleviate the problem: Federated Learning by Instance reweighTing (FLIT+). FLIT+ can align local training across clients. Experiments conducted on FedChem validate the advantages of this method.
[^ML-FGL]: Deep-learning-based Wi-Fi indoor fingerprint localization requires a large received signal strength (RSS) dataset for training. A multi-level federated graph learning and self-attention based personalized indoor localization method is proposed to further capture the intrinsic features of RSS and to learn the aggregation manner of shared information uploaded by clients, with better personalization accuracy.
[^PSO-GFML]: This paper proposes a decentralized online multitask learning algorithm based on GFL (O-GFML). Clients update their local models using continuous streaming data, while clients and multiple servers can train different but related models simultaneously. Furthermore, to enhance the communication efficiency of O-GFML, we develop a partial-sharing-based O-GFML (PSO-GFML). PSO-GFML allows participating clients to exchange only a portion of model parameters with their respective servers during a global iteration, while non-participating clients update their local models if they have access to new data.
[^FTL-NGCF]: TBC
[^DNG-FR]: AI healthcare applications rely on sensitive electronic healthcare records (EHRs) that are scarcely labelled and are often distributed across a network of symbiont institutions. In this work, we propose a dynamic neural graph based federated learning framework to address these challenges. The proposed framework extends Reptile, a model agnostic meta-learning (MAML) algorithm, to a federated setting. However, unlike the existing MAML algorithms, this paper proposes a dynamic variant of neural graph learning (NGL) to incorporate unlabelled examples in the supervised training setup. Dynamic NGL computes a meta-learning update by performing supervised learning on a labelled training example while performing metric learning on its labelled or unlabelled neighbourhood. This neighbourhood of a labelled example is established dynamically using local graphs built over batches of training examples. Each local graph is constructed by comparing the similarity between embeddings generated by the current state of the model. The introduction of metric learning on the neighbourhood makes this framework semi-supervised in nature. The experimental results on the publicly available MIMIC-III dataset highlight the effectiveness of the proposed framework for both single and multi-task settings under data decentralisation constraints and limited supervision.
[^FedGCN-NES]: A Federated Learning-Based Graph Convolutional Network (FedGCN). First, we propose a Graph Convolutional Network (GCN) as a local model of FL. Based on the classical graph convolutional neural network, TopK pooling layers and fully connected layers are added to this model to improve the feature extraction ability. Furthermore, to prevent pooling layers from losing information, cross-layer fusion is used in the GCN, giving FL an excellent ability to process non-Euclidean spatial data. Second, this paper proposes a federated aggregation algorithm based on an online adjustable attention mechanism, into which a trainable parameter is introduced. The aggregation method assigns the corresponding attention coefficient to each local model, which reduces the damage caused by inefficient local model parameters to the global model and improves the fault tolerance and accuracy of the FL algorithm.
[^Feddy]: Distributed surveillance systems have the ability to detect, track, and snapshot objects moving around in a certain space. The systems generate video data from multiple personal devices or street cameras. Intelligent video-analysis models are needed to learn dynamic representations of the objects for detection and tracking. In this work, we introduce Federated Dynamic Graph Neural Network (Feddy), a distributed and secured framework to learn the object representations from graph sequences: (1) It aggregates structural information from nearby objects in the current graph as well as dynamic information from those in the previous graph, using a self-supervised loss of predicting the trajectories of objects. (2) It is trained in a federated learning manner. The centrally located server sends the model to user devices, and local models on the respective user devices learn and periodically send their learning to the central server without ever exposing the users' data to the server. (3) Studies showed that the aggregated parameters could be inspected, though decrypted, when broadcast to clients for model synchronization, after the server performed a weighted average.
[^D2D-FedL]: Two important characteristics of contemporary wireless networks: (i) the network may contain heterogeneous communication/computation resources, and (ii) there may be significant overlaps in devices' local data distributions. In this work, we develop a novel optimization methodology that jointly accounts for these factors via intelligent device sampling complemented by device-to-device (D2D) offloading. Our optimization aims to select the best combination of sampled nodes and data offloading configuration to maximize FedL training accuracy subject to realistic constraints on the network topology and device capabilities. Theoretical analysis of the D2D offloading subproblem leads to new FedL convergence bounds and an efficient sequential convex optimizer. Using this result, we develop a sampling methodology based on graph convolutional networks (GCNs) which learns the relationship between network attributes, sampled nodes, and resulting offloading that maximizes FedL accuracy.
[^GCFL]: Graphs can also be regarded as a special type of data sample. We analyze real-world graphs from different domains to confirm that they indeed share certain graph properties that are statistically significant compared with random graphs. However, we also find that different sets of graphs, even from the same domain or same dataset, are non-IID regarding both graph structures and node features. We propose a graph clustered federated learning (GCFL) framework that dynamically finds clusters of local systems based on the gradients of GNNs, and theoretically justify that such clusters can reduce the structure and feature heterogeneity among graphs owned by the local systems. Moreover, we observe the gradients of GNNs to be rather fluctuating in GCFL, which impedes high-quality clustering, and design a gradient sequence-based clustering mechanism based on dynamic time warping (GCFL+).
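
GCFL+'s gradient-sequence clustering rests on a sequence distance. A minimal dynamic-time-warping sketch, assuming each client is summarized by a 1-D trace of per-round gradient norms (a simplification of the paper's setup):

```python
import numpy as np

def dtw(a, b):
    """Plain dynamic-time-warping distance between two 1-D sequences,
    e.g. per-round gradient norms of two clients. GCFL+ clusters clients
    on gradient sequences because raw GNN gradients fluctuate too much
    for a one-shot comparison."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Clients 0 and 1 have similar gradient traces; client 2 does not.
g0 = [1.0, 0.8, 0.6, 0.5]; g1 = [1.1, 0.8, 0.65, 0.5]; g2 = [0.2, 1.5, 0.1, 1.4]
print(dtw(g0, g1), dtw(g0, g2))  # small vs. large distance
```
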
[^FedSage]: In this work, towards the novel yet realistic setting of subgraph federated learning, we propose two major techniques: (1) FedSage, which trains a GraphSage model based on FedAvg to integrate node features, link structures, and task labels on multiple local subgraphs; (2) FedSage+, which trains a missing neighbor generator along with FedSage to deal with missing links across local subgraphs.
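
The aggregation half of FedSage is plain size-weighted FedAvg over GraphSage weights; a minimal sketch of that step (the FedSage+ missing-neighbor generator is not shown):

```python
import numpy as np

def fedavg(client_weights, sizes):
    """Size-weighted FedAvg: combine flattened parameter vectors from the
    clients, weighting each by its local data count."""
    total = float(sum(sizes))
    return sum(n * w for w, n in zip(client_weights, sizes)) / total

updates = [np.ones(3) * k for k in (1.0, 2.0, 3.0)]
print(fedavg(updates, sizes=[10, 30, 60]))  # -> [2.5 2.5 2.5]
```
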
[^CNFGNN]: Cross-Node Federated Graph Neural Network (CNFGNN), a federated spatio-temporal model which explicitly encodes the underlying graph structure using a graph neural network (GNN)-based architecture under the constraint of cross-node federated learning, which requires that data in a network of nodes is generated locally on each node and remains decentralized. CNFGNN operates by disentangling the temporal dynamics modeling on devices and the spatial dynamics on the server, utilizing alternating optimization to reduce the communication cost and facilitate computations on the edge devices.
[^FKGE]: A novel decentralized scalable learning framework, Federated Knowledge Graphs Embedding (FKGE), where embeddings from different knowledge graphs can be learnt in an asynchronous and peer-to-peer manner while being privacy-preserving. FKGE exploits adversarial generation between pairs of knowledge graphs to translate identical entities and relations of different domains into near embedding spaces. In order to protect the privacy of the training data, FKGE further implements a privacy-preserving neural network structure to guarantee no raw data leakage.
[^D-FedGNN]: A new Decentralized Federated Graph Neural Network (D-FedGNN for short) that allows multiple participants to train a graph neural network model without a centralized server. Specifically, D-FedGNN uses the decentralized parallel stochastic gradient descent algorithm DP-SGD to train the graph neural network model in a peer-to-peer network structure. To protect privacy during model aggregation, D-FedGNN introduces the Diffie-Hellman key exchange method to achieve secure model aggregation between clients.
[^FedSGC]: We study the vertical and horizontal settings for federated learning on graph data. We propose FedSGC to train the Simple Graph Convolution model under three data split scenarios.
[^FL-DISCO]: A holistic collaborative and privacy-preserving FL framework, namely FL-DISCO, which integrates GAN and GNN to generate molecular graphs.
[^FASTGNN]: We introduce a differential privacy-based adjacency matrix preserving approach for protecting the topological information. We also propose an adjacency matrix aggregation approach to allow local GNN-based models to access the global network for a better training effect. Furthermore, we propose a GNN-based model named attention-based spatial-temporal graph neural networks (ASTGNN) for traffic speed forecasting. We integrate the proposed federated learning framework and ASTGNN as FASTGNN for traffic speed forecasting.
[^DAG-FL]: In order to address device asynchrony and anomaly detection in FL while avoiding the extra resource consumption caused by blockchain, this paper introduces a framework for systematically empowering FL with a Direct Acyclic Graph (DAG)-based blockchain (DAG-FL).
[^FedE]: In this paper, we introduce the federated setting to keep Multi-Source KGs' privacy without triple transferring between KGs (knowledge graphs) and apply it in embedding knowledge graphs, a typical method which has proven effective for KGC (Knowledge Graph Completion) in the past decade. We propose a Federated Knowledge Graph Embedding framework, FedE, focusing on learning knowledge graph embeddings by aggregating locally-computed updates.
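
A hedged sketch of the server-side aggregation FedE describes, assuming the server tracks which client holds which entity via a 0/1 mask (the array shapes are an illustrative choice, not the paper's implementation):

```python
import numpy as np

def fede_aggregate(embeddings, masks):
    """Average entity embeddings across clients, counting each entity only
    at clients whose local KG contains it. embeddings: (clients, entities,
    dim); masks: (clients, entities) with 1 = entity present locally."""
    E = np.asarray(embeddings, dtype=float)
    M = np.asarray(masks, dtype=float)
    counts = np.maximum(M.sum(axis=0), 1.0)  # avoid division by zero
    return (E * M[..., None]).sum(axis=0) / counts[:, None]

emb = np.arange(12, dtype=float).reshape(2, 3, 2)  # 2 clients, 3 entities, dim 2
mask = [[1, 1, 0], [1, 0, 1]]                      # who holds which entity
print(fede_aggregate(emb, mask))
```
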
[^FKE]: A new federated framework, FKE, for representation learning of knowledge graphs to deal with the problems of privacy protection and heterogeneous data.
[^GFL]: GFL, a private multi-server federated learning scheme, which we call graph federated learning. We use cryptographic and differential privacy concepts to privatize the federated learning algorithm over a graph structure. We further show, under convexity and Lipschitz conditions, that the privatized process matches the performance of the non-private algorithm.
[^FeSoG]: A novel framework, Federated Social recommendation with Graph neural network (FeSoG). Firstly, FeSoG adopts relational attention and aggregation to handle heterogeneity. Secondly, FeSoG infers user embeddings using local data to retain personalization. The proposed model employs pseudo-labeling techniques with item sampling to protect privacy and enhance training.
[^FedGraphNN]: FedGraphNN, an open FL benchmark system that can facilitate research on federated GNNs. FedGraphNN is built on a unified formulation of graph FL and contains a wide range of datasets from different domains, popular GNN models, and FL algorithms, with secure and efficient system support.
[^Fed-CBT]: The connectional brain template (CBT) is a compact representation (i.e., a single connectivity matrix) of the multi-view brain networks of a given population. CBTs are especially powerful tools in brain dysconnectivity diagnosis as well as holistic brain mapping if they are learned properly, i.e., occupy the center of the given population. We propose the first federated connectional brain template learning (Fed-CBT) framework to learn how to integrate multi-view brain connectomic datasets collected by different hospitals into a single representative connectivity map. First, we choose a random fraction of hospitals to train our global model. Next, all hospitals send their model weights to the server to aggregate them. We also introduce a weighting method for aggregating model weights to take full benefit from all hospitals. Our model, to the best of our knowledge, is the first and only federated pipeline to estimate connectional brain templates using graph neural networks.
[^FedCG-MD]: A novel Cluster-driven Graph Federated Learning (FedCG) approach. In FedCG, clustering serves to address statistical heterogeneity, while Graph Convolutional Networks (GCNs) enable sharing knowledge across clusters. FedCG: i) identifies the domains via an FL-compliant clustering and instantiates domain-specific modules (residual branches) for each domain; ii) connects the domain-specific modules through a GCN at training to learn the interactions among domains and share knowledge; and iii) learns to cluster unsupervised via teacher-student classifier-training iterations and to address novel unseen test domains via their domain soft-assignment scores.
[^FedGNN]: Graph neural networks (GNNs) are widely used for recommendation to model high-order interactions between users and items. We propose a federated framework for privacy-preserving GNN-based recommendation, which can collectively train GNN models from decentralized user data while exploiting high-order user-item interaction information with privacy well protected.
[^DFL-PENS]: We study the problem of how to efficiently learn a model in a peer-to-peer system with non-iid client data. We propose a method named Performance-Based Neighbor Selection (PENS) where clients with similar data distributions detect each other and cooperate by evaluating their training losses on each other's data to learn a model suitable for the local data distribution.
[^Glint]: We study federated graph learning (FGL) under the cross-silo setting, where several servers are connected by a wide-area network, with the objective of improving the Quality-of-Service (QoS) of graph learning tasks. Glint is a decentralized federated graph learning system with two novel designs: network traffic throttling and priority-based flow scheduling.
[^FGNN]: A novel distributed scalable federated graph neural network (FGNN) to solve the cross-graph node classification problem. We add the PATE mechanism into the domain adversarial neural network (DANN) to construct a cross-network node classification model, and extract effective information from node features of source and target graphs for encryption and spatial alignment. Moreover, we use a one-to-one approach to construct cross-graph node classification models for multiple source graphs and the target graph. Federated learning is used to train the model jointly through multi-party cooperation to complete the target graph node classification task.
[^GraFeHTy]: Human Activity Recognition (HAR) from sensor measurements is still challenging due to noisy or missing labelled examples and issues concerning data privacy. We propose a novel algorithm, GraFeHTy, a Graph Convolution Network (GCN) trained in a federated setting. We construct a similarity graph from sensor measurements for each user and apply a GCN to perform semi-supervised classification of human activities by leveraging the inter-relatedness and closeness of activities.
[^D-GCN]: The aim of this work is to develop a fully-distributed algorithmic framework for training graph convolutional networks (GCNs). The proposed method is able to exploit the meaningful relational structure of the input data, which are collected by a set of agents that communicate over a sparse network topology. After formulating the centralized GCN training problem, we first show how to make inference in a distributed scenario where the underlying data graph is split among different agents. Then, we propose a distributed gradient descent procedure to solve the GCN training problem. The resulting model distributes computation along three lines: during inference, during back-propagation, and during optimization. Convergence to stationary solutions of the GCN training problem is also established under mild conditions. Finally, we propose an optimization criterion to design the communication topology between agents in order to match with the graph describing data relationships.
[^FL-DSGD]: We focus on improving the communication efficiency of fully decentralized federated learning (DFL) over a graph, where the algorithm performs local updates for several iterations and then enables communications among the nodes.
[^ASFGNN]: An Automated Separated-Federated Graph Neural Network (ASFGNN) learning paradigm. ASFGNN consists of two main components, i.e., the training of GNNs and the tuning of hyper-parameters. Specifically, to solve the data non-IID problem, we first propose a separated-federated GNN learning model, which decouples the training of GNN into two parts: the message passing part that is done by clients separately, and the loss computing part that is learnt by clients federally. To handle the time-consuming parameter tuning problem, we leverage the Bayesian optimization technique to automatically tune the hyper-parameters of all the clients.
[^DSGD]: Communication is a critical enabler of large-scale FL due to the significant amount of model information exchanged among edge devices. In this paper, we consider a network of wireless devices sharing a common fading wireless channel for the deployment of FL. Each device holds a generally distinct training set, and communication typically takes place in a Device-to-Device (D2D) manner. In the ideal case in which all devices within communication range can communicate simultaneously and noiselessly, a standard protocol that is guaranteed to converge to an optimal solution of the global empirical risk minimization problem under convexity and connectivity assumptions is Decentralized Stochastic Gradient Descent (DSGD). DSGD integrates local SGD steps with periodic consensus averages that require communication between neighboring devices. In this paper, wireless protocols are proposed that implement DSGD by accounting for the presence of path loss, fading, blockages, and mutual interference. The proposed protocols are based on graph coloring for scheduling and on both digital and analog transmission strategies at the physical layer, with the latter leveraging over-the-air computing via sparsity-based recovery.
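
A minimal, noiseless DSGD round, assuming a doubly-stochastic mixing matrix over the D2D topology (the paper's wireless scheduling and over-the-air layers are out of scope here):

```python
import numpy as np

def dsgd_round(models, grads, mixing, lr=0.1):
    """One DSGD round: every device takes a local SGD step, then replaces
    its model with a mixing-matrix average over its neighbors. `mixing`
    is a doubly-stochastic matrix encoding the communication topology."""
    W = np.asarray(mixing, dtype=float)
    X = np.stack([m - lr * g for m, g in zip(models, grads)])  # local steps
    return list(W @ X)                                         # consensus average

# Toy ring of three devices: each averages itself with its two neighbors.
mix = np.array([[0.5, 0.25, 0.25],
                [0.25, 0.5, 0.25],
                [0.25, 0.25, 0.5]])
models = [np.zeros(2), np.ones(2), 2 * np.ones(2)]
grads = [np.full(2, 0.5)] * 3
print(dsgd_round(models, grads, mix))
```

Alternating the SGD step with the consensus average is exactly what lets the models agree over time without a central server.
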
[^SGNN]: We propose a similarity-based graph neural network model, SGNN, which captures the structure information of nodes precisely in node classification tasks. It also takes advantage of federated learning to hide the original information from different data sources to protect users' privacy. We use a deep graph neural network with convolutional layers and dense layers to classify the nodes based on their structures and features.
[^FGL-DFC]: To detect financial misconduct, we propose a methodology to share key information across institutions by using a federated graph learning platform, which enables us to build more accurate machine learning models by leveraging both federated learning and graph learning approaches. We demonstrated that our federated model outperforms the local model by 20% with the UK FCA TechSprint data set.
[^cPDS]: We aim at solving a binary supervised classification problem to predict hospitalizations for cardiac events using a distributed algorithm. We focus on the soft-margin l1-regularized sparse Support Vector Machine (sSVM) classifier. We develop an iterative cluster Primal Dual Splitting (cPDS) algorithm for solving the large-scale sSVM problem in a decentralized fashion.
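
For reference, a sketch of the centralized objective behind cPDS, assuming a mean hinge term (the paper's exact scaling and the decentralized primal-dual iterations are not reproduced):

```python
import numpy as np

def ssvm_objective(w, b, X, y, lam=0.1):
    """Soft-margin l1-regularized sparse SVM objective: mean hinge loss
    plus an l1 penalty that drives coefficients of w to exact zeros."""
    hinge = np.maximum(0.0, 1.0 - y * (X @ w + b)).mean()  # soft margin
    return hinge + lam * np.abs(w).sum()                   # l1 sparsity

rng = np.random.default_rng(3)
X = rng.normal(size=(20, 4))
y = np.sign(rng.normal(size=20))
print(ssvm_objective(np.zeros(4), 0.0, X, y))  # hinge = 1.0 at the origin
```
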
[^GFL-APPNP]: We first formulate the Graph Federated Learning (GFL) problem that unifies LoG (Learning on Graphs) and FL (Federated Learning) in multi-client systems, and then propose sharing hidden representations instead of the raw data of neighbors to protect data privacy. To overcome the biased gradient problem in GFL, we provide a gradient estimation method and its convergence analysis under a non-convex objective.
[^FedRule]: TBC
[^M3FGM]: TBC
[^FedEgo]: FedEgo, a federated graph learning framework based on ego-graphs, where each client trains its local model while also contributing to the training of a global model. FedEgo applies GraphSAGE over ego-graphs to make full use of the structure information and utilizes Mixup for privacy concerns. To deal with statistical heterogeneity, we integrate personalization into learning and propose an adaptive mixing coefficient strategy that enables clients to achieve their optimal personalization.
[^FGCL]: TBC
[^FD-GATDR]: TBC
[^EF-HC]: TBC
[^PPSGCN]: TBC
[^FedVGCN]: TBC
[^FedGL]: TBC
[^FL-AGCNS]: TBC
[^dFedU]: TBC
[^FedAlign-KG]: TBC
[^P2P-FLG]: TBC
[^SGBoost]: An efficient and privacy-preserving vertical federated tree boosting framework, namely SGBoost, where multiple participants can collaboratively perform model training and query without staying online all the time. Specifically, we first design secure bucket sharing and best split finding algorithms, with which the global tree model can be constructed over vertically partitioned data; meanwhile, the privacy of training data can be well guaranteed. Then, we design an oblivious query algorithm to utilize the trained model without leaking any query data or results. Moreover, SGBoost does not require multi-round interactions between participants, significantly improving the system efficiency. Detailed security analysis shows that SGBoost can well guarantee the privacy of raw data, weights, buckets, and split information.
[^iFedCrowd]: iFedCrowd (incentive-boosted Federated Crowdsourcing), to manage the privacy and quality of crowdsourcing projects. iFedCrowd allows participants to locally process sensitive data and only upload encrypted training models, and then aggregates the model parameters to build a shared server model to protect data privacy. To motivate workers to build a high-quality global model in an effective way, we introduce an incentive mechanism that encourages workers to constantly collect fresh data to train accurate client models and boosts the global model training. We model the incentive-based interaction between the crowdsourcing platform and participating workers as a Stackelberg game, in which each side maximizes its own profit. We derive the Nash Equilibrium of the game to find the optimal solutions for the two sides.
[^OpBoost]: OpBoost. Three order-preserving desensitization algorithms satisfying a variant of LDP called distance-based LDP (dLDP) are designed to desensitize the training data. In particular, we optimize the dLDP definition and study efficient sampling distributions to further improve the accuracy and efficiency of the proposed algorithms. The proposed algorithms provide a trade-off between the privacy of pairs with large distance and the utility of desensitized values.
[^RevFRF]: TBC
[^FFGB]: Federated functional gradient boosting (FFGB). Under appropriate assumptions on the weak learning oracle, the FFGB algorithm is proved to efficiently converge to certain neighborhoods of the global optimum. The radii of these neighborhoods depend upon the level of heterogeneity measured via the total variation distance and the much tighter Wasserstein-1 distance, and diminish to zero as the setting becomes more homogeneous.
[^FRF]: Federated Random Forests (FRF) models, focusing particularly on the heterogeneity within and between datasets.
[^federation-boosting]: This paper proposes FL algorithms that build federated models without relying on gradient descent-based methods. Specifically, we leverage distributed versions of the AdaBoost algorithm to acquire strong federated models. In contrast with previous approaches, our proposal does not put any constraint on the client-side learning models.
[^FF]: Federated Forest, a lossless learning model of the traditional random forest method, i.e., achieving the same level of accuracy as the non-privacy-preserving approach. Based on it, we developed a secure cross-regional machine learning system that allows a learning process to be jointly trained over clients from different regions with the same user samples but different attribute sets, processing the data stored in each of them without exchanging raw data. A novel prediction algorithm is also proposed that can largely reduce the communication overhead.
[^Fed-GBM]: Fed-GBM (Federated Gradient Boosting Machines), a cost-effective collaborative learning framework consisting of two-stage voting and node-level parallelism, to address the problems in co-modelling for non-intrusive load monitoring (NILM).
[^VPRF]: A verifiable privacy-preserving scheme (VPRF) based on vertical federated random forest, in which the set of users changes dynamically. First, we design homomorphic comparison and voting statistics algorithms based on multikey homomorphic encryption for privacy preservation. Then, we propose a multiclient delegated computing verification algorithm to make up for the fact that the above algorithms cannot verify data integrity.
[^BOFRF]: A novel federated ensemble classification algorithm for horizontally partitioned data, namely Boosting-based Federated Random Forest (BOFRF), which not only increases the predictive power of all participating sites, but also provides significant improvement to the predictive power of sites with unsuccessful local models. We implement a federated version of random forest, which is a well-known bagging algorithm, by adapting the idea of boosting to it. We introduce a novel aggregation and weight calculation methodology that assigns weights to local classifiers based on their classification performance at each site without increasing the communication or computation cost.
[^eFL-Boost]: Efficient FL for GBDT (eFL-Boost), which minimizes accuracy loss, communication costs, and information leakage. The proposed scheme focuses on the appropriate allocation of local computation (performed individually by each organization) and global computation (performed cooperatively by all organizations) when updating a model. A tree structure is determined locally at one of the organizations, and leaf weights are calculated globally by aggregating the local gradients of all organizations. Specifically, eFL-Boost requires only three communications per update, and only statistical information with low privacy risk is leaked to other organizations.
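A minimal sketch of the local-structure/global-leaf split described above: one organization picks the tree structure locally, and every organization contributes only aggregated per-leaf gradient/hessian sums, from which global leaf weights follow the standard GBDT formula `w = -sum(g) / (sum(h) + lambda)`. Depth-1 trees, squared loss, and all constants are simplifying assumptions for illustration:

```python
import numpy as np

lam = 1.0

def local_data(seed, n=100):
    r = np.random.default_rng(seed)
    x = r.uniform(0, 1, n)
    return x, np.sin(3 * x) + 0.1 * r.normal(size=n)

orgs = [local_data(s) for s in range(3)]          # three organizations
pred = [np.zeros(len(x)) for x, _ in orgs]

def best_split(x, g, h):
    # Tree-structure search happens locally at a single organization.
    best, thr = None, None
    for t in np.quantile(x, np.linspace(0.1, 0.9, 9)):
        left = x <= t
        gain = g[left].sum() ** 2 / (h[left].sum() + lam) \
             + g[~left].sum() ** 2 / (h[~left].sum() + lam)
        if best is None or gain > best:
            best, thr = gain, t
    return thr

for _ in range(10):                                # boosting rounds
    grads = [(p - y, np.ones_like(y)) for (x, y), p in zip(orgs, pred)]
    thr = best_split(orgs[0][0], *grads[0])        # structure: local, one org
    G = [np.zeros(2), np.zeros(2)]                 # per-leaf [sum_g, sum_h]
    for (x, _), (g, h) in zip(orgs, grads):        # orgs send only leaf sums
        left = x <= thr
        G[0] += np.array([g[left].sum(), h[left].sum()])
        G[1] += np.array([g[~left].sum(), h[~left].sum()])
    w = [-gs / (hs + lam) for gs, hs in G]         # leaf weights: global
    for k, (x, _) in enumerate(orgs):
        pred[k] = pred[k] + np.where(x <= thr, w[0], w[1])

print("train MSE:", np.mean([(p - y) ** 2 for (_, y), p in zip(orgs, pred)]).round(4))
```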
[^MP-FedXGB]: MP-FedXGB, a lossless multi-party federated XGBoost learning framework with a security guarantee, which reshapes XGBoost's split criterion calculation process under a secret sharing setting and solves the leaf weight calculation problem by leveraging distributed optimization.
[^FL-RF]: Random Forest Based on Federated Learning for Intrusion Detection
[^FL-DT]: A federated decision tree-based random forest algorithm where a small number of organizations or industry companies collaboratively build models.
[^FL-ST]: We explore the threat of collusion attacks from multiple malicious clients who pose targeted attacks (e.g., label flipping) in a federated learning configuration. By leveraging client weights and the correlation among them, we develop a graph-based algorithm to detect malicious clients.
[^VF2Boost]: VF2Boost, a novel and efficient vertical federated GBDT system. First, to handle the deficiency caused by frequent mutual waiting in federated training, we propose a concurrent training protocol to reduce the idle periods. Second, to speed up the cryptography operations, we analyze the characteristics of the algorithm and propose customized operations. Empirical results show that our system can be 12.8-18.9 times faster than existing vertical federated implementations and supports much larger datasets.
[^SecureBoost]: SecureBoost, a novel lossless privacy-preserving tree-boosting system. SecureBoost first conducts entity alignment under a privacy-preserving protocol and then constructs boosting trees across multiple parties with a carefully designed encryption strategy. This federated learning system allows the learning process to be jointly conducted over multiple parties with common user samples but different feature sets, which corresponds to a vertically partitioned data set.
[^BFF-IDS]: A Blockchain-Based Federated Forest for SDN-Enabled In-Vehicle Network Intrusion Detection System
[^I-GBDT]: An improved gradient boosting decision tree (GBDT) federated ensemble learning method, which takes the average gradient of similar samples and its own gradient as a new gradient to improve the accuracy of the local model. Different ensemble learning methods are used to integrate the parameters of the local models, thus improving the accuracy of the updated global model.
[^Fed-EINI]: Decision tree ensembles such as gradient boosting decision trees (GBDT) and random forests are widely applied powerful models with high interpretability and modeling efficiency. However, state-of-the-art frameworks for decision tree ensembles in vertical federated learning adopt anonymous features to avoid possible data breaches, which compromises the interpretability of the model. Fed-EINI analyzes the necessity of disclosing the meanings of features to the guest party in vertical federated learning. Fed-EINI protects data privacy while allowing the disclosure of feature meanings by concealing decision paths, and adopts a communication-efficient secure computation method for inference outputs.
[^GBF-Cen]: Propose a new tree-boosting method, named Gradient Boosting Forest (GBF), where the single decision tree in each gradient boosting round of GBDT is replaced by a set of trees trained on different subsets of the training data (referred to as a forest), which enables training GBDT in federated learning scenarios. We empirically show that GBF outperforms existing GBDT methods in both the centralized (GBF-Cen) and federated (GBF-Fed) cases.
[^KA-FL]: A privacy-preserving framework using Mondrian k-anonymity with decision trees for the horizontally partitioned data.
[^AF-DNDF]: AF-DNDF, which extends DNDF (Deep Neural Decision Forests, uniting classification trees with the representation learning functionality from deep convolutional neural networks) with an asynchronous federated aggregation protocol. Based on the local quality of each classification tree, our architecture can select and combine the optimal groups of decision trees from multiple local devices.
[^CB-DP]: Differential Privacy is used to obtain theoretically sound privacy guarantees against such inference attacks by noising the exchanged update vectors. However, the added noise is proportional to the model size which can be very large with modern neural networks. This can result in poor model quality. Compressive sensing is used to reduce the model size and hence increase model quality without sacrificing privacy.
[^SimFL]: A practical horizontal federated environment with relaxed privacy constraints. In this environment, a dishonest party might obtain some information about the other parties' data, but it is still impossible for the dishonest party to derive the actual raw data of other parties. Specifically, each party boosts a number of trees by exploiting similarity information based on locality-sensitive hashing.
[^Pivot-DT]: Pivot, a novel solution for privacy preserving vertical decision tree training and prediction, ensuring that no intermediate information is disclosed other than those the clients have agreed to release (i.e., the final tree model and the prediction output). Pivot does not rely on any trusted third party and provides protection against a semi-honest adversary that may compromise m - 1 out of m clients. We further identify two privacy leakages when the trained decision tree model is released in plain-text and propose an enhanced protocol to mitigate them. The proposed solution can also be extended to tree ensemble models, e.g., random forest (RF) and gradient boosting decision tree (GBDT) by treating single decision trees as building blocks.
[^FEDXGB]: FEDXGB, a federated extreme gradient boosting (XGBoost) scheme supporting forced aggregation. First, FEDXGB involves a new homomorphic encryption (HE) based secure aggregation scheme for FL. Then, FEDXGB extends FL to a new machine learning model by applying the secure aggregation scheme to the classification and regression tree building of XGBoost.
[^FedCluster]: FedCluster, a novel federated learning framework with improved optimization efficiency, together with an investigation of its theoretical convergence properties. FedCluster groups the devices into multiple clusters that perform federated learning cyclically in each learning round.
[^FL-XGBoost]: The proposed FL-XGBoost can train a sensitive task to be solved among different entities without revealing their own data. FL-XGBoost achieves a significant reduction in the number of communications between entities by exchanging decision tree models.
[^FL-PON]: A bandwidth slicing algorithm in PONs (passive optical networks) is introduced for efficient FL, in which bandwidth is reserved for the involved ONUs (optical network units) collaboratively and mapped into each polling cycle.
[^DFedForest]: A distributed machine learning system based on local random forest algorithms created with shared decision trees through the blockchain.
[^DRC-tree]: A decentralized redundant n-Cayley tree (DRC-tree) for federated learning, which explores the hierarchical structure of the n-Cayley tree to enhance the redundancy rate in federated learning and mitigate the impact of stragglers. In the DRC-tree structure, the fusion node serves as the root node, while all the worker devices are the intermediate tree nodes and leaves, formulated through a distributed message passing interface. The redundancy of workers is constructed layer by layer with a given redundancy branch degree.
[^Fed-sGBM]: Fed-sGBM, a federated soft gradient boosting machine framework applicable to streaming data. Compared with traditional gradient boosting methods, where base learners are trained sequentially, each base learner in the proposed framework can be efficiently trained in a parallel and distributed fashion.
[^FL-DNDF]: Deep neural decision forests (DNDF) combine the divide-and-conquer principle with the representation learning property. By parameterizing the probability distributions in the prediction nodes of the forest and including all trees of the forest in the loss function, a gradient of the whole forest can be computed, which several federated learning algorithms utilize.
[^Fed-TDA]: A federated tabular data augmentation method, named Fed-TDA. The core idea of Fed-TDA is to synthesize tabular data for data augmentation using some simple statistics (e.g., distributions of each column and global covariance). Specifically, we propose the multimodal distribution transformation and inverse cumulative distribution mapping to synthesize continuous and discrete columns in tabular data, respectively, from noise according to the pre-learned statistics. Furthermore, we theoretically show that Fed-TDA not only preserves data privacy but also maintains the distribution of the original data and the correlation between columns.
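To make the inverse-CDF idea concrete, here is a minimal sketch: uniform noise is mapped through the inverse cumulative distribution of pre-learned column statistics, yielding a synthetic column with a matching distribution. A single Gaussian column and the use of scipy are illustrative assumptions; the paper itself handles multimodal and discrete columns:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
real = rng.normal(loc=3.0, scale=2.0, size=5000)       # a real continuous column

mu, sigma = real.mean(), real.std()                    # only statistics are shared
noise = rng.uniform(size=5000)
synthetic = stats.norm.ppf(noise, loc=mu, scale=sigma) # inverse-CDF mapping

print(real.mean().round(2), synthetic.mean().round(2))
print(real.std().round(2), synthetic.std().round(2))
```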
[^TabLeak]: Most high-stakes applications of FL (e.g., legal and financial) use tabular data. Compared to the NLP and image domains, reconstruction of tabular data poses several unique challenges: (i) categorical features introduce a significantly more difficult mixed discrete-continuous optimization problem, (ii) the mix of categorical and continuous features causes high variance in the final reconstructions, and (iii) structured data makes it difficult for the adversary to judge reconstruction quality. In this work, we tackle these challenges and propose the first comprehensive reconstruction attack on tabular data, called TabLeak. TabLeak is based on three key ingredients: (i) a softmax structural prior, implicitly converting the mixed discrete-continuous optimization problem into an easier fully continuous one, (ii) a way to reduce the variance of the reconstructions through a pooled ensembling scheme exploiting the structure of tabular data, and (iii) an entropy measure which can successfully assess reconstruction quality.
[^Hercules]: TBC
[^FedGBF]: TBC
[^HFL-XGBoost]: A hybrid federated learning framework based on XGBoost, for distributed power prediction from real-time external features. In addition to introducing boosted trees to improve accuracy and interpretability, we combine horizontal and vertical federated learning, to address the scenario where features are scattered in local heterogeneous parties and samples are scattered in various local districts. Moreover, we design a dynamic task allocation scheme such that each party gets a fair share of information, and the computing power of each party can be fully leveraged to boost training efficiency.
[^EBHE-VFXGB]: Efficient XGBoost vertical federated learning. We propose a novel batch homomorphic encryption method to cut the cost of encryption-related computation and transmission by nearly half. This is achieved by encoding the first-order and second-order derivatives into a single number for encryption, ciphertext transmission, and homomorphic addition. The sums of multiple first-order and second-order derivatives can then be decoded simultaneously from the sum of the encoded values.
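A minimal sketch of that packing trick: the two derivatives are fixed-point encoded into disjoint bit ranges of one integer, so a single ciphertext (and a single homomorphic addition) carries both. The slot width, scale, offset, and the plain-integer stand-in for the cipher are illustrative assumptions; a real system would encrypt the packed integers, e.g. with Paillier:

```python
SLOT = 2 ** 40    # slot width; must exceed the max possible per-slot sum
SCALE = 2 ** 16   # fixed-point scale for float derivatives
OFFSET = 2 ** 20  # shift keeping negative gradients non-negative

def encode(g, h):
    gi = int(round(g * SCALE)) + OFFSET
    hi = int(round(h * SCALE)) + OFFSET
    return gi * SLOT + hi                 # pack (g, h) into one integer

def decode_sum(packed, n):
    g = packed // SLOT - n * OFFSET       # n offsets were added; remove them
    h = packed % SLOT - n * OFFSET
    return g / SCALE, h / SCALE

pairs = [(-0.31, 0.25), (0.12, 0.25), (0.57, 0.25)]     # (g_i, h_i) per sample
total = sum(encode(g, h) for g, h in pairs)             # homomorphic-add stand-in
print(decode_sum(total, len(pairs)))                    # ~(0.38, 0.75)
```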
[^SecureBoostplus]: TBC
[^Fed-TGAN]: TBC
[^FedXGBoost]: Two variants of federated XGBoost with privacy guarantees: FedXGBoost-SMM and FedXGBoost-LDP. The first protocol, FedXGBoost-SMM, deploys an enhanced secure matrix multiplication method to preserve privacy with lossless accuracy and lower overhead than encryption-based techniques. Developed independently, the second protocol, FedXGBoost-LDP, is heuristically designed with noise perturbation for local differential privacy.
[^FederBoost]: FederBoost for private federated learning of gradient boosting decision trees (GBDT). It supports running GBDT over both horizontally and vertically partitioned data. The key observation in designing FederBoost is that the whole training process of GBDT relies on the order of the data instead of the values. Consequently, vertical FederBoost does not require any cryptographic operation, and horizontal FederBoost only requires lightweight secure aggregation.
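A minimal sketch of the order-only observation behind FederBoost: GBDT split finding at quantile boundaries depends only on each sample's quantile rank, so a party can reveal per-sample bucket indices instead of raw feature values. The bucket count and data are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
feature = rng.lognormal(size=1000)        # raw values stay with their owner
n_buckets = 16

# The owner computes quantile boundaries and releases only bucket ids.
edges = np.quantile(feature, np.linspace(0, 1, n_buckets + 1)[1:-1])
bucket_ids = np.searchsorted(edges, feature)    # this is all that is shared

# A split on the raw feature at a quantile boundary equals a split on ids.
t = edges[7]
assert np.array_equal(feature <= t, bucket_ids <= 7)
print("splits on bucket ids reproduce splits on raw values")
```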
[^F-XGBoost]: A horizontal federated XGBoost algorithm to solve the federated anomaly detection problem, where the anomaly detection aims to identify abnormalities from extremely unbalanced datasets and can be considered a special classification problem. Our proposed federated XGBoost algorithm incorporates data aggregation and sparse federated update processes to balance the tradeoff between privacy and learning performance. In particular, we introduce the virtual data sample by aggregating a group of users' data together at a single distributed node.
[^FedDis]: With the advent of deep learning and the increasing use of brain MRIs, great interest has arisen in automated anomaly segmentation to improve clinical workflows; however, curating medical imaging is time-consuming and expensive. FedDis collaboratively trains an unsupervised deep convolutional autoencoder on 1,532 healthy magnetic resonance scans from four different institutions, and its performance is evaluated in identifying pathologies such as multiple sclerosis, vascular lesions, and low- and high-grade tumours/glioblastoma on a total of 538 volumes from six different institutions. To mitigate the statistical heterogeneity among different institutions, we disentangle the parameter space into global (shape) and local (appearance) parameters. Four institutes jointly train the shape parameters to model healthy brain anatomical structures. Every institute trains the appearance parameters locally to allow for client-specific personalization of the global domain-invariant features.
[^FL-healthy]: This progress has emphasized that, from model development to model deployment, data play central roles. In this Review, we provide a data-centric view of the innovations and challenges that are defining ML for healthcare. We discuss deep generative models and federated learning as strategies to augment datasets for improved model performance, as well as the use of the more recent transformer models for handling larger datasets and enhancing the modelling of clinical text. We also discuss data-focused problems in the deployment of ML, emphasizing the need to efficiently deliver data to ML models for timely clinical predictions and to account for natural data shifts that can deteriorate model performance.
[^FedPerGNN]: FedPerGNN, a federated GNN framework for both effective and privacy-preserving personalization. Through a privacy-preserving model update method, we can collaboratively train GNN models based on decentralized graphs inferred from local data. To further exploit graph information beyond local interactions, we introduce a privacy-preserving graph expansion protocol to incorporate high-order information under privacy protection.
[^FedStar]: From real-world graph datasets, we observe that some structural properties are shared across various domains, presenting great potential for sharing structural knowledge in FGL. Inspired by this, we propose FedStar, an FGL framework that extracts and shares the common underlying structure information for inter-graph federated learning tasks. To explicitly extract the structure information, rather than encoding it along with the node features, we define structure embeddings and encode them with an independent structure encoder. The structure encoder is shared across clients while the feature-based knowledge is learned in a personalized way, making FedStar capable of capturing more structure-based domain-invariant information and avoiding feature misalignment issues. We perform extensive experiments over both cross-dataset and cross-domain non-IID FGL settings.
[^FedGS]: Federated Graph-based Sampling (FedGS), to stabilize the global model update and mitigate long-term bias given arbitrary client availability. First, we model the data correlations of clients with a Data-Distribution-Dependency Graph (3DG) that helps keep the sampled clients' data apart from each other, which is theoretically shown to improve the approximation to the optimal model update. Second, constrained by the far-apart data distributions of the sampled clients, we further minimize the variance of the numbers of times that clients are sampled, to mitigate long-term bias.
[^FLIX]: TBC
[^DP-SCAFFOLD]: TBC
[^SparseFed]: TBC
[^QLSD]: TBC
[^MaKEr]: We study the knowledge extrapolation problem to embed new components (i.e., entities and relations) that come with emerging knowledge graphs (KGs) in the federated setting. In this problem, a model trained on an existing KG needs to embed an emerging KG with unseen entities and relations. To solve this problem, we introduce the meta-learning setting, where a set of tasks are sampled on the existing KG to mimic the link prediction task on the emerging KG. Based on sampled tasks, we meta-train a graph neural network framework that can construct features for unseen components based on structural information and output embeddings for them.
[^SFL]: A novel structured federated learning (SFL) framework to enhance the knowledge-sharing process in PFL by leveraging graph-based structural information among clients, learning both the global and personalized models simultaneously using client-wise relation graphs and clients' private data. We cast SFL with a graph into a novel optimization problem that can model client-wise complex relations and graph-based structural topology in a unified framework. Moreover, in addition to using an existing relation graph, SFL can be expanded to learn hidden relations among clients.
[^VFGNN]: VFGNN, a federated GNN learning paradigm for privacy-preserving node classification under the vertically partitioned data setting, which can be generalized to existing GNN models. Specifically, we split the computation graph into two parts. We leave the private data (i.e., features, edges, and labels) related computations on data holders and delegate the rest of the computation to a semi-honest server. We also propose to apply differential privacy to prevent potential information leakage from the server.
[^Fed-ET]: TBC
[^CReFF]: TBC
[^FedCG]: TBC
[^FedDUAP]: TBC
[^SpreadGNN]: SpreadGNN, a novel multi-task federated training framework capable of operating in the presence of partial labels and the absence of a central server, for the first time in the literature. We provide convergence guarantees and empirically demonstrate the efficacy of our framework on a variety of non-I.I.D. distributed graph-level molecular property prediction datasets with partial labels.
[^SmartIdx]: TBC
[^FedFIM]: TBC
[^FedProto]: TBC
[^FedSoft]: TBC
[^FedFR]: TBC
[^SplitFed]: TBC
[^FlyNNFL]: TBC
[^FedSpeech]: TBC
[^FedKT]: TBC
[^FEDMD-NFDP]: TBC
[^LDP-FL]: TBC
[^FedFV]: TBC
[^H-FL]: TBC
[^FedRecplusplus]: TBC
[^FLAME_D]: TBC
[^FedAMP]: TBC
[^FedGame]: TBC
[^TPAMI-LAQ]: This paper focuses on the communication-efficient federated learning problem and develops a novel distributed quantized gradient approach, which is characterized by adaptive communication of the quantized gradients. The key idea for saving communication from the workers to the server is to quantize gradients as well as to skip less informative quantized gradient communications by reusing previous gradients. Quantizing and skipping result in "lazy" worker-server communications, which justifies the term Lazily Aggregated Quantized (LAQ) gradient. Theoretically, the LAQ algorithm achieves the same linear convergence as gradient descent in the strongly convex case, while effecting major savings in communication in terms of transmitted bits and communication rounds.
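A minimal sketch of the lazy quantize-and-skip pattern described above: each worker quantizes its gradient and skips the upload when the quantized gradient has changed too little since the copy it last sent, letting the server reuse the stale one. The quadratic losses, 8-level quantizer, and skipping threshold are illustrative assumptions, not the paper's exact rule:

```python
import numpy as np

rng = np.random.default_rng(3)
n_workers, dim, lr, rounds = 5, 10, 0.1, 60
targets = [rng.normal(size=dim) for _ in range(n_workers)]  # loss_i = ||x - t_i||^2 / 2
x = np.zeros(dim)
last_sent = [np.zeros(dim) for _ in range(n_workers)]
skipped = 0

def quantize(v, levels=8, radius=4.0):
    step = 2 * radius / (levels - 1)
    return np.clip(np.round(v / step) * step, -radius, radius)

for _ in range(rounds):
    agg = np.zeros(dim)
    for i in range(n_workers):
        q = quantize(x - targets[i])          # quantized local gradient
        if np.linalg.norm(q - last_sent[i]) ** 2 < 0.01 * np.linalg.norm(q) ** 2:
            skipped += 1                      # lazy: server reuses stale copy
        else:
            last_sent[i] = q                  # fresh upload
        agg += last_sent[i]
    x -= lr * agg / n_workers

print("skipped uploads:", skipped, "of", rounds * n_workers)
print("distance to optimum:", np.linalg.norm(x - np.mean(targets, axis=0)).round(3))
```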
[^FedPop]: A novel methodology coined FedPop that recasts personalised FL into the population modeling paradigm, where clients' models involve fixed common population parameters and random individual ones, aiming at explaining data heterogeneity. To derive convergence guarantees for our scheme, we introduce a new class of federated stochastic optimisation algorithms which rely on Markov chain Monte Carlo methods. Compared to existing personalised FL methods, the proposed methodology has important benefits: it is robust to client drift, practical for inference on new clients, and above all, enables uncertainty quantification under mild computational and memory overheads. We provide non-asymptotic convergence guarantees for the proposed algorithms.
[^CoreFed]: We aim to formally represent this problem and address these fairness issues using concepts from cooperative game theory and social choice theory. We model the task of learning a shared predictor in the federated setting as a fair public decision making problem, and then define the notion of core-stable fairness: given N agents, there is no subset of agents S that can benefit significantly by forming a coalition among themselves based on their utilities U_N and U_S. Core-stable predictors are robust to low-quality local data from some agents, and additionally satisfy Proportionality (each agent gets at least a 1/n fraction of the best utility that she can get from any predictor) and Pareto-optimality (there exists no model that can increase the utility of an agent without decreasing the utility of another), two well sought-after fairness and efficiency notions within social choice. We then propose an efficient federated learning protocol CoreFed to optimize a core-stable predictor. CoreFed determines a core-stable predictor when the loss functions of the agents are convex. CoreFed also determines approximate core-stable predictors when the loss functions are not convex, as with smooth neural networks. We further show the existence of core-stable predictors in more general settings using Kakutani's fixed point theorem.
[^SecureFedYJ]: The Yeo-Johnson (YJ) transformation is a standard parametrized per-feature unidimensional transformation often used to Gaussianize features in machine learning. In this paper, we investigate the problem of applying the YJ transformation in a cross-silo Federated Learning setting under privacy constraints. For the first time, we prove that the YJ negative log-likelihood is in fact convex, which allows us to optimize it with exponential search. We numerically show that the resulting algorithm is more stable than the state-of-the-art approach based on the Brent minimization method. Building on this simple algorithm and Secure Multiparty Computation routines, we propose SECUREFEDYJ, a federated algorithm that performs a pooled-equivalent YJ transformation without leaking more information than the final fitted parameters do. Quantitative experiments on real data demonstrate that, in addition to being secure, our approach reliably normalizes features across silos as well as if data were pooled, making it a viable approach for safe federated feature Gaussianization.
[^FedRolex]: A simple yet effective model-heterogeneous FL method named FedRolex to tackle this constraint. Unlike the model-homogeneous scenario, the fundamental challenge of model heterogeneity in FL is that different parameters of the global model are trained on heterogeneous data distributions. FedRolex addresses this challenge by rolling the submodel in each federated iteration so that the parameters of the global model are evenly trained on the global data distribution across all devices, making the process more akin to model-homogeneous training.
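A minimal sketch of the rolling submodel extraction described above: each round a client receives a contiguous (wrapping) window of the global parameters, and the window advances every round so all parameters get trained roughly evenly. The layer width and client capacities are illustrative assumptions:

```python
import numpy as np

global_width = 16

def rolling_indices(round_idx, client_capacity):
    # Window of size `client_capacity` starting at the rolling offset.
    start = round_idx % global_width
    return [(start + k) % global_width for k in range(client_capacity)]

coverage = np.zeros(global_width, dtype=int)
for rnd in range(32):
    for capacity in (4, 8):                 # two heterogeneous clients
        coverage[rolling_indices(rnd, capacity)] += 1

print(coverage)   # every global parameter is trained, and evenly so
```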
[^DReS-FL]: The data-owning clients may drop out of the training process arbitrarily, which significantly degrades training performance. This paper proposes a Dropout-Resilient Secure Federated Learning (DReS-FL) framework based on Lagrange coded computing (LCC) to tackle both the non-IID and dropout problems. The key idea is to utilize Lagrange coding to secretly share the private datasets among clients so that the effects of the non-IID distribution and client dropouts can be compensated for during local gradient computations. To provide a strict privacy guarantee for local datasets and to correctly decode the gradient at the server, the gradient has to be a polynomial function in a finite field, and thus we construct polynomial integer neural networks (PINNs) to enable our framework. Theoretical analysis shows that DReS-FL is resilient to client dropouts and provides privacy protection for the local datasets.
[^FairVFL]: Since in real-world applications the data may contain bias on fairness-sensitive features (e.g., gender), VFL models may inherit bias from training data and become unfair for some user groups. However, existing fair machine learning methods usually rely on the centralized storage of fairness-sensitive features to achieve model fairness, which is usually inapplicable in federated scenarios. In this paper, we propose a fair vertical federated learning framework (FairVFL), which can improve the fairness of VFL models. The core idea of FairVFL is to learn unified and fair representations of samples based on the decentralized feature fields in a privacy-preserving way. Specifically, each platform with fairness-insensitive features first learns local data representations from local features. Then, these local representations are uploaded to a server and aggregated into a unified representation for the target task. In order to learn a fair unified representation, we send it to each platform storing fairness-sensitive features and apply adversarial learning to remove bias from the unified representation inherited from the biased data. Moreover, for protecting user privacy, we further propose a contrastive adversarial learning method to remove private information from the unified representation on the server before sending it to the platforms keeping fairness-sensitive features.
[^VR-ProxSkip]: We study distributed optimization methods based on the local training (LT) paradigm, i.e., methods which achieve communication efficiency by performing richer local gradient-based training on the clients before (expensive) parameter averaging is allowed to take place. While these methods were first proposed about a decade ago and form the algorithmic backbone of federated learning, there is an enormous gap between their practical performance and our theoretical understanding. Looking back at the progress of the field, we identify 5 generations of LT methods: 1) heuristic, 2) homogeneous, 3) sublinear, 4) linear, and 5) accelerated. The 5th generation was initiated by the ProxSkip method of Mishchenko et al. (2022), whose analysis provided the first theoretical confirmation that LT is a communication acceleration mechanism. Inspired by this recent progress, we contribute to the 5th generation of LT methods by showing that it is possible to enhance ProxSkip further using variance reduction. While all previous theoretical results for LT methods ignore the cost of local work altogether and are framed purely in terms of the number of communication rounds, we construct a method that can be substantially faster in terms of total training time than the state-of-the-art method ProxSkip, in theory and practice, in the regime where local computation is sufficiently expensive. We characterize this threshold theoretically and confirm our theoretical predictions with empirical results. Our treatment of variance reduction is generic and can work with a large number of variance reduction techniques, which may lead to further applications in the future.
[^VF-PS]: Vertical Federated Learning (VFL) methods are facing two challenges: (1) scalability when the number of participants grows to even a modest scale, and (2) diminishing returns w.r.t. the number of participants: not all participants are equally important, and many will not introduce quality improvement in a large consortium. Inspired by these two challenges, in this paper we ask: how can we select the l out of m participants, where l ≪ m, that are most important? We call this problem Vertically Federated Participant Selection and model it with a principled mutual information-based view. Our first technical contribution is VF-MINE, a Vertically Federated Mutual INformation Estimator, which uses one of the most celebrated algorithms in database theory, Fagin's algorithm, as a building block. Our second contribution is to further optimize VF-MINE to enable VF-PS, a group testing-based participant selection framework.
[^DENSE]: A novel two-stage Data-free One-Shot Federated Learning (DENSE) framework, which trains the global model through a data generation stage and a model distillation stage. DENSE is a practical one-shot FL method that can be applied in reality due to the following advantages: (1) DENSE requires no additional information (except the model parameters) to be transferred between clients and the server; (2) DENSE does not require any auxiliary dataset for training; (3) DENSE considers model heterogeneity in FL, i.e., different clients can have different model architectures.
[^CalFAT]: We study the problem of FAT (federated adversarial training) under label skewness and first reveal one root cause of the training instability and natural accuracy degradation issues: skewed labels lead to non-identical class probabilities and heterogeneous local models. We then propose a Calibrated FAT (CalFAT) approach to tackle the instability issue by calibrating the logits adaptively to balance the classes.
[^SAGDA]: Federated min-max learning has received increasing attention in recent years thanks to its wide range of applications in various learning paradigms. We propose a new algorithmic framework called stochastic sampling averaging gradient descent ascent (SAGDA), which i) assembles stochastic gradient estimators from randomly sampled clients as control variates and ii) leverages two learning rates on both server and client sides. We show that SAGDA achieves a linear speedup in terms of both the number of clients and local update steps, which yields an O(ε⁻²) communication complexity that is orders of magnitude lower than the state of the art. Interestingly, by noting that the standard federated stochastic gradient descent ascent (FSGDA) is in fact a control-variate-free special version of SAGDA, we immediately arrive at an O(ε⁻²) communication complexity result for FSGDA. Therefore, through the lens of SAGDA, we also advance the current understanding of the communication complexity of the standard FSGDA method for federated min-max learning.
[^FAT-Clipping]: A key assumption in most existing works on FL algorithms' convergence analysis is that the noise in stochastic first-order information has a finite variance. Although this assumption covers all light-tailed (i.e., sub-exponential) and some heavy-tailed noise distributions (e.g., log-normal, Weibull, and some Pareto distributions), it fails for many fat-tailed noise distributions (i.e., "heavier-tailed" with potentially infinite variance) that have been empirically observed in the FL literature. To date, it remains unclear whether one can design convergent algorithms for FL systems that experience fat-tailed noise. This motivates us to fill this gap by proposing an algorithmic framework called FAT-Clipping (federated averaging with two-sided learning rates and clipping), which contains two variants: FAT-Clipping per-round (FAT-Clipping-PR) and FAT-Clipping per-iteration (FAT-Clipping-PI).
[^FedSubAvg]: FedSubAvg. We study federated learning from the new perspective of feature heat, where distinct data features normally involve different numbers of clients, generating the differentiation of hot and cold features. Meanwhile, each client's local data tend to interact with only part of the features, updating only the feature-related part of the full model, called a submodel. We further identify that the classical federated averaging algorithm (FedAvg) or its variants, which randomly select clients to participate and uniformly average their submodel updates, will be severely slowed down, because different parameters of the global model are optimized at different speeds. More specifically, the model parameters related to hot (resp., cold) features will be updated quickly (resp., slowly). We thus propose federated submodel averaging (FedSubAvg), which introduces the number of feature-related clients as the metric of feature heat to correct the aggregation of submodel updates. We prove that due to the dispersion of feature heat, the global objective is ill-conditioned, and FedSubAvg works as a suitable diagonal preconditioner. We also rigorously analyze FedSubAvg's convergence rate to stationary points.
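A minimal sketch of the aggregation correction described above: instead of dividing each parameter's aggregated update by the number of sampled clients, divide by the number of clients whose data actually involve that feature, so cold features are no longer under-updated. The toy client feature sets and unit updates are illustrative assumptions:

```python
import numpy as np

n_features = 6
# Which features each client's local data touch (its submodel support).
supports = [{0, 1, 2}, {0, 1, 3}, {0, 4}, {0, 5}]
feature_count = np.zeros(n_features)
for s in supports:
    for f in s:
        feature_count[f] += 1

agg = np.zeros(n_features)
for s in supports:
    update = np.zeros(n_features)
    update[list(s)] = 1.0            # pretend each client pushes a unit update
    agg += update

fedavg = agg / len(supports)         # uniform averaging: cold features shrink
fedsubavg = agg / feature_count      # feature-heat correction: all equal to 1
print("FedAvg   :", fedavg)
print("FedSubAvg:", fedsubavg)
```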
[^BooNTK]: BooNTK. State-of-the-art federated learning methods can perform far worse than their centralized counterparts when clients have dissimilar data distributions. We show that this performance disparity can largely be attributed to optimization challenges presented by nonconvexity. Specifically, we find that the early layers of the network do learn useful features, but the final layers fail to make use of them. That is, federated optimization applied to this non-convex problem distorts the learning of the final layers. Leveraging this observation, we propose a Train-Convexify-Train (TCT) procedure to sidestep this issue: first, learn features using off-the-shelf methods (e.g., FedAvg); then, optimize a convexified problem obtained from the network's empirical neural tangent kernel approximation.
[^SoteriaFL]: SoteriaFL, a unified framework that enhances the communication efficiency of private federated learning with communication compression. Exploiting both general compression operators and local differential privacy, we first examine a simple algorithm that applies compression directly to differentially-private stochastic gradient descent and identify its limitations. We then propose the unified framework SoteriaFL for private federated learning, which accommodates a general family of local gradient estimators, including popular stochastic variance-reduced gradient methods and the state-of-the-art shifted compression scheme.
[^FILM]: FILM, a novel attack method (Federated Inversion attack for Language Models) for federated learning of language models. For the first time, we show the feasibility of recovering text from large batch sizes of up to 128 sentences. Different from image-recovery methods, which are optimized to match gradients, we take a distinct approach that first identifies a set of words from gradients and then directly reconstructs sentences based on beam search and a prior-based reordering strategy. The key insight of our attack is to leverage either prior knowledge in pre-trained language models or memorization during training. Despite its simplicity, we demonstrate that FILM works well on several large-scale datasets: it can extract single sentences with high fidelity even for large batch sizes, and it can recover multiple sentences from the batch if the attack is applied iteratively.
[^FedPCL]: FedPCL, a lightweight framework where clients jointly learn to fuse the representations generated by multiple fixed pre-trained models rather than training a large-scale model from scratch. This leads us to a more practical FL problem by considering how to capture more client-specific and class-relevant information from the pre-trained models and jointly improve each client's ability to exploit those off-the-shelf models. We design a Federated Prototype-wise Contrastive Learning (FedPCL) approach which shares knowledge across clients through their class prototypes and builds client-specific representations in a prototype-wise contrastive manner. Sharing prototypes rather than learnable model parameters allows each client to fuse the representations in a personalized way while keeping the shared knowledge in a compact form for efficient communication.
[^FLANC]: To achieve resource-adaptive federated learning, we introduce a simple yet effective mechanism, termed All-In-One Neural Composition, to systematically support training complexity-adjustable models with flexible resource adaptation. It efficiently constructs models at various complexities using one unified neural basis shared among clients, instead of pruning the global model into local ones. The proposed mechanism endows the system with unhindered access to the full range of knowledge scattered across clients and generalizes existing pruning-based solutions by allowing soft and learnable extraction of low-footprint models.
[^Self-FL]: Inspired by Bayesian hierarchical models, we develop a self-aware personalized FL method where each client can automatically balance the training of its local personal model and the global model that implicitly contributes to other clients' training. Such a balance is derived from inter-client and intra-client uncertainty quantification. A larger inter-client variation implies that more personalization is needed. Correspondingly, our method uses uncertainty-driven local training steps and an aggregation rule instead of conventional local fine-tuning and sample-size-based aggregation.
[^FedGDA-GT]: In this paper, we study a large-scale multi-agent minimax optimization problem, which models many interesting applications in statistical learning and game theory, including Generative Adversarial Networks (GANs). The overall objective is a sum of agents' private local objective functions. We first analyze an important special case, the empirical minimax problem, where the overall objective approximates a true population minimax risk by statistical samples. We provide generalization bounds for learning with this objective through Rademacher complexity analysis. Then we focus on the federated setting, where agents can perform local computation and communicate with a central server. Most existing federated minimax algorithms either require communication per iteration or lack performance guarantees, with the exception of Local Stochastic Gradient Descent Ascent (SGDA), a multiple-local-update descent ascent algorithm which guarantees convergence under a diminishing stepsize. By analyzing Local SGDA under the ideal condition of no gradient noise, we show that generally it cannot guarantee exact convergence with constant stepsizes and thus suffers from slow rates of convergence. To tackle this issue, we propose FedGDA-GT, an improved Federated (Fed) Gradient Descent Ascent (GDA) method based on Gradient Tracking (GT). When local objectives are Lipschitz smooth and strongly-convex-strongly-concave, we prove that FedGDA-GT converges linearly with a constant stepsize to a global ε-approximation solution with O(log(1/ε)) rounds of communication, which matches the time complexity of the centralized GDA method. Finally, we numerically show that FedGDA-GT outperforms Local SGDA.
[^SemiFL]: SemiFL addresses the problem of combining communication-efficient FL, like FedAvg, with Semi-Supervised Learning (SSL). In SemiFL, clients have completely unlabeled data and can train multiple local epochs to reduce communication costs, while the server has a small amount of labeled data. We provide a theoretical understanding of the success of data augmentation-based SSL methods to illustrate the bottleneck of a vanilla combination of communication-efficient FL with SSL. To address this issue, we propose alternate training that 'fine-tunes the global model with labeled data' and 'generates pseudo-labels with the global model.'
[^FedNTD]: This study starts from an analogy to continual learning and suggests that forgetting could be the bottleneck of federated learning. We observe that the global model forgets the knowledge from previous rounds, and that local training induces forgetting of the knowledge outside the local distribution. Based on our findings, we hypothesize that tackling forgetting will relieve the data heterogeneity problem. To this end, we propose a novel and effective algorithm, Federated Not-True Distillation (FedNTD), which preserves the global perspective on locally available data only for the not-true classes.
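A minimal sketch of a not-true distillation loss in the spirit of the entry above: the local model distills the global model's predictions only over the classes other than the ground-truth one, so the global perspective is preserved without fighting the supervised signal. Shapes, the temperature, and the loss weighting are illustrative assumptions:

```python
import torch
import torch.nn.functional as F

def not_true_distillation(local_logits, global_logits, labels, tau=1.0):
    mask = F.one_hot(labels, local_logits.size(1)).bool()
    # Drop the true-class logit, renormalize over the remaining classes.
    local_nt = local_logits[~mask].view(len(labels), -1)
    global_nt = global_logits[~mask].view(len(labels), -1)
    return F.kl_div(F.log_softmax(local_nt / tau, dim=1),
                    F.softmax(global_nt / tau, dim=1),
                    reduction="batchmean") * tau ** 2

logits_local = torch.randn(4, 10)
logits_global = torch.randn(4, 10)
y = torch.tensor([0, 3, 7, 9])
loss = F.cross_entropy(logits_local, y) \
     + not_true_distillation(logits_local, logits_global, y)
print(loss.item())
```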
[^FedSR]: We propose a simple yet novel representation learning framework, namely FedSR, which enables domain generalization while still respecting the decentralized and privacy-preserving nature of the FL setting. Motivated by classical machine learning algorithms, we aim to learn a simple representation of the data for better generalization. In particular, we enforce an L2-norm regularizer on the representation and a conditional mutual information regularizer (between the representation and the data given the label) to encourage the model to only learn essential information (while ignoring spurious correlations such as the background). Furthermore, we provide theoretical connections between the above two objectives and representation alignment in domain generalization.
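A minimal sketch of the two regularizers just described: an L2 penalty on the representation, and a conditional-mutual-information surrogate computed here as the KL divergence between a per-sample Gaussian representation q(z|x) and a learned class-conditional reference r(z|y). The dimensions, the diagonal-Gaussian choice, and the coefficients are illustrative assumptions:

```python
import torch

def fedsr_regularizers(mu_z, logvar_z, ref_mu, ref_logvar, labels):
    # L2 regularizer on the (mean) representation.
    l2_reg = (mu_z ** 2).sum(dim=1).mean()
    # KL( N(mu_z, var_z) || N(ref_mu[y], ref_var[y]) ) for diagonal Gaussians.
    m, lv = ref_mu[labels], ref_logvar[labels]
    kl = 0.5 * ((logvar_z.exp() + (mu_z - m) ** 2) / lv.exp()
                + lv - logvar_z - 1).sum(dim=1)
    return l2_reg, kl.mean()

B, D, n_classes = 8, 16, 5
mu_z, logvar_z = torch.randn(B, D), torch.zeros(B, D)
ref_mu = torch.nn.Parameter(torch.zeros(n_classes, D))      # learned r(z|y)
ref_logvar = torch.nn.Parameter(torch.zeros(n_classes, D))
y = torch.randint(0, n_classes, (B,))

l2_reg, cmi_reg = fedsr_regularizers(mu_z, logvar_z, ref_mu, ref_logvar, y)
loss = 0.01 * l2_reg + 0.001 * cmi_reg   # added to the task loss in training
print(l2_reg.item(), cmi_reg.item())
```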
[^Factorized-FL]: In real-world federated learning scenarios, participants could have their own personalized labels which are incompatible with those from other clients, due to using different label permutations or tackling completely different tasks or domains. However, most existing FL approaches cannot effectively tackle such extremely heterogeneous scenarios, since they often assume that (1) all participants use a synchronized set of labels, and (2) they train on the same tasks from the same domain. In this work, to tackle these challenges, we introduce Factorized-FL, which allows to effectively tackle label- and task-heterogeneous federated learning settings by factorizing the model parameters into a pair of rank-1 vectors, where one captures the common knowledge across different labels and tasks and the other captures knowledge specific to the task of each local model. Moreover, based on the distance in the client-specific vector space, Factorized-FL performs a selective aggregation scheme that utilizes only the knowledge from the relevant participants for each client.
[^FedLinUCB]: We study federated contextual linear bandits, where M agents cooperate with each other to solve a global contextual linear bandit problem with the help of a central server. We consider the asynchronous setting, where all agents work independently and the communication between one agent and the server does not trigger other agents' communication. We propose a simple algorithm named FedLinUCB based on the principle of optimism. We prove that the regret of FedLinUCB is bounded by Õ(d√(Σ_{m=1}^M T_m)) and the communication complexity is Õ(dM²), where d is the dimension of the contextual vector and T_m is the total number of interactions with the environment by agent m. To the best of our knowledge, this is the first provably efficient algorithm that allows fully asynchronous communication for federated linear bandits, while achieving the same regret guarantee as in the single-agent setting.
[^FedSim]: Vertical federated learning (VFL), where parties share the same set of samples but only hold partial features, has a wide range of real-world applications. However, most existing studies in VFL disregard the record linkage process. They design algorithms either assuming the data from different parties can be exactly linked or simply linking each record with its most similar neighboring record. These approaches may fail to capture the key features from other less similar records. Moreover, such improper linkage cannot be corrected by training since existing approaches provide no feedback on linkage during training. In this paper, we design a novel coupled training paradigm, FedSim, that integrates one-to-many linkage into the training process. Besides enabling VFL in many real-world applications with fuzzy identifiers, FedSim also achieves better performance in traditional VFL tasks. Moreover, we theoretically analyze the additional privacy risk incurred by sharing similarities.
[^PPSGD]: TBC
[^PBM]: TBC
[^DisPFL]: TBC
[^FedNew]: TBC
[^DAdaQuant]: TBC
[^FedMLB]: TBC
[^FedScale]: FedScale, a federated learning (FL) benchmarking suite with realistic datasets and a scalable runtime to enable reproducible FL research.
[^FedPU]: TBC
[^Orchestra]: TBC
[^DFL]: TBC
[^FedHeNN]: TBC
[^KNN-PER]: TBC
[^ProxRR]: TBC
[^FedNL]: TBC
[^VFL]: TBC
[^FedNest]: TBC
[^EDEN]: TBC
[^ProgFed]: TBC
[^breaching]: TBC
[^QSFL]: TBC
[^Neurotoxin]: TBC
[^FedUL]: TBC
[^FedChain]: TBC
[^FedReg]: TBC
[^Fed-RoD]: TBC
[^HeteroFL]: TBC
[^FedMix]: TBC
[^FedFomo]: TBC
[^FedBN]: TBC
[^FedBE]: TBC
[^FL-NTK]: TBC
[^Sageflow]: TBC
[^CAFE]: TBC
[^QuPeD]: TBC
[^FedSage]: In this work, towards the novel yet realistic setting of subgraph federated learning, we propose two major techniques: (1) FedSage, which trains a GraphSage model based on FedAvg to integrate node features, link structures, and task labels on multiple local subgraphs; (2) FedSage+, which trains a missing neighbor generator along with FedSage to deal with missing links across local subgraphs.
[^GradAttack]: TBC
[^KT-pFL]: We exploit the potential of heterogeneous model settings and propose a novel training framework to employ personalized models for different clients. Specifically, we formulate the aggregation procedure of the original pFL into a personalized group knowledge transfer training algorithm, namely KT-pFL, which enables each client to maintain a personalized soft prediction at the server side to guide the others' local training. KT-pFL updates the personalized soft prediction of each client by a linear combination of all local soft predictions using a knowledge coefficient matrix, which can adaptively reinforce the collaboration among clients owning similar data distributions. Furthermore, to quantify the contribution of each client to others' personalized training, the knowledge coefficient matrix is parameterized so that it can be trained simultaneously with the models. The knowledge coefficient matrix and the model parameters are alternately updated in each round following the gradient descent scheme.
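A minimal sketch of the aggregation step just described: the server holds a knowledge coefficient matrix C (trainable in the real method) and produces each client's personalized soft prediction as a linear combination of all clients' local soft predictions. The uniform initialization and toy predictions are illustrative assumptions:

```python
import numpy as np

n_clients, n_classes = 4, 3
rng = np.random.default_rng(4)

# Row-stochastic knowledge coefficient matrix (learned jointly in KT-pFL).
C = np.full((n_clients, n_clients), 1.0 / n_clients)

# Local soft predictions on a shared sample, one row per client.
local_soft = rng.dirichlet(np.ones(n_classes), size=n_clients)

personalized_soft = C @ local_soft   # client i gets sum_j C[i, j] * pred_j
print(personalized_soft)
```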
[^FL-WBC]: TBC
[^FjORD]: TBC
[^GCFL]: Graphs can also be regarded as a special type of data sample. We analyze real-world graphs from different domains to confirm that they indeed share certain graph properties that are statistically significant compared with random graphs. However, we also find that different sets of graphs, even from the same domain or the same dataset, are non-IID regarding both graph structures and node features. A graph clustered federated learning (GCFL) framework dynamically finds clusters of local systems based on the gradients of GNNs, and we theoretically justify that such clusters can reduce the structure and feature heterogeneity among graphs owned by the local systems. Moreover, we observe the gradients of GNNs to be rather fluctuating in GCFL, which impedes high-quality clustering, and design a gradient sequence-based clustering mechanism based on dynamic time warping (GCFL+).
[^FedEx]: TBC
[^Large-Cohort]: TBC
[^DeepReduce]: TBC
[^PartialFed]: TBC
[^FCFL]: TBC
[^Federated-EM]: TBC
[^FedDR]: TBC
[^fair-flearn]: TBC
[^FedMA]: TBC
[^FedBoost]: TBC
[^FetchSGD]: TBC
[^SCAFFOLD]: TBC
[^FedSplit]: TBC
[^fbo]: TBC
[^RobustFL]: TBC
[^ifca]: TBC
[^DRFA]: TBC
[^Per-FedAvg]: TBC
[^FedGKT]: TBC
[^FedNova]: TBC
[^FedAc]: TBC
[^FedDF]: TBC
[^CE]: CE proposes the concept of a benefit graph, which describes how each client can benefit from collaborating with other clients, and advances a Pareto optimization approach to identify the optimal collaborators.
[^SuPerFed]: SuPerFed is a personalized federated learning method that induces an explicit connection between the optima of the local and the federated model in weight space so that they boost each other.
[^FedMSplit]: The FedMSplit framework allows federated training over multimodal distributed data without assuming similar active sensors in all clients. The key idea is to employ a dynamic and multi-view graph structure to adaptively capture the correlations amongst multimodal client models.
[^Comm-FedBiO]: Comm-FedBiO proposes a learning-based reweighting approach to mitigate the effect of noisy labels in FL.
[^FLDetector]: FLDetector detects malicious clients by checking the consistency of their model updates, in order to defend against model poisoning attacks mounted by a large number of malicious clients.
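A deliberately simplified sketch of the consistency idea: flag the client whose current update deviates most from what its own history predicts. The real FLDetector predicts updates via an L-BFGS Hessian approximation and clusters the suspicious scores; here the "prediction" is just the previous update, and all names are ours.

```python
# Toy consistency check in the spirit of FLDetector (heavily simplified).
import numpy as np

def suspicious_scores(prev_updates, curr_updates):
    """dicts: cid -> np.ndarray. Higher score = less self-consistent client."""
    return {
        cid: float(np.linalg.norm(curr_updates[cid] - prev_updates[cid]))
        for cid in curr_updates
    }

prev = {0: np.ones(4), 1: np.ones(4), 2: np.ones(4)}
curr = {0: np.ones(4) * 1.1, 1: np.ones(4) * 0.9, 2: -np.ones(4) * 5}
scores = suspicious_scores(prev, curr)
print(max(scores, key=scores.get))  # -> 2, the inconsistent (suspect) client
```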
[^FedSVD]: FedSVD is a practical lossless federated SVD method over billion-scale data that simultaneously achieves lossless accuracy and high efficiency.
[^FedWalk]: FedWalk is a random-walk-based unsupervised node embedding algorithm that operates on a node-level visibility graph while the raw graph information remains local.
[^FederatedScope-GNN]: FederatedScope-GNN presents an easy-to-use FGL (federated graph learning) package.
[^Fed-LTD]: Federated Learning-to-Dispatch (Fed-LTD) is a framework that allows effective order dispatching by sharing both dispatching models and decisions, while providing privacy protection of raw data and high efficiency.
[^Felicitas]: Felicitas is a distributed cross-device Federated Learning (FL) framework that addresses the industrial difficulties of FL in large-scale device deployment scenarios.
[^InclusiveFL]: The core idea of InclusiveFL is to assign models of different sizes to clients with different computing capabilities: bigger models for powerful clients and smaller ones for weak clients.
[^FedAttack]: FedAttack is a simple yet effective and covert poisoning attack method on federated recommendation; its core idea is to use globally hardest samples to subvert model training.
[^PipAttack]: PipAttack presents a systematic approach to backdooring federated recommender systems for targeted item promotion. The core tactic is to take advantage of the inherent popularity bias that commonly exists in data-driven recommenders.
[^Fed2]: Fed2 is a feature-aligned federated learning framework that establishes a firm structure-feature alignment across the collaborative models.
[^FedRS]: FedRS focuses on a special kind of non-IID scene, i.e., label distribution skew, where each client can only access a partial set of the whole class set. Considering that top layers of neural networks are more task-specific, we advocate that the last classification layer is more vulnerable to the shift of label distribution. Hence, we study the classifier layer in depth and point out that the standard softmax will encounter several problems caused by missing classes. As an alternative, we propose "Restricted Softmax" to limit the update of missing classes' weights during the local procedure.
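A minimal sketch of what a Restricted-Softmax-style scaling could look like (our reading of the idea, not the paper's code): logits of classes absent from the client's local data are scaled by a factor alpha in [0, 1), which damps the updates their classifier weights would otherwise receive.

```python
# Illustrative Restricted-Softmax-style scaling; names and alpha are ours.
import numpy as np

def restricted_softmax(logits, local_classes, alpha=0.5):
    """logits: (batch, n_classes); local_classes: classes present on this client."""
    scale = np.full(logits.shape[1], alpha)
    scale[list(local_classes)] = 1.0          # full weight only for observed classes
    z = logits * scale
    z = z - z.max(axis=1, keepdims=True)      # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

logits = np.array([[2.0, 1.0, 0.5, -1.0]])
print(restricted_softmax(logits, local_classes={0, 1}))  # classes 2, 3 are damped
```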
[^FADE]: While adversarial learning is commonly used in centralized learning for mitigating bias, there are significant barriers when extending it to the federated framework. In this work, we study these barriers and address them by proposing a novel approach, Federated Adversarial DEbiasing (FADE). FADE does not require users' sensitive group information for debiasing and offers users the freedom to opt out of the adversarial component when privacy or computational costs become a concern.
[^CNFGNN]: Cross-Node Federated Graph Neural Network (CNFGNN) is a federated spatio-temporal model that explicitly encodes the underlying graph structure using a graph neural network (GNN)-based architecture under the constraint of cross-node federated learning, which requires that data in a network of nodes be generated locally on each node and remain decentralized. CNFGNN operates by disentangling temporal dynamics modeling on devices from spatial dynamics modeling on the server, using alternating optimization to reduce the communication cost and facilitate computation on edge devices.
[^AsySQN]: To address the challenges of communication and computation resource utilization, we propose an asynchronous stochastic quasi-Newton (AsySQN) framework for vertical federated learning (VFL), under which three algorithms, i.e., AsySQN-SGD, -SVRG, and -SAGA, are proposed. The proposed AsySQN-type algorithms take descent steps scaled by approximate Hessian information (without calculating the inverse Hessian matrix explicitly), converging much faster than SGD-based methods in practice and thus dramatically reducing the number of communication rounds. Moreover, the adopted asynchronous computation makes better use of the computation resource. We theoretically prove the convergence rates of our proposed algorithms for strongly convex problems.
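For intuition, the toy step below scales each gradient coordinate by an inverse-curvature estimate built from successive parameter and gradient differences, never forming an inverse Hessian explicitly; this diagonal stand-in is our simplification of the paper's stochastic quasi-Newton construction.

```python
# Toy quasi-Newton-scaled descent step (diagonal curvature estimate; ours).
import numpy as np

def sqn_step(w, grad, prev_grad, prev_w, eps=1e-8):
    """Scale each coordinate by inverse curvature estimated from differences."""
    s, y = w - prev_w, grad - prev_grad
    diag_h = np.abs(y) / (np.abs(s) + eps)   # per-coordinate curvature estimate
    return w - grad / (diag_h + eps)         # Newton-like scaled step

w_prev, g_prev = np.array([0.0, 0.0]), np.array([0.0, 0.0])
w, g = np.array([1.0, 1.0]), np.array([2.0, 8.0])   # grad of f = x^2 + 4y^2
print(sqn_step(w, g, g_prev, w_prev))               # ~ [0, 0], the minimizer
```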
[^FLOP]: FLOP (Federated Learning on Medical Datasets using Partial Networks) is a simple yet effective algorithm that shares only a partial model between the server and clients.
[^Federated-Learning-source]: This paper builds a framework that enables Federated Learning (FL) for a small number of stakeholders, and describes the framework architecture, communication protocol, and algorithms.
[^FDKT]: A novel Federated Deep Knowledge Tracing (FDKT) framework to collectively train high-quality Deep Knowledge Tracing (DKT) models for multiple silos.
[^FedFast]: FedFast accelerates distributed learning and achieves good accuracy for all users very early in the training process. We achieve this by sampling from a diverse set of participating clients in each training round and applying an active aggregation method that propagates the updated model to the other clients. Consequently, with FedFast users benefit from far lower communication costs and more accurate models that can be consumed at any point during the training process, even at the very early stages.
[^FDSKL]: FDSKL is a federated doubly stochastic kernel learning algorithm for vertically partitioned data. Specifically, we use random features to approximate the kernel mapping function and use doubly stochastic gradients to update the solutions, all computed federatedly without the disclosure of data.
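The kernel approximation FDSKL relies on can be sketched with standard random Fourier features: an RBF kernel becomes an explicit random feature map, so the model can be updated with stochastic gradients. The code below is a generic illustration of that building block, not the federated protocol itself.

```python
# Random Fourier features approximating an RBF kernel (generic building block).
import numpy as np

def random_fourier_features(X, n_features=200, gamma=1.0, seed=0):
    """Approximate k(x, y) = exp(-gamma * ||x - y||^2) via an explicit map."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.normal(scale=np.sqrt(2 * gamma), size=(d, n_features))
    b = rng.uniform(0, 2 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

rng = np.random.default_rng(1)
X = rng.normal(size=(5, 3))
Z = random_fourier_features(X)
approx = Z @ Z.T                                        # ~ kernel matrix
exact = np.exp(-1.0 * ((X[:, None] - X[None]) ** 2).sum(-1))
print(float(np.abs(approx - exact).max()))              # modest approximation error
```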
[^FOLtR-ES]: Federated Online Learning to Rank (FOLtR) is a setup where on-mobile ranking models are trained in a way that respects the users' privacy. FOLtR-ES satisfies these requirements: (a) preserving user privacy, (b) low communication and computation costs, (c) learning from noisy bandit feedback, and (d) learning with non-continuous ranking quality measures. Part of FOLtR-ES is a privatization procedure that provides ε-local differential privacy guarantees, i.e., protecting the clients from an adversary who has access to the communicated messages. This procedure can be applied to any absolute online metric that takes finitely many values or can be discretized to a finite domain.
[^FedRecover]: Federated learning is vulnerable to poisoning attacks in which malicious clients poison the global model via sending malicious model updates to the server. Existing defenses focus on preventing a small number of malicious clients from poisoning the global model via robust federated learning methods and on detecting malicious clients when there are a large number of them. However, it remains an open challenge how to recover the global model from poisoning attacks after the malicious clients are detected. A naive solution is to remove the detected malicious clients and train a new global model from scratch using the remaining clients. However, such a train-from-scratch recovery method incurs a large computation and communication cost, which may be intolerable for resource-constrained clients such as smartphones and IoT devices. In this work, we propose FedRecover, a method that can recover an accurate global model from poisoning attacks with a small computation and communication cost for the clients. Our key idea is that the server estimates the clients' model updates instead of asking the clients to compute and communicate them during the recovery process. In particular, the server stores the historical information, including the global models and clients' model updates in each round, when training the poisoned global model before the malicious clients are detected. During the recovery process, the server estimates a client's model update in each round using its stored historical information. Moreover, we further optimize FedRecover to recover a more accurate global model using warm-up, periodic correction, abnormality fixing, and final tuning strategies, in which the server asks the clients to compute and communicate their exact model updates. Theoretically, we show that the global model recovered by FedRecover is close to or the same as that recovered by train-from-scratch under some assumptions.
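A highly simplified sketch of the recovery loop: the server replays stored historical updates of the benign clients as cheap "estimated" updates and periodically asks for exact ones. The paper's actual estimator uses an L-BFGS-based Hessian approximation plus warm-up and abnormality fixing, all omitted here; every name below is illustrative.

```python
# Toy recovery loop in the spirit of FedRecover (heavily simplified).
import numpy as np

def recover(global_model, history, benign, rounds, lr=1.0, correct_every=5,
            exact_update=None):
    """history[t][cid] -> stored update of client cid at round t (np arrays)."""
    w = global_model.copy()
    for t in range(rounds):
        updates = []
        for cid in benign:
            if exact_update is not None and t % correct_every == 0:
                updates.append(exact_update(cid, w))   # periodic exact correction
            else:
                updates.append(history[t][cid])        # cheap estimated update
        w -= lr * np.mean(updates, axis=0)
    return w

# Toy usage: 3 benign clients, constant stored updates, no exact correction.
hist = [{c: np.ones(4) * 0.1 for c in range(3)} for _ in range(10)]
print(recover(np.zeros(4), hist, benign=[0, 1, 2], rounds=10))
```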
[^PEA]: We are motivated to resolve the above issue by proposing a solution, referred to as PEA (Private, Efficient, Accurate), which consists of a secure differentially private stochastic gradient descent (DPSGD for short) protocol and two optimization methods. First, we propose a secure DPSGD protocol to enforce DPSGD, a popular differentially private machine learning algorithm, in secret-sharing-based MPL frameworks. Second, to reduce the accuracy loss caused by differential privacy noise and the huge communication overhead of MPL, we propose two optimization methods for the training process of MPL.
[^SIMC]: TBC
[^FLAME]: TBC
[^FedCRI]: TBC
[^DeepSight]: TBC
[^FSMAFL]: This paper studies a new challenging problem, namely few-shot model-agnostic federated learning, where the local participants design their independent models from their limited private datasets. Considering the scarcity of the private data, we propose to utilize the abundant publicly available datasets to bridge the gap between local private participants. However, their usage also brings in two problems: inconsistent labels and a large domain gap between the public and private datasets. To address these issues, this paper presents a novel framework with two main parts: 1) model-agnostic federated learning, which performs public-private communication by unifying the model prediction outputs on the shared public datasets; 2) latent embedding adaptation, which addresses the domain gap with an adversarial learning scheme to discriminate the public and private domains.
[^EmoFed]: TBC
[^FedSAM]: Models trained in federated settings often suffer from degraded performance and fail to generalize, especially in heterogeneous scenarios. FedSAM investigates this behavior through the lens of the geometry of the loss and the Hessian eigenspectrum, linking the model's lack of generalization capacity to the sharpness of the solution.
[^FedX]: TBC
[^LC-Fed]: LC-Fed proposes a personalized federated framework with Local Calibration that leverages inter-site inconsistencies at both the feature and prediction levels to boost segmentation.
[^ATPFL]: ATPFL helps users federate multi-source trajectory datasets to automatically design and train a powerful trajectory prediction (TP) model.
[^ViT-FL]: ViT-FL demonstrates that self-attention-based architectures (e.g., Transformers) are more robust to distribution shifts and hence improve federated learning over heterogeneous data.
[^FedCorr]: FedCorr is a general multi-stage framework that tackles heterogeneous label noise in FL without making any assumptions about the noise models of local clients, while still maintaining client data privacy.
[^FedCor]: FedCor is an FL framework built on a correlation-based client selection strategy to boost the convergence rate of FL.
[^pFedLA]: A novel pFL training framework dubbed Layer-wised Personalized Federated Learning (pFedLA) that can discern the importance of each layer for different clients and is thus able to optimize personalized model aggregation for clients with heterogeneous data.
[^FedAlign]: FedAlign rethinks solutions to data heterogeneity in FL with a focus on local learning generality rather than proximal restriction.
[^PANs]: Position-Aware Neurons (PANs) fuse position-related values (i.e., position encodings) into neuron outputs, making parameters across clients pre-aligned and facilitating coordinate-based parameter averaging.
[^RSCFed]: Federated semi-supervised learning (FSSL) aims to derive a global model by training fully labeled and fully unlabeled clients, or by training partially labeled clients. RSCFed presents a Random Sampling Consensus federated learning framework that accounts for the uneven reliability among models from fully labeled, fully unlabeled, or partially labeled clients.
[^FCCL]: FCCL (Federated Cross-Correlation and Continual Learning). For the heterogeneity problem, FCCL leverages unlabeled public data for communication and constructs a cross-correlation matrix to learn a generalizable representation under domain shift. Meanwhile, for catastrophic forgetting, FCCL utilizes knowledge distillation in local updating, providing inter- and intra-domain information without leaking privacy.
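The cross-correlation objective can be sketched in the style of a Barlow-Twins loss (our assumption about the exact form): align the correlation matrix between two models' normalized outputs on a shared public batch with the identity, enforcing invariance on the diagonal and decorrelation off it.

```python
# Illustrative cross-correlation loss on a shared public batch (names ours).
import numpy as np

def cross_correlation_loss(z_local, z_global, lam=5e-3):
    """z_*: (batch, dim) representations of the same public batch."""
    z1 = (z_local - z_local.mean(0)) / (z_local.std(0) + 1e-8)
    z2 = (z_global - z_global.mean(0)) / (z_global.std(0) + 1e-8)
    c = (z1.T @ z2) / len(z1)                    # cross-correlation matrix
    on_diag = ((np.diagonal(c) - 1) ** 2).sum()  # invariance term
    off_diag = (c ** 2).sum() - (np.diagonal(c) ** 2).sum()  # decorrelation term
    return on_diag + lam * off_diag

rng = np.random.default_rng(0)
z = rng.normal(size=(32, 8))
# On-diagonal term vanishes when the two representations match.
print(float(cross_correlation_loss(z, z)))
```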
[^RHFL]: RHFL (Robust Heterogeneous Federated Learning) simultaneously handles label noise and performs federated learning in a single framework.
[^ResSFL]: ResSFL is a split federated learning framework designed to be resistant to model inversion (MI) attacks during training.
[^FedDC]: FedDC proposes a novel federated learning algorithm with local drift decoupling and correction.
[^GLFC]: A Global-Local Forgetting Compensation (GLFC) model that learns a global class-incremental model to alleviate catastrophic forgetting from both local and global perspectives.
[^FedFTG]: FedFTG is a data-free knowledge distillation method that fine-tunes the global model on the server, relieving the issue of direct model aggregation.
[^DP-FedAvgplusBLURplusLUS]: DP-FedAvg+BLUR+LUS studies the cause of model performance degradation in federated learning under a user-level DP guarantee and proposes two techniques, Bounded Local Update Regularization (BLUR) and Local Update Sparsification (LUS), to increase model quality without sacrificing privacy.
[^GGL]: Generative Gradient Leakage (GGL) validates that private training data can still be leaked under certain defense settings with a new type of leakage attack.
[^CD2-pFed]: CD2-pFed is a novel Cyclic Distillation-guided Channel Decoupling framework that personalizes the global model in FL under various settings of data heterogeneity.
[^FedSM]: FedSM proposes a novel training framework to avoid the client-drift issue and, for the first time, successfully closes the generalization gap compared with centralized training on medical image segmentation tasks.
[^FL-MRCM]: FL-MRCM proposes a federated learning (FL) based solution that takes advantage of the MR data available at different institutions while preserving patients' privacy.
[^MOON]: MOON (Model-Contrastive Federated Learning) utilizes the similarity between model representations to correct the local training of individual clients, i.e., it conducts contrastive learning at the model level.
[^FedDG-ELCFS]: FedDG-ELCFS addresses a novel problem setting of federated domain generalization (FedDG), which aims to learn a federated model from multiple distributed source domains such that it can directly generalize to unseen target domains. Episodic Learning in Continuous Frequency Space (ELCFS) solves this problem by enabling each client to exploit multi-source data distributions under the challenging constraint of data decentralization.
[^Soteria]: Soteria proposes a defense against model inversion attacks in FL that learns to perturb the data representation such that the quality of the reconstructed data is severely degraded while FL performance is maintained.
[^FedUFO]: FedUFO is a Unified Feature learning and Optimization objectives alignment method for non-IID FL.
[^FedAD]: FedAD proposes a new distillation-based FL framework that preserves privacy by design while consuming substantially fewer network communication resources than current methods.
[^FedU]: FedU is a novel federated unsupervised learning framework.
[^FedUReID]: FedUReID is a federated unsupervised person ReID system that learns person ReID models without any labels while preserving privacy.
[^FedVCplusFedIR]: Introduces two new large-scale datasets for species and landmark classification, with realistic per-user data splits that simulate real-world edge learning scenarios. It also develops two new algorithms (FedVC, FedIR) that intelligently resample and reweight over the client pool, bringing large improvements in accuracy and stability in training.
[^InvisibleFL]: InvisibleFL proposes a privacy-preserving solution that avoids multimedia privacy leakage in federated learning.
[^FedReID]: FedReID applies federated learning to person re-identification and optimizes its performance under the statistical heterogeneity of real-world scenarios.
[^FedR]: In this paper, we first develop a novel attack that aims to recover the original data based on embedding information, which is further used to evaluate the vulnerabilities of FedE. Furthermore, we propose a Federated learning paradigm with privacy-preserving Relation embedding aggregation (FedR) to tackle the privacy issue in FedE. Compared to entity embedding sharing, the relation embedding sharing policy can significantly reduce the communication cost due to the smaller size of queries.
[^SLM-FL]: Due to server-client communication and on-device computation bottlenecks, this paper explores whether larger language models can be trained with cross-device federated learning. First, the authors investigate quantization and partial model training to address the per-round communication and computation cost. Then, they study fast-convergence techniques that reduce the number of communication rounds, using transfer learning and centralized pretraining. They demonstrate that these techniques, individually or in combination, can scale to larger models (e.g., a 21M-parameter Transformer and a 20.2M-parameter Conformer) in cross-device federated learning.
[^IGC-FL]: Communication cost is the largest barrier to the wider adoption of federated learning. This paper addresses the issue by investigating a family of new gradient compression strategies, including static compression, time-varying compression, and K-subspace compression, collectively called intrinsic gradient compression algorithms. These three algorithms suit different levels of bandwidth and can be used in combination in special scenarios. Moreover, the authors provide theoretical guarantees on performance and train big models with 100M parameters, comparing against current state-of-the-art gradient compression methods (e.g., FetchSGD).
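A sketch of the simplest family member, static compression: every round, clients project gradients onto one fixed random low-dimensional subspace and transmit only the coefficients. Dimensions and the QR construction are illustrative choices of ours.

```python
# Illustrative static gradient compression via a fixed random subspace.
import numpy as np

def make_subspace(dim, k, seed=0):
    """Fixed (dim, k) projection with orthonormal columns."""
    rng = np.random.default_rng(seed)
    Q, _ = np.linalg.qr(rng.normal(size=(dim, k)))
    return Q

def compress(grad, Q):
    return Q.T @ grad          # k numbers sent instead of dim

def decompress(coeffs, Q):
    return Q @ coeffs          # server reconstructs within the subspace

Q = make_subspace(dim=10_000, k=100)          # 100x fewer numbers on the wire
g = np.random.default_rng(1).normal(size=10_000)
g_hat = decompress(compress(g, Q), Q)
print(g_hat.shape, float(np.linalg.norm(g - g_hat) / np.linalg.norm(g)))
```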
[^ActPerFL]: Inspired by Bayesian hierarchical models, this paper investigates how to achieve better personalized federated learning by balancing local model improvement and global model tuning. The authors develop ActPerFL, a self-aware personalized FL method that leverages local training and global aggregation via inter- and intra-client uncertainty quantification. Specifically, ActPerFL adaptively adjusts local training steps with automated hyper-parameter selection and performs uncertainty-weighted global aggregation (rather than a sample-size-based weighted average).
[^FedNLP]: This paper presents a benchmarking framework for evaluating federated learning methods (e.g., FedAvg, FedProx, FedOPT) on four common formulations of NLP tasks: text classification, sequence tagging, question answering, and seq2seq generation.
[^FedNoisy]: In realistic human-computer interaction there are usually many noisy user feedback signals. This paper investigates whether federated learning can be trained directly on positive and negative user feedback. The authors show that, under mild to moderate noise conditions, incorporating feedback improves model performance over self-supervised baselines. They also study different levels of noise, hoping to mitigate the impact of user feedback noise on model performance.
[^FedMDT]: Motivated by the real-world limitations of centralized training, this paper trains mixed-domain translation models with federated learning and finds that the global aggregation strategy of federated learning can effectively aggregate information from different domains, so that NMT (neural machine translation) benefits from federated learning. The authors also propose a novel and practical solution to reduce the communication bandwidth: Dynamic Pulling, which pulls only one type of high-volatility tensor in each round of communication.
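A toy version of a Dynamic-Pulling-style selection rule, under our reading of the idea: each round, the server pulls only the parameter tensor whose values changed most since the previous round.

```python
# Illustrative selection of the highest-volatility tensor to pull this round.
import numpy as np

def most_volatile_tensor(prev_params, curr_params):
    """Both args: dict name -> np.ndarray. Returns the name to pull this round."""
    volatility = {
        name: float(np.linalg.norm(curr_params[name] - prev_params[name]))
        for name in curr_params
    }
    return max(volatility, key=volatility.get)

prev = {"encoder": np.zeros(4), "decoder": np.zeros(4)}
curr = {"encoder": np.ones(4) * 0.01, "decoder": np.ones(4) * 0.5}
print(most_volatile_tensor(prev, curr))   # -> "decoder"
```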
[^Efficient-FedRec]: TBC
[^noniid-foltr]: In this perspective paper we study the effect of non-independent and identically distributed (non-IID) data on federated online learning to rank (FOLTR) and chart directions for future work in this new and largely unexplored research area of Information Retrieval.
[^FedCT]: The cross-domain recommendation problem is formalized under a decentralized computing environment with multiple domain servers. We identify two key challenges for this setting: the unavailability of direct transfer and the heterogeneity of the domain-specific user representations. We then propose to learn and maintain a decentralized user encoding in each user's personal space. The optimization follows a variational inference framework that maximizes the mutual information between the user's encoding and the domain-specific user information from all of her interacted domains.
[^FedGWAS]: Under some circumstances, private data can be reconstructed from model parameters, which implies that data leakage can occur in FL. In this paper, we draw attention to another risk associated with FL: even if federated algorithms are individually privacy-preserving, combining them into pipelines is not necessarily privacy-preserving.