
FEDVSSL

This is a general-purpose repository for Federated Self-Supervised Learning (FedVSSL) for video understanding, built on top of MMCV and Flower.

Authors

Yasar Abbas Ur Rehman, Yan Gao, Jiajun Shen, Pedro Porto Buarque de Gusmao, and Nicholas Lane

Note:

As of December 2023, FedVSSL is now part of the Flower baselines.

Dataset

For both centralized and federated video SSL pretraining, we use Kinetics-400. We evaluate the quality of the learned representations by fine-tuning them on two downstream datasets: UCF-101 and HMDB-51.

To aid reproducibility, we provide the Kinetics-400 dataset partitions used for federated learning in the Data folder, with both IID and non-IID data distributions.

One can generate the non-IID version of Kinetics-400 with 100 clients (8 classes per client) by running:
python scripts/k400_non_iid.py

The IID version of Kinetics-400 with 100 clients (8 classes per client) can be generated by running:
python scripts/kinetics_json_splitter.py

Caution:

The above two Python scripts assume that you have already downloaded the official train list of Kinetics-400.
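
For reference, the sketch below illustrates the general idea behind such a class-based non-IID split: each of the 100 clients is assigned 8 classes and receives only videos from those classes. This is an illustration of the partitioning scheme rather than the exact logic of scripts/k400_non_iid.py, and the (video_id, label) input format is an assumption.

import random
from collections import defaultdict

def non_iid_partition(train_list, num_clients=100, classes_per_client=8, seed=7):
    """Illustrative class-based non-IID split of a Kinetics-400 train list.

    train_list is assumed to be a list of (video_id, class_label) pairs.
    """
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for video_id, label in train_list:
        by_class[label].append(video_id)

    labels = sorted(by_class)
    clients = {}
    for cid in range(num_clients):
        # Give each client a fixed set of distinct classes; across 100 clients
        # every class ends up reused roughly twice (800 shares over 400 classes).
        chosen = rng.sample(labels, classes_per_client)
        clients[cid] = [(vid, label) for label in chosen for vid in by_class[label]]
    return clients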

FL pretrained Models

We provide a series of federated SSL pretrained models for VCOP, Speed, and CtP. All of these models are pretrained in a federated fashion on the non-IID version of Kinetics-400 (8 classes/client); see Table 1 in the manuscript. The annotations can be found in Data/Kinetics-400_annotations/ in this repository.

| Method | FL Pretrained Model |
| ------ | ------------------- |
| VCOP   | VCOP5c1e540r        |
| Speed  | Speed5c1e540r       |
| CtP    | Ctp5c1e540r         |
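
The released checkpoints are NumPy .npz archives of the aggregated parameters (see the round-540.npz files in the Expected Results section). The snippet below is a minimal sketch of how such an archive could be inspected and mapped onto a backbone; the key-order assumption and the commented-out mapping are illustrative and not part of this repository's API.

import numpy as np

# Inspect an FL-pretrained checkpoint saved as a NumPy .npz archive
# (the local path below is illustrative).
ckpt = np.load("round-540.npz")
arrays = [ckpt[key] for key in ckpt.files]   # one array per aggregated parameter tensor
print(f"checkpoint holds {len(arrays)} parameter arrays")

# Assumption: the arrays follow the order of the backbone's state_dict, so they
# could be zipped key by key onto a PyTorch R3D-18 backbone built from the
# CtP/MMCV config (requires torch; `backbone` is a placeholder object here):
# state_dict = {
#     key: torch.from_numpy(value)
#     for (key, _), value in zip(backbone.state_dict().items(), arrays)
# }
# backbone.load_state_dict(state_dict, strict=False)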

News

  • FedVSSL has been added to the Flower baselines.
  • Check out the teaser of our work on YouTube.
  • The preprint of our paper is now available on arXiv.

Dependencies

For a complete list of the required packages, please see the requirement.txt file. All the requirements can be installed by running pip install -r requirement.txt.

Instructions

We recommend installing the Microsoft CtP framework, as it contains all the self-supervised learning frameworks built on top of MMCV. Here we provide a modified version of that framework, tailored for FedVSSL.

Running Experiments

The abstract class definitions are provided in reproduce_papers/fedssl/videossl.py.

| Method | Python script | Description |
| ------ | ------------- | ----------- |
| FedAvg | main.py | Federates the SSL method using the conventional FedAvg aggregation |
| FedVSSL $(\alpha=0, \beta=0)$ | main_cam_st_theta_b_wo_moment.py | FedAvg, but aggregating only the backbone network |
| FedVSSL $(\alpha=1, \beta=0)$ | main_cam_st_theta_b_loss_wo_moment.py | Loss-based aggregation, aggregating only the backbone network |
| FedVSSL $(\alpha=0, \beta=1)$ | main_cam_st_theta_b_FedAvg_+SWA_wo_moment.py | FedAvg+SWA aggregation, aggregating only the backbone network |
| FedVSSL $(\alpha=1, \beta=1)$ | main_cam_st_theta_b_loss_+SWA_wo_moment.py | Loss-based+SWA aggregation, aggregating only the backbone network |
| FedVSSL $(\alpha=0.9, \beta=0)$ | main_cam_st_theta_b_mixed_wo_mement.py | FedAvg+loss-based aggregation, aggregating only the backbone network |
| FedVSSL $(\alpha=0.9, \beta=1)$ | main_cam_st_theta_b_mixed_+SWA_wo_mement.py | FedAvg+loss-based+SWA aggregation, aggregating only the backbone network |
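
The main_*.py scripts above differ only in how the server aggregates the client backbones. As a rough orientation, the sketch below shows one way such a rule can be written: a sample-weighted FedAvg average and a loss-weighted average are mixed via $\alpha$, and $\beta=1$ additionally smooths the aggregate across rounds with SWA. This is an illustrative NumPy sketch under those assumptions, not the code in the scripts above; the exact weighting used by FedVSSL is defined in the paper, and all names here are hypothetical.

import numpy as np

def aggregate_backbone(client_weights, num_samples, losses, alpha, beta=0, swa_state=None):
    """Mix FedAvg and loss-based aggregation of backbone weights (illustrative only).

    client_weights: list (per client) of lists of np.ndarrays (backbone tensors).
    """
    n = np.asarray(num_samples, dtype=np.float64)
    l = np.asarray(losses, dtype=np.float64)
    fedavg_coeff = n / n.sum()                     # FedAvg: weight by local sample count
    loss_coeff = l / l.sum()                       # loss-based: weight by reported SSL loss
    mix = alpha * loss_coeff + (1.0 - alpha) * fedavg_coeff

    new_global = [
        sum(c * layer for c, layer in zip(mix, layers))
        for layers in zip(*client_weights)         # combine layer-wise across clients
    ]

    if beta == 1:                                  # optional SWA-style smoothing across rounds
        if swa_state is None:
            swa_state = {"avg": new_global, "count": 1}
        else:
            k = swa_state["count"]
            swa_state["avg"] = [(a * k + w) / (k + 1)
                                for a, w in zip(swa_state["avg"], new_global)]
            swa_state["count"] = k + 1
        return swa_state["avg"], swa_state
    return new_global, swa_state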

Evaluation

After FL pretraining, one can use the following code to fine-tune the model on UCF-101 or HMDB-51.

import subprocess

# Fine-tune the FL-pretrained model on UCF-101 with 4 GPUs via the CtP
# distributed-training script; adjust the paths to your setup.
process_obj = subprocess.run([
    "bash", "CtP/tools/dist_train.sh",
    "CtP/configs/ctp/r3d_18_kinetics/finetune_ucf101.py", "4",
    "--work_dir", "/finetune/ucf101/",
    "--data_dir", "/DATA",
    "--pretrained", "/path/to/the/pretrained/checkpoint",
    "--validate",
])

Expected Results

For detailed results for the checkpoints listed below, please see Table 4 in the manuscript.

| Method | Checkpoint file | UCF R@1 | HMDB R@1 |
| ------ | --------------- | ------- | -------- |
| FedVSSL $(\alpha=0, \beta=0)$ | round-540.npz | 34.34 | 15.82 |
| FedVSSL $(\alpha=1, \beta=0)$ | round-540.npz | 34.23 | 16.73 |
| FedVSSL $(\alpha=0, \beta=1)$ | round540.npz | 35.61 | 16.93 |
| FedVSSL $(\alpha=1, \beta=1)$ | round540.npz | 35.66 | 16.41 |
| FedVSSL $(\alpha=0.9, \beta=0)$ | round-540.npz | 35.50 | 16.27 |
| FedVSSL $(\alpha=0.9, \beta=1)$ | round-540.npz | 35.34 | 16.93 |

Issues:

If you encounter any issues, feel free to open an issue on GitHub.

Citations

@article{rehman2022federated,
  title={Federated Self-supervised Learning for Video Understanding},
  author={Rehman, Yasar Abbas Ur and Gao, Yan and Shen, Jiajun and de Gusmao, Pedro Porto Buarque and Lane, Nicholas},
  journal={arXiv preprint arXiv:2207.01975},
  year={2022}
}

Acknowledgement

We would like to thank Daniel J. Beutel for providing the initial blueprint of federated self-supervised learning with Flower. We also thank Akhil Mathur for his useful suggestions.
