Awesome Open Source
Search results for "pytorch distributed computing"
Active filters: distributed-computing, pytorch
26 search results found
Catalyst (⭐ 3,151): Accelerated deep learning R&D.
Rl (⭐ 1,658): A modular, primitive-first, Python-first PyTorch library for reinforcement learning.
Pfllib (⭐ 1,176): A user-friendly algorithm library, with an integrated evaluation platform, for beginners starting federated learning (FL) research.
Federated Learning Pytorch (⭐ 902): Implementation of "Communication-Efficient Learning of Deep Networks from Decentralized Data".
Bagua (⭐ 842): Bagua speeds up PyTorch training.
Easyfl (⭐ 395): An experimental platform for quickly implementing and comparing popular centralized federated learning algorithms. Its fairness-oriented algorithm (FedFV, Federated Learning with Fair Averaging, https://fanxlxmu.github.io/publication/ijcai2021/) was accepted at IJCAI-21 (https://www.ijcai.org/proceedings/2021/223).
Mpi Operator (⭐ 373): Kubernetes operator for MPI-based applications (distributed training, HPC, etc.).
Persia (⭐ 369): High-performance distributed framework for training deep learning recommendation models, based on PyTorch.
Sparktorch (⭐ 297): Train and run PyTorch models on Apache Spark.
Bluefog (⭐ 282): Distributed and decentralized training framework for PyTorch over graph topologies.
Torch Quiver (⭐ 256): PyTorch library for low-latency, high-throughput graph learning on GPUs.
Mlcomp (⭐ 187): Distributed DAG (directed acyclic graph) framework for machine learning, with a UI.
Replay (⭐ 109): A comprehensive framework for building end-to-end recommendation systems with state-of-the-art models.
Integrated Design Diffusion Model (⭐ 50): IDDM (industrial, landscape, animation, ...); supports DDPM, DDIM, a web UI, and multi-GPU distributed training. A PyTorch implementation of generative diffusion models with distributed training.
Dask Pytorch Ddp (⭐ 28): dask-pytorch-ddp is a Python package that makes it easy to train PyTorch models on Dask clusters using distributed data parallel (DDP).
Flip (⭐ 24): FLIP: A provable defense framework for backdoor mitigation in federated learning [ICLR'23; Best Paper Award at the ECCV'22 AROW Workshop].
Vodascheduler (⭐ 23): GPU scheduler for elastic/distributed deep learning workloads in Kubernetes clusters.
Hydra (⭐ 19): Execution framework for multi-task model parallelism. Enables training arbitrarily large models on a single GPU, with linear speedups for multi-GPU, multi-task execution.
Federated Learning And Split Learning With Raspberry Pi (⭐ 17): SRDS 2020: End-to-End Evaluation of Federated Learning and Split Learning for Internet of Things.
Onerl (⭐ 16): "One RL platform is all you need": an event-driven, fully distributed reinforcement learning framework.
Pipeedge (⭐ 14): PipeEdge: Pipeline parallelism for large-scale model inference on heterogeneous edge devices.
Enscale (⭐ 12): An instant distributed computing library based on Ray Train and Ray Data.
Skin Cancer Classifier (⭐ 6): Skin cancer classification demo using federated learning techniques.
Smart Calibration (⭐ 5): Deep reinforcement learning for smart calibration of radio telescopes, with automatic hyperparameter tuning.
Distributed Evolutionary Ml (⭐ 5): A tool for experimenting with evolutionary optimization methods for machine learning algorithms by distributing the workload over a large number of compute nodes on the IBM Cloud. For now, it only includes an implementation of [Deep Neuroevolution: Genetic Algorithms Are a Competitive Alternative for Training Deep Neural Networks for Reinforcement Learning](https://arxiv.org/abs/1712.06567).
Rpcdataloader (⭐ 5): A variant of the PyTorch DataLoader using remote workers.
Copyright 2018-2024 Awesome Open Source. All rights reserved.