Awesome Open Source
Search results for data parallelism
17 search results found
Colossalai (⭐ 37,814): Making large AI models cheaper, faster and more accessible.
Deepspeed (⭐ 32,358): DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
Dist Keras (⭐ 611): Distributed deep learning, with a focus on distributed training, using Keras and Apache Spark.
Weave (⭐ 500): A state-of-the-art multithreading runtime: message-passing based, fast, scalable, ultra-low overhead.
Paddlefleetx (⭐ 430): A PaddlePaddle (飞桨) large-model development suite, providing an end-to-end toolchain for large language models, cross-modal large models, biocomputing large models, and more.
Libai (⭐ 371): LiBai (李白), a toolbox for large-scale distributed parallel training.
Easyparallellibrary (⭐ 201): Easy Parallel Library (EPL) is a general and efficient deep learning framework for distributed model training.
Dkeras (⭐ 166): Distributed Keras engine that makes Keras faster with only one line of code.
Terngrad (⭐ 152): Ternary gradients to reduce communication in distributed deep learning (TensorFlow).
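The TernGrad idea can be sketched in a few lines: each gradient component is quantized to one of three levels {-s, 0, +s}, where s is the largest magnitude, and a component keeps its sign with probability |g_i| / s so the quantized gradient is unbiased in expectation. A minimal plain-Python sketch of the scheme (illustrative only, not the linked TensorFlow code):

```python
import random

def ternarize(grad, rng=None):
    """Quantize a gradient vector to the three levels {-s, 0, +s},
    where s = max |g_i|.  Each component keeps its sign with
    probability |g_i| / s, so the result is unbiased in expectation
    (a sketch of the TernGrad scheme)."""
    rng = rng or random.Random(0)
    s = max(abs(g) for g in grad)
    if s == 0.0:
        return [0.0] * len(grad)
    out = []
    for g in grad:
        if rng.random() < abs(g) / s:   # Bernoulli(|g_i| / s)
            out.append(s if g > 0 else -s)
        else:
            out.append(0.0)
    return out

grad = [0.03, -0.6, 0.2, 0.0]
print(ternarize(grad))   # every entry is one of -0.6, 0.0, or 0.6
```

Only the scale s plus two bits per component need to be communicated between workers, which is where the bandwidth savings come from.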
Orkhon (⭐ 96): ML inference framework and server runtime.
Pipegoose (⭐ 58): Large-scale 4D-parallelism pre-training for 🤗 transformers with Mixture of Experts (work in progress).
Keras_multi_gpu (⭐ 38): Multi-GPU training for Keras.
Enscale (⭐ 12): An instant distributed computing library based on Ray Train and Ray Data.
Sm Distributed Training Step By Step (⭐ 9): Hands-on labs for PyTorch-based distributed training and SageMaker distributed training, written so beginners can get started easily, guiding you through step-by-step code modifications starting from a basic BERT use case.
Effect Dps Public (⭐ 6): Understanding the effects of data parallelism and sparsity on neural network training.
Parallel Matrix Multiplication Fox Algorithm (⭐ 5): ☕ Implementation of parallel matrix multiplication using Fox's algorithm on Peking University's high-performance computing system.
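Fox's algorithm arranges q² processes in a q×q grid, each owning one block of A, B, and C. At stage k, block A[i][(i+k) mod q] is broadcast along row i, multiplied into the local B block, and the B blocks then shift up one row cyclically. A serial simulation in plain Python (illustrative only; the linked repository targets an MPI cluster):

```python
def block_matmul(X, Y):
    """Naive product of two equal-size square blocks."""
    n = len(X)
    return [[sum(X[i][t] * Y[t][j] for t in range(n)) for j in range(n)]
            for i in range(n)]

def block_add(X, Y):
    n = len(X)
    return [[X[i][j] + Y[i][j] for j in range(n)] for i in range(n)]

def fox(A, B, q):
    """Serial simulation of Fox's algorithm: A and B are q x q grids
    of square blocks; returns the q x q block grid of C = A @ B."""
    n = len(A[0][0])
    C = [[[[0] * n for _ in range(n)] for _ in range(q)] for _ in range(q)]
    for k in range(q):
        for i in range(q):
            # "Broadcast" A[i][(i+k) % q] along row i, multiply locally.
            a = A[i][(i + k) % q]
            for j in range(q):
                C[i][j] = block_add(C[i][j], block_matmul(a, B[i][j]))
        # Cyclically shift the B block rows up by one.
        B = [B[(i + 1) % q] for i in range(q)]
    return C
```

On a real grid the broadcast and the shift are each a communication step; here they are just Python indexing, which makes the data movement pattern easy to see.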
Pytorch Transformer Distributed (⭐ 5): Distributed training (multi-node) of a Transformer model.
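The pattern these projects share, data parallelism, is simple at its core: shard the input across workers, run the same computation on each shard, and reduce the partial results. A minimal stdlib-only sketch using threads and a toy squared-error gradient (all names here are illustrative):

```python
from concurrent.futures import ThreadPoolExecutor

def partial_grad(w, shard):
    """Gradient of squared error for the model y = w * x,
    summed over one data shard (returns sum and count)."""
    g = sum(2 * (w * x - y) * x for x, y in shard)
    return g, len(shard)

def data_parallel_grad(w, data, workers=4):
    """Shard the data, compute partial gradients in parallel, then
    reduce (average): the core loop of data-parallel training."""
    shards = [data[i::workers] for i in range(workers)]
    shards = [s for s in shards if s]          # drop empty shards
    with ThreadPoolExecutor(max_workers=workers) as pool:
        parts = list(pool.map(lambda s: partial_grad(w, s), shards))
    total = sum(g for g, _ in parts)
    count = sum(n for _, n in parts)
    return total / count

data = [(x, 3.0 * x) for x in range(1, 9)]    # ground truth: w = 3
print(data_parallel_grad(2.0, data))          # -51.0: w should increase
```

Frameworks such as DeepSpeed replace the thread pool with GPU workers and the averaging step with an all-reduce, but the shape of the computation is the same.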
Related Searches
Python Data Parallelism (14)
Deep Learning Data Parallelism (9)
Distributed Training Data Parallelism (7)
Pytorch Data Parallelism (6)
Machine Learning Data Parallelism (6)
Tensorflow Data Parallelism (6)
C Data Parallelism (4)
Large Scale Data Parallelism (3)
Rust Data Parallelism (3)
Jupyter Notebook Data Parallelism (3)
Copyright 2018-2024 Awesome Open Source. All rights reserved.