a "Swiss Army knife" for machine learning
generative_ai
module that is now powered by our onprem.LLM package for ChatGPT-like generative AI running on your own machine. The generative_ai.LLM
class replaces the previous generative_ai.GenerativeAI
class (breaking change). See the example notebook for more information.from ktrain.text.generative_ai import LLM
llm = LLM() # for GPU inference, supply n_gpu_layers parameter
prompt = """Extract the names of people in the supplied sentences. Here is an example:
Sentence: James Gandolfini and Paul Newman were great actors.
People:
James Gandolfini, Paul Newman
Sentence:
I like Cillian Murphy's acting. Florence Pugh is great, too.
People:"""
saved_output = llm.prompt(prompt)
# OUTPUT
# Cillian Murphy, Florence Pugh
ktrain is a lightweight wrapper for the deep learning library TensorFlow Keras (and other libraries) to help build, train, and deploy neural networks and other machine learning models. Inspired by ML framework extensions like fastai and ludwig, ktrain is designed to make deep learning and AI more accessible and easier to apply for both newcomers and experienced practitioners. With only a few lines of code, ktrain allows you to easily and quickly:
- employ fast, accurate, and easy-to-use pre-canned models for text, vision, graph, and tabular data
- estimate an optimal learning rate for your model given your data using a Learning Rate Finder
- utilize learning rate schedules such as the triangular policy, the 1cycle policy, and SGDR to effectively minimize loss and improve generalization
- build text classifiers for any language (e.g., Arabic Sentiment Analysis with BERT, Chinese Sentiment Analysis with NBSVM)
- easily train NER models for any language (e.g., Dutch NER)
- load and preprocess text and image data from a variety of formats
- inspect data points that were misclassified and provide explanations to help improve your model
- leverage a simple prediction API for saving and deploying both models and data-preprocessing steps to make predictions on new raw data (see the sketch after this list)
- use built-in support for exporting models to ONNX and TensorFlow Lite (see the example notebook for more information)
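To give a flavor of the inspection and prediction APIs referenced above, here is a minimal sketch of the typical post-training workflow (it assumes the learner and preproc objects produced in the IMDb text-classification example below; the input strings are invented):

```python
import ktrain

# inspect the validation examples the model got most wrong
learner.view_top_losses(n=1, preproc=preproc)

# bundle the model and its preprocessing steps into a single Predictor object
predictor = ktrain.get_predictor(learner.model, preproc)
predictor.predict('This movie was ridiculously good!')  # predict on new raw text

# word-level explanations require the eli5 fork listed in the installation notes
predictor.explain('This movie was ridiculously good!')

# save the predictor to disk and reload it later for deployment
predictor.save('/tmp/my_imdb_predictor')
reloaded = ktrain.load_predictor('/tmp/my_imdb_predictor')
reloaded.predict('Two hours of my life I will never get back.')
```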
Please see the tutorial notebooks for a guide on how to use ktrain on your projects.
Some blog tutorials and other guides about ktrain are shown below:
- ktrain: A Lightweight Wrapper for Keras to Help Train Neural Networks
- Text Classification with Hugging Face Transformers in TensorFlow 2 (Without Tears)
- Build an Open-Domain Question-Answering System With BERT in 3 Lines of Code
- Finetuning BERT using ktrain for Disaster Tweets Classification by Hamiz Ahmed
- Indonesian NLP Examples with ktrain by Sandy Khosasi
Using ktrain on Google Colab? See these Colab examples:
- transformer
- word embeddings
Tasks such as text classification and image classification can be accomplished easily with only a few lines of code.
Example: text classification of IMDb movie reviews using BERT

```python
import ktrain
from ktrain import text as txt

# load data
(x_train, y_train), (x_test, y_test), preproc = txt.texts_from_folder('data/aclImdb',
                                                                      maxlen=500,
                                                                      preprocess_mode='bert',
                                                                      train_test_names=['train', 'test'],
                                                                      classes=['pos', 'neg'])

# load model
model = txt.text_classifier('bert', (x_train, y_train), preproc=preproc)

# wrap model and data in ktrain.Learner object
learner = ktrain.get_learner(model,
                             train_data=(x_train, y_train),
                             val_data=(x_test, y_test),
                             batch_size=6)

# find good learning rate
learner.lr_find()  # briefly simulate training to find good learning rate
learner.lr_plot()  # visually identify best learning rate

# train using 1cycle learning rate schedule for 3 epochs
learner.fit_onecycle(2e-5, 3)
```
Example: image classification of Dogs vs. Cats using a pretrained ResNet50

```python
import ktrain
from ktrain import vision as vis

# load data
(train_data, val_data, preproc) = vis.images_from_folder(
    datadir='data/dogscats',
    data_aug=vis.get_data_aug(horizontal_flip=True),
    train_test_names=['train', 'valid'],
    target_size=(224, 224), color_mode='rgb')

# load model
model = vis.image_classifier('pretrained_resnet50', train_data, val_data, freeze_layers=80)

# wrap model and data in ktrain.Learner object
learner = ktrain.get_learner(model=model, train_data=train_data, val_data=val_data,
                             workers=8, use_multiprocessing=False, batch_size=64)

# find good learning rate
learner.lr_find()  # briefly simulate training to find good learning rate
learner.lr_plot()  # visually identify best learning rate

# train using triangular policy with ModelCheckpoint and implicit ReduceLROnPlateau and EarlyStopping
learner.autofit(1e-4, checkpoint_folder='/tmp/saved_weights')
```
Example: sequence labeling (NER) using a Bidirectional LSTM-CRF model

```python
import ktrain
from ktrain import text as txt

# load data
(trn, val, preproc) = txt.entities_from_txt('data/ner_dataset.csv',
                                            sentence_column='Sentence #',
                                            word_column='Word',
                                            tag_column='Tag',
                                            data_format='gmb',
                                            use_char=True)  # enable character embeddings

# load model
model = txt.sequence_tagger('bilstm-crf', preproc)

# wrap model and data in ktrain.Learner object
learner = ktrain.get_learner(model, train_data=trn, val_data=val)

# conventional training for 1 epoch using a learning rate of 0.001 (Keras default for the Adam optimizer)
learner.fit(1e-3, 1)
```
Example: node classification on the Cora citation graph using GraphSAGE

```python
import ktrain
from ktrain import graph as gr

# load data with supervision ratio of 10%
(trn, val, preproc) = gr.graph_nodes_from_csv(
    'cora.content',  # node attributes/labels
    'cora.cites',    # edge list
    sample_size=20,
    holdout_pct=None,
    holdout_for_inductive=False,
    train_pct=0.1, sep='\t')

# load model
model = gr.graph_node_classifier('graphsage', trn)

# wrap model and data in ktrain.Learner object
learner = ktrain.get_learner(model, train_data=trn, val_data=val, batch_size=64)

# find good learning rate
learner.lr_find(max_epochs=100)  # briefly simulate training to find good learning rate
learner.lr_plot()  # visually identify best learning rate

# train using triangular policy with ModelCheckpoint and implicit ReduceLROnPlateau and EarlyStopping
learner.autofit(0.01, checkpoint_folder='/tmp/saved_weights')
```
Example: text classification with Hugging Face Transformers (DistilBERT) on the 20 Newsgroups dataset

```python
# load text data
categories = ['alt.atheism', 'soc.religion.christian', 'comp.graphics', 'sci.med']
from sklearn.datasets import fetch_20newsgroups
train_b = fetch_20newsgroups(subset='train', categories=categories, shuffle=True)
test_b = fetch_20newsgroups(subset='test', categories=categories, shuffle=True)
(x_train, y_train) = (train_b.data, train_b.target)
(x_test, y_test) = (test_b.data, test_b.target)

# build, train, and validate model (Transformer is a wrapper around the transformers library)
import ktrain
from ktrain import text
MODEL_NAME = 'distilbert-base-uncased'
t = text.Transformer(MODEL_NAME, maxlen=500, class_names=train_b.target_names)
trn = t.preprocess_train(x_train, y_train)
val = t.preprocess_test(x_test, y_test)
model = t.get_classifier()
learner = ktrain.get_learner(model, train_data=trn, val_data=val, batch_size=6)
learner.fit_onecycle(5e-5, 4)
learner.validate(class_names=t.get_classes())  # class_names must be string values

# Output from learner.validate():
#                         precision    recall  f1-score   support
#
#            alt.atheism       0.92      0.93      0.93       319
#          comp.graphics       0.97      0.97      0.97       389
#                sci.med       0.97      0.95      0.96       396
# soc.religion.christian       0.96      0.96      0.96       398
#
#               accuracy                           0.96      1502
#              macro avg       0.95      0.96      0.95      1502
#           weighted avg       0.96      0.96      0.96      1502
```
Example: tabular classification of Titanic survival data using an MLP

```python
import ktrain
from ktrain import tabular
import pandas as pd

# load and prepare data
train_df = pd.read_csv('train.csv', index_col=0)
train_df = train_df.drop(['Name', 'Ticket', 'Cabin'], axis=1)
trn, val, preproc = tabular.tabular_from_df(train_df, label_columns=['Survived'], random_state=42)

# build and train model
learner = ktrain.get_learner(tabular.tabular_classifier('mlp', trn), train_data=trn, val_data=val)
learner.lr_find(show_plot=True, max_epochs=5)  # estimate learning rate
learner.fit_onecycle(5e-3, 10)

# evaluate held-out labeled test set
tst = preproc.preprocess_test(pd.read_csv('heldout.csv', index_col=0))
learner.evaluate(tst, class_names=preproc.get_classes())
```
To install ktrain:

1. Make sure pip is up-to-date with: pip install -U pip
2. Install TensorFlow 2 if it is not already installed (e.g., pip install tensorflow)
3. Install ktrain: pip install ktrain
The above should be all you need on Linux systems and cloud computing environments like Google Colab and AWS EC2. If you are using ktrain on a Windows computer, you can follow these more detailed instructions that include some extra steps.
Supported TensorFlow versions: ktrain should currently support any version of TensorFlow at or above v2.3 (i.e., pip install "tensorflow>=2.3"). However, if using tensorflow>=2.11, you must only use legacy optimizers such as tf.keras.optimizers.legacy.Adam. The newer tf.keras.optimizers.Optimizer base class is not supported at this time. For instance, when using TensorFlow 2.11 and above, please use tf.keras.optimizers.legacy.Adam() instead of the string "adam" in model.compile. ktrain does this automatically when using out-of-the-box models (e.g., models from the transformers library).
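For instance, a manual model.compile call under TensorFlow 2.11+ would look something like this (a minimal sketch; the tiny model here is only a placeholder):

```python
import tensorflow as tf

# a placeholder model; any Keras model compiled by hand follows the same pattern
model = tf.keras.Sequential([tf.keras.layers.Dense(2, activation='softmax')])

# with tensorflow>=2.11, pass the legacy optimizer class rather than the string 'adam'
model.compile(optimizer=tf.keras.optimizers.legacy.Adam(learning_rate=1e-3),
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
```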
Some methods in ktrain require optional extra libraries, which can be installed as needed. (We have forked the eli5 and stellargraph libraries in order to support TensorFlow 2.)

```
# for graph module:
pip install https://github.com/amaiya/stellargraph/archive/refs/heads/no_tf_dep_082.zip

# for text.TextPredictor.explain and vision.ImagePredictor.explain:
pip install https://github.com/amaiya/eli5-tf/archive/refs/heads/master.zip

# for tabular.TabularPredictor.explain:
pip install shap

# for text.zsl (ZeroShotClassifier), text.summarization, text.translation, text.speech:
pip install torch

# for text.speech:
pip install librosa

# for tabular.causal_inference_model:
pip install causalnlp

# for text.summarization.core.LexRankSummarizer:
pip install sumy

# for text.kw.KeywordExtractor:
pip install textblob

# for text.qa.generative_qa:
pip install paper-qa==2.1.1 langchain==0.0.240

# for text.generative_ai:
pip install onprem
```
ktrain purposely pins to a lower version of transformers to include support for older versions of TensorFlow. If you need a newer version of transformers, it is usually safe for you to upgrade transformers, as long as you do it after installing ktrain.
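In practice, this means an install sequence along the following lines (the target transformers version is your choice):

```
pip install ktrain
pip install -U transformers  # upgrade transformers only after ktrain is installed
```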
As of v0.30.x, TensorFlow installation is optional and only required if training neural networks. Although ktrain uses TensorFlow for neural network training, it also includes a variety of useful pretrained PyTorch models and sklearn models, which can be used out-of-the-box without having TensorFlow installed, as summarized in this table:
Feature | TensorFlow | PyTorch | Sklearn |
---|---|---|---|
Training any neural network (e.g., text or image classification) | ✅ | ❌ | ❌ |
End-to-End Question-Answering (pretrained) | ✅ | ✅ | ❌ |
QA-Based Information Extraction (pretrained) | ✅ | ✅ | ❌ |
Zero-Shot Classification (pretrained) | ❌ | ✅ | ❌ |
Language Translation (pretrained) | ❌ | ✅ | ❌ |
Summarization (pretrained) | ❌ | ✅ | ❌ |
Speech Transcription (pretrained) | ❌ | ✅ | ❌ |
Image Captioning (pretrained) | ❌ | ✅ | ❌ |
Object Detection (pretrained) | ❌ | ✅ | ❌ |
Sentiment Analysis (pretrained) | ❌ | ✅ | ❌ |
GenerativeAI (sentence-transformers) | ❌ | ✅ | ❌ |
Topic Modeling (sklearn) | ❌ | ❌ | ✅ |
Keyphrase Extraction (textblob/nltk/sklearn) | ❌ | ❌ | ✅ |
As noted above, end-to-end question-answering and information extraction in ktrain can be used with either TensorFlow (using framework='tf') or PyTorch (using framework='pt').
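As a rough sketch of what choosing a backend looks like (this assumes the text.SimpleQA class from ktrain's question-answering examples and a document search index previously built at INDEXDIR):

```python
from ktrain import text

INDEXDIR = '/tmp/myindex'  # assumed: a search index built earlier with SimpleQA's indexing methods

# select the PyTorch backend; pass framework='tf' to use TensorFlow instead
qa = text.SimpleQA(INDEXDIR, framework='pt')
answers = qa.ask('When did the Cassini probe launch?')
qa.display_answers(answers[:5])
```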
Please cite the following paper when using ktrain:
```
@article{maiya2020ktrain,
  title={ktrain: A Low-Code Library for Augmented Machine Learning},
  author={Arun S. Maiya},
  year={2020},
  eprint={2004.10703},
  archivePrefix={arXiv},
  primaryClass={cs.LG},
  journal={arXiv preprint arXiv:2004.10703},
}
```
Creator: Arun S. Maiya
Email: arun [at] maiya [dot] net