Project Name | Description | Stars | Downloads | Repos Using This | Packages Using This | Most Recent Commit | Total Releases | Latest Release | Open Issues | License | Language
---|---|---|---|---|---|---|---|---|---|---|---
Tensorflow | An Open Source Machine Learning Framework for Everyone | 179,189 | | 327 | 78 | a day ago | 46 | October 23, 2019 | 2,109 | apache-2.0 | C++
Pytorch | Tensors and Dynamic neural networks in Python with strong GPU acceleration | 73,171 | | 3,341 | 8,254 | 16 hours ago | 39 | November 15, 2023 | 13,133 | other | Python
Yolov5 | YOLOv5 🚀 in PyTorch > ONNX > CoreML > TFLite | 43,636 | | 2 | | 18 hours ago | 3 | June 08, 2022 | 202 | agpl-3.0 | Python
Deepspeed | DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective. | 29,918 | | 87 | | a day ago | 79 | December 01, 2023 | 867 | apache-2.0 | Python
Fastai | The fastai deep learning library | 24,870 | | 184 | 157 | 2 days ago | 147 | October 15, 2023 | 194 | apache-2.0 | Jupyter Notebook
Pytorch Handbook | An open-source book that aims to help readers get started quickly with deep learning development and research using PyTorch; all of the included PyTorch tutorials have been tested and are guaranteed to run. | 18,594 | | | | 4 months ago | | | 52 | | Jupyter Notebook
Lightgbm | A fast, distributed, high performance gradient boosting (GBT, GBDT, GBRT, GBM or MART) framework based on decision tree algorithms, used for ranking, classification and many other machine learning tasks. | 15,692 | | 278 | 573 | 3 days ago | 34 | September 12, 2023 | 345 | mit | C++
Ivy | The Unified AI Framework | 13,805 | | 2 | 7 | 18 hours ago | 103 | October 11, 2023 | 3,428 | other | Python
Onnxruntime | ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator | 11,128 | | 8 | 84 | a day ago | 39 | November 20, 2023 | 2,174 | mit | C++
Turicreate | Turi Create simplifies the development of custom machine learning models. | 11,102 | | 17 | 4 | a month ago | 31 | September 30, 2020 | 520 | bsd-3-clause | C++
Installation | Documentation | Examples | Support | FAQ
With Intel(R) Extension for Scikit-learn you can accelerate your scikit-learn applications and still have full conformance with all scikit-learn APIs and algorithms. This is a free software AI accelerator that brings 10-100X acceleration across a variety of applications, and you do not even need to change the existing code!
Intel(R) Extension for Scikit-learn offers you a way to accelerate existing scikit-learn code. The acceleration is achieved through patching: replacing the stock scikit-learn algorithms with their optimized versions provided by the extension.
One of the ways to patch scikit-learn is by modifying the code. First, import an additional Python package (`sklearnex`) and enable optimizations via `sklearnex.patch_sklearn()`. Then import scikit-learn estimators:
Enable Intel CPU optimizations:

```python
import numpy as np

from sklearnex import patch_sklearn
patch_sklearn()

from sklearn.cluster import DBSCAN

X = np.array([[1., 2.], [2., 2.], [2., 3.],
              [8., 7.], [8., 8.], [25., 80.]], dtype=np.float32)
clustering = DBSCAN(eps=3, min_samples=2).fit(X)
```
Enable Intel GPU optimizations:

```python
import numpy as np
import dpctl

from sklearnex import patch_sklearn, config_context
patch_sklearn()

from sklearn.cluster import DBSCAN

X = np.array([[1., 2.], [2., 2.], [2., 3.],
              [8., 7.], [8., 8.], [25., 80.]], dtype=np.float32)
with config_context(target_offload="gpu:0"):
    clustering = DBSCAN(eps=3, min_samples=2).fit(X)
```
👀 Read about other ways to patch scikit-learn and other methods for offloading to GPU devices. Check out available notebooks for more examples.
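Patching does not have to be global or permanent. As a rough sketch of the alternative workflows mentioned above (the exact options are described in the sklearnex documentation; the script name below is only a placeholder), you can patch selected estimators only, undo the patching, or patch an unmodified script from the command line:

```python
# Sketch: selective patching and unpatching with sklearnex.
from sklearnex import patch_sklearn, unpatch_sklearn

# Patch only the listed algorithms instead of every supported estimator.
patch_sklearn(["DBSCAN"])

from sklearn.cluster import DBSCAN  # now resolves to the accelerated implementation

# ... run your workload ...

# Restore the stock scikit-learn implementations.
unpatch_sklearn()

# Patching without code changes (run from a shell; "my_app.py" is a placeholder):
#   python -m sklearnex my_app.py
```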
This software acceleration is achieved through the use of vector instructions, IA hardware-specific memory optimizations, threading, and optimizations for all upcoming Intel platforms at launch time.
❗ The patching only affects selected algorithms and their parameters.
You may still use algorithms and parameters not supported by Intel(R) Extension for Scikit-learn in your code, and you will not get an error if you do. When you use algorithms or parameters that the extension does not support, the package falls back to the original stock version of scikit-learn.
Configurations:
System Requirements | Install via pip or conda | Build from sources
Intel(R) Extension for Scikit-learn is available at the Python Package Index and on Anaconda Cloud in the conda-forge and Intel channels. You can also build the extension from sources.
The extension is also available as a part of Intel® AI Analytics Toolkit (AI Kit). If you already have AI Kit installed, you do not need to install the extension.
Installation via the `pip` package manager is recommended by default:
```bash
pip install scikit-learn-intelex
```
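If you prefer conda, the extension is published in the conda-forge and Intel channels mentioned above; a typical command for the conda-forge channel (check the installation guide for the channel you want) looks like:

```bash
conda install -c conda-forge scikit-learn-intelex
```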
We publish blogs on Medium, so follow us to learn tips and tricks for more efficient data analysis with the help of Intel(R) Extension for Scikit-learn. Here are our latest blogs:
No. The patching only affects selected algorithms and their parameters.
In cases when unsupported parameters are used, the package falls back to the original stock version of scikit-learn. You will not get an error.
If you use algorithms for which no optimizations are available, their original version from the stock scikit-learn is used.
Yes. To find out which implementation of the algorithm is currently used (Intel(R) Extension for Scikit-learn or original Scikit-learn), use the verbose mode.
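As a minimal sketch, assuming the extension reports through the standard Python `sklearnex` logger as its verbose mode describes, raising the log level before fitting prints which implementation handled each call:

```python
import logging
import numpy as np

# Assumption: verbose mode is exposed through the "sklearnex" logger.
logging.basicConfig()  # make sure a handler exists so INFO messages are printed
logging.getLogger("sklearnex").setLevel(logging.INFO)

from sklearnex import patch_sklearn
patch_sklearn()

from sklearn.cluster import DBSCAN

X = np.array([[1., 2.], [2., 2.], [2., 3.],
              [8., 7.], [8., 8.], [25., 80.]], dtype=np.float32)
DBSCAN(eps=3, min_samples=2).fit(X)  # the log states whether the accelerated or stock path ran
```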
We compare the performance of Intel(R) Extension for Scikit-Learn to other frameworks in Machine Learning Benchmarks. Read our blogs on Medium if you are interested in the detailed comparison.
If the patching does not cover your scenarios, submit an issue on GitHub describing what you would like to see supported.
Report issues, ask questions, and provide suggestions using:
You may reach out to project maintainers privately at [email protected]
Intel(R) Extension for Scikit-learn is part of oneAPI and Intel® AI Analytics Toolkit (AI Kit).
The acceleration is achieved through the use of the Intel(R) oneAPI Data Analytics Library (oneDAL). Learn more:
⚠️ Intel(R) Extension for Scikit-learn contains the scikit-learn patching functionality that was originally available in the daal4py package. All future updates for the patches will be available only in Intel(R) Extension for Scikit-learn. We recommend that you use the scikit-learn-intelex package instead of daal4py. You can learn more about daal4py in the daal4py documentation.