| Project Name | Stars | Downloads | Repos Using This | Packages Using This | Most Recent Commit | Total Releases | Latest Release | Open Issues | License | Language | Description |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Transformers | 102,613 | | 64 | 911 | 20 hours ago | 91 | June 21, 2022 | 724 | apache-2.0 | Python | 🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX. |
| Stable Diffusion Webui | 80,323 | | | | 19 hours ago | | | 1,928 | agpl-3.0 | Python | Stable Diffusion web UI |
| Pytorch | 67,463 | | | 146 | 19 hours ago | 23 | August 10, 2022 | 12,145 | other | Python | Tensors and Dynamic neural networks in Python with strong GPU acceleration |
| Real Time Voice Cloning | 41,693 | | | | a month ago | | | 129 | other | Python | Clone a voice in 5 seconds to generate arbitrary speech in real-time |
| Yolov5 | 39,026 | | | | 19 hours ago | 35 | May 21, 2022 | 276 | agpl-3.0 | Python | YOLOv5 🚀 in PyTorch > ONNX > CoreML > TFLite |
| Made With Ml | 33,193 | | | | a month ago | 5 | May 15, 2019 | 11 | mit | Jupyter Notebook | Learn how to responsibly develop, deploy and maintain production machine learning applications. |
| Mockingbird | 28,543 | | | 2 | a month ago | 9 | February 28, 2022 | 411 | other | Python | 🚀 AI voice cloning: clone a voice in 5 seconds and generate arbitrary speech in real time |
| Gfpgan | 27,651 | | | 1 | 2 months ago | 11 | February 15, 2022 | 209 | other | Python | GFPGAN aims at developing Practical Algorithms for Real-world Face Restoration. |
| Pytorch Tutorial | 26,129 | | | | 2 months ago | | | 85 | mit | Python | PyTorch Tutorial for Deep Learning Researchers |
| Ray | 25,863 | | 80 | 199 | 19 hours ago | 76 | June 09, 2022 | 2,888 | apache-2.0 | Python | Ray is a unified framework for scaling AI and Python applications. Ray consists of a core distributed runtime and a toolkit of libraries (Ray AIR) for accelerating ML workloads. |
Embed images and sentences into fixed-length vectors via CLIP

CLIP-as-service is a low-latency, high-scalability embedding service for images and texts. It can be easily integrated as a microservice into neural search solutions.
- ⚡ **Fast**: Serve CLIP models with ONNX runtime and PyTorch JIT at 800 QPS[*]. Non-blocking duplex streaming on requests and responses, designed for large data and long-running tasks.
- 🫐 **Elastic**: Horizontally scale multiple CLIP models up and down on a single GPU, with automatic load balancing.
- 🐥 **Easy-to-use**: No learning curve, minimalist design on client and server. Intuitive and consistent API for image and sentence embedding.
- 👒 **Modern**: Async client support. Easily switch between gRPC, HTTP, and WebSocket protocols, with TLS and compression.
- 🍱 **Integration**: Smoothly integrated with the neural search ecosystem, including Jina and DocArray. Build cross-modal and multi-modal solutions in no time.
[*] with default config (single replica, PyTorch no JIT) on GeForce RTX 3090.
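As a minimal sketch of the protocol switching mentioned above (the addresses below are placeholders, and `http://`/`ws://` scheme support is assumed to mirror `grpc://`):

```python
from clip_client import Client

# The URI scheme selects the transport; addresses are placeholders.
c_grpc = Client('grpc://0.0.0.0:51000')  # gRPC
c_http = Client('http://0.0.0.0:51000')  # HTTP
c_ws = Client('ws://0.0.0.0:51000')      # WebSocket

r = c_grpc.encode(['hello, world'])
print(r.shape)  # (1, 512) with the default model
```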
CLIP-as-service consists of two Python packages, `clip-server` and `clip-client`, that can be installed independently. Both require Python 3.7+.
Install the server:

```bash
pip install clip-server
```
To run the CLIP model via ONNX runtime (the default is PyTorch), install the ONNX extra:

```bash
pip install "clip-server[onnx]"
```
Install the client:

```bash
pip install clip-client
```
You can run a simple connectivity check after installation:
| C/S | Command | Expected output |
|---|---|---|
| Server | `python -m clip_server` | the startup banner showing the server's address and port |
| Client | `python -c "from clip_client import Client; print(Client('grpc://0.0.0.0:51000').encode(['hello']).shape)"` | `(1, 512)` |
You can change `0.0.0.0` to an intranet or public IP address to test connectivity over private and public networks. If you encounter errors, please find the solution in the docs.
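For example, a minimal scripted version of this check (the intranet address below is a placeholder):

```python
from clip_client import Client

# Placeholder intranet address; replace with your server's IP and port.
c = Client('grpc://192.168.1.100:51000')

try:
    print(c.encode(['connectivity check']).shape)  # expect (1, 512)
except Exception as e:
    print(f'cannot reach the server: {e}')
```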
Start the server with `python -m clip_server`. Remember its address and port.

Create a client and encode sentences:

```python
from clip_client import Client

c = Client('grpc://87.191.159.105:51000')

r = c.encode(['First do it', 'then do it right', 'then do it better'])
print(r.shape)  # [3, 512]
```

You can also encode images, given as a local path, a remote URL, or a data URI:

```python
r = c.encode([
    'apple.png',  # local image
    'https://docarray.jina.ai/_static/favicon.png',  # remote image
    'data:image/gif;base64,R0lGODlhEAAQAMQAAORHHOVSKudfOulrSOp3WOyDZu6QdvCchPGolfO0o/XBs/fNwfjZ0frl3/zy7////wAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAACH5BAkAABAALAAAAAAQABAAAAVVICSOZGlCQAosJ6mu7fiyZeKqNKToQGDsM8hBADgUXoGAiqhSvp5QAnQKGIgUhwFUYLCVDFCrKUE1lBavAViFIDlTImbKC5Gm2hB0SlBCBMQiB0UjIQA7',  # image as data URI
])
print(r.shape)  # [3, 512]
```
More comprehensive server and client configurations can be found in the docs.
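As a hedged sketch of the client-side knobs (the `batch_size` parameter is an assumption here; check the docs for the authoritative list):

```python
from clip_client import Client

c = Client('grpc://0.0.0.0:51000')

# show_progress is used throughout this README; batch_size is assumed
# to control the per-request chunk size -- verify against the docs.
r = c.encode(
    ['a sentence to embed'],
    show_progress=True,
    batch_size=8,
)
```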
Let's build a text-to-image search using CLIP-as-service: the user inputs a sentence and the program returns matching images. We will use the Totally Looks Like dataset and the DocArray package. Note that DocArray is an upstream dependency of `clip-client`, so you don't need to install it separately.
First we load the images. You can simply pull them from Jina Cloud:

```python
from docarray import DocumentArray

da = DocumentArray.pull('ttl-original', show_progress=True, local_cache=True)
```
Alternatively, you can download the dataset from the Totally Looks Like official website, unzip it, and load the images as follows:

```python
from docarray import DocumentArray

da = DocumentArray.from_files(['left/*.jpg', 'right/*.jpg'])
```
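A quick sanity check on the loaded DocumentArray (a sketch using standard DocArray accessors):

```python
print(len(da))    # total number of images loaded
print(da[0].uri)  # path of the first image file
```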
The dataset contains 12,032 images, so the pull may take half a minute. Once done, you can visualize them and get a first taste of the images:

```python
da.plot_image_sprites()
```
Start the server with `python -m clip_server`. Say it is at `87.191.159.105:51000`, using the `GRPC` protocol (you will see this information after the server starts).

Create a Python client script:

```python
from clip_client import Client

c = Client(server='grpc://87.191.159.105:51000')

da = c.encode(da, show_progress=True)
```
Depending on your GPU and the client-server network, embedding 12K images can take a while; in my case it took about two minutes.

If you are impatient or don't have a GPU, waiting can be hell. In that case, you can simply pull our pre-encoded image dataset:
```python
from docarray import DocumentArray

da = DocumentArray.pull('ttl-embedding', show_progress=True, local_cache=True)
```
Let's build a simple prompt that lets the user type a sentence:

```python
while True:
    vec = c.encode([input('sentence> ')])
    r = da.find(query=vec, limit=9)
    r.plot_image_sprites()
```
Now you can input arbitrary English sentences and view the top-9 matching images. Search is fast and instinctive. Let's have some fun:
"a happy potato" | "a super evil AI" | "a guy enjoying his burger" |
---|---|---|
|
|
|
"professor cat is very serious" | "an ego engineer lives with parent" | "there will be no tomorrow so lets eat unhealthy" |
---|---|---|
|
|
|
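If you prefer plain-text output over image sprites, a minimal variation of the loop above (assuming each Document keeps its image path in `.uri`):

```python
vec = c.encode(['a happy potato'])

# Print the file paths of the top-9 matches instead of plotting them.
for m in da.find(query=vec, limit=9):
    print(m.uri)
```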
Let's save the embedding result for our next example:

```python
da.save_binary('ttl-image')
```
We can also switch the input and output of the last program to achieve image-to-text search: given a query image, find the sentence that best describes it.

Let's use all sentences from the book "Pride and Prejudice":
```python
from docarray import Document, DocumentArray

d = Document(uri='https://www.gutenberg.org/files/1342/1342-0.txt').load_uri_to_text()
da = DocumentArray(
    Document(text=s.strip()) for s in d.text.replace('\r\n', '').split('.') if s.strip()
)
```
Let's look at what we got:

```python
da.summary()
```

```text
            Documents Summary

  Length                 6403
  Homogenous Documents   True
  Common Attributes      ('id', 'text')

           Attributes Summary

  Attribute   Data type   #Unique values   Has empty value
 ──────────────────────────────────────────────────────────
  id          ('str',)    6403             False
  text        ('str',)    6030             False
```
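Note that splitting on `'.'` also breaks abbreviations such as "Mr.", leaving fragments (you will see one in the results below). If you want cleaner sentences, an optional filter like this could be applied (the threshold is arbitrary; we keep the full set below):

```python
from docarray import DocumentArray

# Optional: drop very short fragments left by the naive split on '.'.
da = DocumentArray(d for d in da if len(d.text) > 20)
```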
Now encode these 6,403 sentences; it may take ten seconds or less depending on your GPU and network:

```python
from clip_client import Client

c = Client('grpc://87.191.159.105:51000')

r = c.encode(da, show_progress=True)
```
Again, for those who are impatient or lack a GPU, we have prepared a pre-encoded text dataset:

```python
from docarray import DocumentArray

da = DocumentArray.pull('ttl-textual', show_progress=True, local_cache=True)
```
Let's load our previously stored image embeddings, randomly sample 10 image Documents from them, then find the top-1 nearest neighbour of each:

```python
from docarray import DocumentArray

img_da = DocumentArray.load_binary('ttl-image')

for d in img_da.sample(10):
    print(da.find(d.embedding, limit=1)[0].text)
```
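To see more than one candidate caption per image, the same `find` call can return a few neighbours (a small extension of the loop above):

```python
for d in img_da.sample(3):
    # Top-3 candidate sentences for each sampled image.
    for m in da.find(d.embedding, limit=3):
        print(m.text)
    print('---')
```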
Fun time! Note that, unlike the previous example, here the input is an image and the output is a sentence. All sentences come from the book "Pride and Prejudice".
(Query images omitted.) Sample matched sentences: "Besides, there was truth in his looks", "Gardiner smiled", "what's his name", "By tea time, however, the dose had been enough, and Mr", "You do not look well", "'A gamester!' she cried", "If you mention my name at the Bell, you will be attended to", "Never mind Miss Lizzy's hair", "Elizabeth will soon be the wife of Mr", "I saw them the night before last".
Intrigued? That's only scratching the surface of what CLIP-as-service is capable of. Read our docs to learn more.
CLIP-as-service is backed by Jina AI and licensed under Apache-2.0. We are actively hiring AI engineers and solution engineers to build the next neural search ecosystem in open source.