Awesome Open Source
Search results for "pytorch multi-modal learning"
Filters: multi-modal-learning, pytorch
12 search results found
Open_clip ⭐ 7,355
An open source implementation of CLIP (see the usage sketch after the results list).
Chinese Clip ⭐ 2,816
Chinese version of CLIP, supporting Chinese cross-modal retrieval and representation generation.
Zeta ⭐ 106
Build high-performance AI models with modular building blocks.
Cgdetr ⭐ 43
Official PyTorch repository for CG-DETR, "Correlation-guided Query-Dependency Calibration in Video Representation Learning for Temporal Grounding".
Sugar Crepe ⭐ 40
[NeurIPS 2023] A faithful benchmark for vision-language compositionality.
Hyperdensenet_pytorch ⭐ 29
PyTorch version of the HyperDenseNet deep neural network for multi-modal image segmentation.
Trar Vqa ⭐ 23
Official PyTorch implementation of the ICCV 2021 paper "TRAR: Routing the Attention Spans in Transformers for Visual Question Answering" for the VQA task.
Nemar ⭐ 15
[CVPR 2020] Unsupervised Multi-Modal Image Registration via Geometry Preserving Image-to-Image Translation.
Multimodal Remote Sensing Toolkit ⭐ 13
A Python tool for running deep learning experiments on multimodal remote sensing data.
Dramaqa ⭐ 8
DramaQA starter code (2021).
Deep Learning Framework For Multi Modal Product Classification ⭐ 7
Code repository for the Rakuten Data Challenge: Multimodal Product Classification and Retrieval.
M2hse ⭐ 6
PyTorch code for the paper "Complementarity is the king: A multi-modal and multi-grained hierarchical semantic enhancement network for cross-modal retrieval".
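To give a feel for the top result, below is a minimal zero-shot image classification sketch following the pattern documented in the open_clip project (installable as open_clip_torch). The model name "ViT-B-32", the pretrained tag "laion2b_s34b_b79k", the image path, and the label set are illustrative placeholders, not requirements of the library.

    import torch
    from PIL import Image
    import open_clip

    # Load a CLIP model plus its matching image preprocessing transform.
    # The checkpoint tag is an illustrative choice; open_clip ships many.
    model, _, preprocess = open_clip.create_model_and_transforms(
        "ViT-B-32", pretrained="laion2b_s34b_b79k"
    )
    model.eval()
    tokenizer = open_clip.get_tokenizer("ViT-B-32")

    # "photo.jpg" is a placeholder path to any local image.
    image = preprocess(Image.open("photo.jpg")).unsqueeze(0)
    text = tokenizer(["a diagram", "a dog", "a cat"])

    with torch.no_grad():
        image_features = model.encode_image(image)
        text_features = model.encode_text(text)
        # Unit-normalize so the dot product is a cosine similarity.
        image_features /= image_features.norm(dim=-1, keepdim=True)
        text_features /= text_features.norm(dim=-1, keepdim=True)
        probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

    print("Label probabilities:", probs)

Several of the other results build on the same contrastive image-text embedding idea: Chinese Clip offers a similar workflow for Chinese text, and Sugar Crepe benchmarks exactly this kind of image-text matching.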
Related Searches
Python Pytorch (15,943)
Deep Learning Pytorch (7,533)
Jupyter Notebook Pytorch (4,892)
Machine Learning Pytorch (2,934)
Dataset Pytorch (1,848)
Pytorch Convolutional Neural Networks (1,777)
Pytorch Neural Network (1,631)
Pytorch Natural Language Processing (1,408)
Pytorch Computer Vision (1,230)
Pytorch Neural (1,217)