Project Name | Stars | Most Recent Commit | Open Issues | License | Language | Description |
---|---|---|---|---|---|---|
Omninet | 426 | 3 years ago | 1 | apache-2.0 | Python | Official PyTorch implementation of "OmniNet: A unified architecture for multi-modal multi-task learning" (authors: Subhojeet Pramanik, Priyanka Agrawal, Aman Hussain) |
Neuralmonkey | 385 | 4 years ago | 121 | bsd-3-clause | Python | An open-source tool for sequence learning in NLP built on TensorFlow. |
Awesome Foundation And Multimodal Models | 223 | 5 months ago | 2 | | Python | 👁️ + 💬 + 🎧 = 🤖 Curated list of top foundation and multimodal models! [Paper + Code] |
Deep_learning_in_python_2018 | 114 | a year ago | 1 | | Jupyter Notebook | Deep learning workshop covering image classification, face recognition, object detection, language modelling, image captioning, and neural machine translation. |
Clip Gpt Captioning | 71 | 4 months ago | 3 | mit | Python | CLIPxGPT Captioner is an image captioning model based on OpenAI's CLIP and GPT-2. |
Cs224n_project | 65 | 5 years ago | 1 | mit | Jupyter Notebook | Neural image captioning in TensorFlow. |
Image_captioning | 40 | 6 years ago | | mit | Python | Generates captions for images using a CNN-RNN model trained on the Microsoft Common Objects in Context (MS COCO) dataset. |
Image Caption Generator | 37 | 5 years ago | | mit | Jupyter Notebook | An LSTM model generates captions for input images after extracting features with a pre-trained VGG-16 model. (Computer Vision, NLP, Deep Learning, Python) |
Punny_captions | 31 | 6 years ago | | | Python | An implementation of the NAACL 2018 paper "Punny Captions: Witty Wordplay in Image Descriptions". |
Show Attend And Tell Keras | 25 | 5 years ago | 6 | mit | Python | Keras implementation of the "Show, Attend and Tell" paper. |