Awesome Open Source
Search results for multimodal foundation model
Filters: foundation-model, multimodal
5 search results found
InternGPT — ⭐ 2,976
InternGPT (iGPT) is an open-source demo platform where you can easily showcase your AI models. It now supports DragGAN, ChatGPT, ImageBind, multimodal chat like GPT-4, SAM, interactive image editing, and more. Try it at igpt.opengvlab.com (an online demo system supporting DragGAN, ChatGPT, ImageBind, and SAM).
Seed — ⭐ 326
Empowers LLMs with the ability to see and draw.
Recommendation Systems Without Explicit ID Features: A Literature Review — ⭐ 171
Large pre-trained foundation recommender models.
Visual Med-Alpaca — ⭐ 120
Visual Med-Alpaca is an open-source, multimodal foundation model designed specifically for the biomedical domain, built on LLaMA-7B.
LLark — ⭐ 97
Code for the paper "LLark: A Multimodal Foundation Model for Music" by Josh Gardner, Simon Durand, Daniel Stoller, and Rachel Bittner.
Related Searches
Python Multimodal (186)
Jupyter Notebook Multimodal (27)
Python Foundation Model (19)
VQA Multimodal (12)
Image Captioning Multimodal (8)
Multimodal Vision Language (8)
Recommendation System Multimodal (7)
Multimodal Pre-Training (6)
GPT Multimodal (4)
GPT-3 Multimodal (4)
Copyright 2018–2024 Awesome Open Source. All rights reserved.