Related projects:

- **Pytorch Grad Cam** (7,414 stars, Python, MIT license; last commit 21 days ago; 25 releases, latest May 20, 2022; 71 open issues): Advanced AI explainability for computer vision. Support for CNNs, Vision Transformers, classification, object detection, segmentation, image similarity and more.
- **Jetson Inference** (6,458 stars, C++, MIT license; last commit 2 days ago; 219 open issues): Hello AI World guide to deploying deep-learning inference networks and deep vision primitives with TensorRT and NVIDIA Jetson.
- **Jeelizfacefilter** (2,402 stars, JavaScript, Apache-2.0 license; last commit 23 days ago; 32 releases, latest September 09, 2022): JavaScript/WebGL lightweight face-tracking library designed for augmented-reality webcam filters. Features: multiple-face detection, rotation, mouth opening. Various integration examples are provided (Three.js, Babylon.js, FaceSwap, Canvas2D, CSS3D...).
- **Objectron** (1,958 stars, Jupyter Notebook, other license; last commit a year ago; 21 open issues): Objectron is a dataset of short, object-centric video clips with AR session metadata including camera poses, sparse point clouds, and planes. In each video, the camera moves around and above the object and captures it from different views. Each object is annotated with a 3D bounding box describing its position, orientation, and dimensions. The dataset contains about 15K annotated video clips and 4M annotated images in the following categories: bikes, books, bottles, cameras, cereal boxes, chairs, cups, laptops, and shoes.
- **Berrynet** (1,563 stars, Python, GPL-3.0 license; last commit 6 months ago; 1 release, latest November 01, 2020; 19 open issues): Deep learning gateway on Raspberry Pi and other edge devices.
- **Torch Cam** (1,339 stars, Python, Apache-2.0 license; last commit a month ago; 6 releases, latest October 31, 2021; 9 open issues): Class activation maps for your PyTorch models (CAM, Grad-CAM, Grad-CAM++, Smooth Grad-CAM++, Score-CAM, SS-CAM, IS-CAM, XGrad-CAM, Layer-CAM).
- **Deepcamera** (1,133 stars, JavaScript, MIT license; last commit 5 months ago; 11 open issues): Open-source AI camera. Empower any camera/CCTV with state-of-the-art AI, including facial recognition, person re-identification (RE-ID), car detection, fall detection, and more.
- **Jeelizweboji** (1,008 stars, JavaScript, Apache-2.0 license; last commit a month ago; 2 releases, latest April 30, 2021): JavaScript/WebGL real-time face-tracking and expression-detection library. Build your own emoticons animated in real time in the browser! SVG and THREE.js integration demos are provided.
- **Tf Explain** (934 stars, Python, MIT license; last commit a year ago; 8 releases, latest November 18, 2021; 41 open issues): Interpretability methods for tf.keras models with TensorFlow 2.x.
- **Saliency** (872 stars, Jupyter Notebook, Apache-2.0 license; last commit 2 months ago; 11 releases, latest June 14, 2022; 9 open issues): Framework-agnostic implementation of state-of-the-art saliency methods (XRAI, BlurIG, SmoothGrad, and more).
Code for the paper

**Grad-CAM: Why did you say that? Visual Explanations from Deep Networks via Gradient-based Localization**
Ramprasaath R. Selvaraju, Abhishek Das, Ramakrishna Vedantam, Michael Cogswell, Devi Parikh, Dhruv Batra
https://arxiv.org/abs/1610.02391

Demo: gradcam.cloudcv.org
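At its core, Grad-CAM global-average-pools the gradients of the class score with respect to a convolutional layer's activations, uses those pooled values to weight the activation channels, and passes the weighted sum through a ReLU. A minimal NumPy sketch of that computation (the activation and gradient arrays here are stand-ins; this repo extracts them from a real network in Torch):

```python
import numpy as np

def grad_cam(activations, gradients):
    """Compute a raw Grad-CAM map.

    activations: (C, H, W) feature maps from the chosen conv layer.
    gradients:   (C, H, W) gradients of the class score w.r.t. those maps.
    """
    # Global-average-pool the gradients to get one weight per channel.
    weights = gradients.mean(axis=(1, 2))             # shape (C,)
    # Weighted sum of the activation channels.
    cam = np.tensordot(weights, activations, axes=1)  # shape (H, W)
    # ReLU: keep only features with a positive influence on the class.
    return np.maximum(cam, 0.0)
```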
Download Caffe model(s) and prototxt for VGG-16/VGG-19/AlexNet using `sh models/download_models.sh`.

## Classification

```
th classification.lua -input_image_path images/cat_dog.jpg -label 243 -gpuid 0
th classification.lua -input_image_path images/cat_dog.jpg -label 283 -gpuid 0
```
### Options

- `proto_file`: Path to the `deploy.prototxt` file for the CNN Caffe model. Default is `models/VGG_ILSVRC_16_layers_deploy.prototxt`.
- `model_file`: Path to the `.caffemodel` file for the CNN Caffe model. Default is `models/VGG_ILSVRC_16_layers.caffemodel`.
- `input_image_path`: Path to the input image. Default is `images/cat_dog.jpg`.
- `input_sz`: Input image size. Default is 224 (change to 227 if using AlexNet).
- `layer_name`: Layer to use for Grad-CAM. Default is `relu5_3` (use `relu5_4` for VGG-19 and `relu5` for AlexNet).
- `label`: Class label to generate Grad-CAM for (-1 = use predicted class, 283 = Tiger cat, 243 = Boxer). Default is -1. These correspond to ILSVRC synset IDs.
- `out_path`: Path to save images in. Default is `output/`.
- `gpuid`: 0-indexed ID of the GPU to use. Default is -1 = CPU.
- `backend`: Backend to use with loadcaffe. Default is `nn`.
- `save_as_heatmap`: Whether to save the heatmap or the raw Grad-CAM. 1 = save heatmap, 0 = save raw Grad-CAM. Default is 1.

Example class labels:

- 'border collie' (233)
- 'tabby cat' (282)
- 'boxer' (243)
- 'tiger cat' (283)
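A conv layer such as `relu5_3` yields a small map (14x14 for a 224 input), so before being saved or overlaid the Grad-CAM map is upsampled to the input size and scaled to a displayable range. A NumPy sketch of those two steps (illustrative only; the repo does this in Torch):

```python
import numpy as np

def upsample_bilinear(cam, size):
    """Bilinearly resize a 2-D CAM (e.g. 14x14 from relu5_3) to size x size."""
    h, w = cam.shape
    rows = np.linspace(0, h - 1, size)
    cols = np.linspace(0, w - 1, size)
    # Separable linear interpolation: along rows first, then along columns.
    tmp = np.stack([np.interp(rows, np.arange(h), cam[:, j])
                    for j in range(w)], axis=1)        # (size, w)
    return np.stack([np.interp(cols, np.arange(w), tmp[i])
                     for i in range(size)], axis=0)    # (size, size)

def normalize(cam):
    """Scale a CAM to [0, 1] so it can be saved or colour-mapped."""
    cam = cam - cam.min()
    peak = cam.max()
    return cam / peak if peak > 0 else cam
```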
## Visual Question Answering

Clone the VQA (http://arxiv.org/abs/1505.00468) sub-repository (`git submodule init && git submodule update`), and download and unzip the provided extracted features and pretrained model.

```
th visual_question_answering.lua -input_image_path images/cat_dog.jpg -question 'What animal?' -answer 'dog' -gpuid 0
th visual_question_answering.lua -input_image_path images/cat_dog.jpg -question 'What animal?' -answer 'cat' -gpuid 0
```
### Options

- `proto_file`: Path to the `deploy.prototxt` file for the CNN Caffe model. Default is `models/VGG_ILSVRC_19_layers_deploy.prototxt`.
- `model_file`: Path to the `.caffemodel` file for the CNN Caffe model. Default is `models/VGG_ILSVRC_19_layers.caffemodel`.
- `input_image_path`: Path to the input image. Default is `images/cat_dog.jpg`.
- `input_sz`: Input image size. Default is 224 (change to 227 if using AlexNet).
- `layer_name`: Layer to use for Grad-CAM. Default is `relu5_4` (use `relu5_3` for VGG-16 and `relu5` for AlexNet).
- `question`: Input question. Default is `What animal?`.
- `answer`: Optional answer (e.g. "cat") to generate Grad-CAM for ('' = use predicted answer). Default is ''.
- `out_path`: Path to save images in. Default is `output/`.
- `model_path`: Path to the VQA model checkpoint. Default is `VQA_LSTM_CNN/lstm.t7`.
- `gpuid`: 0-indexed ID of the GPU to use. Default is -1 = CPU.
- `backend`: Backend to use with loadcaffe. Default is `cudnn`.
- `save_as_heatmap`: Whether to save the heatmap or the raw Grad-CAM. 1 = save heatmap, 0 = save raw Grad-CAM. Default is 1.

Example question-answer pairs:

- What animal? Dog
- What animal? Cat
- What color is the fire hydrant? Green
- What color is the fire hydrant? Yellow
- What color is the fire hydrant? Green and Yellow
- What color is the fire hydrant? Red and Yellow
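Comparing localizations for several candidate answers, as in the examples above, means invoking the CLI once per answer. A small Python wrapper can script this; both functions below are hypothetical helpers, and `sweep` assumes `th` and this repo's `visual_question_answering.lua` are available on the current path:

```python
import subprocess

def build_cmd(image, question, answer="", gpuid=-1):
    """Assemble one visual_question_answering.lua invocation as an argv list."""
    return ["th", "visual_question_answering.lua",
            "-input_image_path", image,
            "-question", question,
            "-answer", answer,
            "-gpuid", str(gpuid)]

def sweep(image, question, answers, gpuid=-1):
    """Run Grad-CAM once per candidate answer (requires Torch on PATH).

    Each run writes its output under out_path (default output/).
    """
    for ans in answers:
        subprocess.run(build_cmd(image, question, ans, gpuid), check=True)

# Example (requires Torch):
#   sweep("images/cat_dog.jpg", "What animal?", ["dog", "cat"], gpuid=0)
```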
## Image Captioning

Clone the neuraltalk2 sub-repository. Running `sh models/download_models.sh` will download the pretrained model and place it in the `neuraltalk2` folder.

Change lines 2-4 of `neuraltalk2/misc/LanguageModel.lua` to the following:

```lua
local utils = require 'neuraltalk2.misc.utils'
local net_utils = require 'neuraltalk2.misc.net_utils'
local LSTM = require 'neuraltalk2.misc.LSTM'
```
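That three-line edit can also be scripted. A throwaway patch helper (hypothetical; it simply rewrites lines 2-4 of the file in place with the `require` paths shown above):

```python
# The replacement require statements, as given in the setup instructions.
NEW_REQUIRES = [
    "local utils = require 'neuraltalk2.misc.utils'\n",
    "local net_utils = require 'neuraltalk2.misc.net_utils'\n",
    "local LSTM = require 'neuraltalk2.misc.LSTM'\n",
]

def patch_language_model(path="neuraltalk2/misc/LanguageModel.lua"):
    """Replace lines 2-4 (1-indexed) with the namespaced require statements."""
    with open(path) as f:
        lines = f.readlines()
    lines[1:4] = NEW_REQUIRES  # 0-indexed slice covering lines 2-4
    with open(path, "w") as f:
        f.writelines(lines)
```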
```
th captioning.lua -input_image_path images/cat_dog.jpg -caption 'a dog and cat posing for a picture' -gpuid 0
th captioning.lua -input_image_path images/cat_dog.jpg -caption '' -gpuid 0
```
### Options

- `input_image_path`: Path to the input image. Default is `images/cat_dog.jpg`.
- `input_sz`: Input image size. Default is 224 (change to 227 if using AlexNet).
- `layer`: Layer to use for Grad-CAM. Default is 30 (`relu5_3` for VGG-16).
- `caption`: Optional input caption. If left empty, the generated caption is used.
- `out_path`: Path to save images in. Default is `output/`.
- `model_path`: Path to the captioning model checkpoint. Default is `neuraltalk2/model_id1-501-1448236541.t7`.
- `gpuid`: 0-indexed ID of the GPU to use. Default is -1 = CPU.
- `backend`: Backend to use with loadcaffe. Default is `cudnn`.
- `save_as_heatmap`: Whether to save the heatmap or the raw Grad-CAM. 1 = save heatmap, 0 = save raw Grad-CAM. Default is 1.

Example captions:

- a dog and cat posing for a picture
- a bathroom with a toilet and a sink
## License

BSD