Dataset of images of trash; Torch-based CNN for garbage image classification


Code (only for the convolutional neural network) and dataset for Mindy Yang's and my final project for Stanford's CS 229: Machine Learning class. Our paper can be found here. The convolutional neural network results on the poster are outdated: we continued working after the end of the quarter and were able to reach around 75% test accuracy (with a 70/13/17 train/val/test split) after changing the weight initialization to the Kaiming method.
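For reference, the Kaiming weight initialization mentioned above draws weights from a normal distribution with standard deviation sqrt(2 / fan_in). A minimal NumPy sketch (illustrative only; the project's actual initialization is done in Lua via @e-lab's weight-init module):

```python
import numpy as np

def kaiming_normal(fan_in, fan_out, rng=None):
    """He/Kaiming normal init: std = sqrt(2 / fan_in), suited to ReLU networks."""
    rng = rng or np.random.default_rng(0)
    std = np.sqrt(2.0 / fan_in)
    return rng.normal(0.0, std, size=(fan_in, fan_out))

# Example: a fully connected layer with 256 inputs and 128 outputs
W = kaiming_normal(256, 128)
```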


This repository contains the dataset that we collected. The dataset spans six classes: glass, paper, cardboard, plastic, metal, and trash. Currently, the dataset consists of 2527 images:

  • 501 glass
  • 594 paper
  • 403 cardboard
  • 482 plastic
  • 410 metal
  • 137 trash
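As a sanity check after downloading, the per-class counts can be tallied against the numbers above (a minimal sketch; the data/dataset-resized/<class>/ directory layout is an assumption based on this repository's data folder):

```python
import os

CLASSES = ["glass", "paper", "cardboard", "plastic", "metal", "trash"]

def count_images(root="data/dataset-resized"):
    """Return a dict mapping each class name to its number of image files."""
    return {c: len(os.listdir(os.path.join(root, c))) for c in CLASSES}

# Expected counts from the dataset description:
expected = {"glass": 501, "paper": 594, "cardboard": 403,
            "plastic": 482, "metal": 410, "trash": 137}
total = sum(expected.values())  # 2527
```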

The pictures were taken by placing the object on a white posterboard and using sunlight and/or room lighting. They have been resized down to 512 x 384, which can be changed in data/ (resizing them involves going through step 1 in Usage). The devices used were an Apple iPhone 7 Plus, an Apple iPhone 5S, and an Apple iPhone SE.
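The resize step can be reproduced in a few lines of Python. A sketch assuming Pillow (the repository's own preprocessing lives in data/ and may differ):

```python
from PIL import Image

def resize_image(src_path, dst_path, size=(512, 384)):
    """Downscale a photo to the dataset's 512 x 384 resolution."""
    with Image.open(src_path) as img:
        img.resize(size, Image.LANCZOS).save(dst_path)
```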

The original dataset (~3.5GB) exceeds the git-lfs maximum file size, so it has been uploaded to Google Drive. If you plan to use the Python code to preprocess the original dataset, download it from the link below and place the unzipped folder inside the data folder.

If you use the dataset, please cite this repository. The dataset can be downloaded here.


Lua setup

We wrote the code in Lua using Torch; you can find installation instructions here. After installing Torch, install the required Lua packages (torch, nn, optim, image, gnuplot) by running the following:

# Install using Luarocks
luarocks install torch
luarocks install nn
luarocks install optim
luarocks install image
luarocks install gnuplot

We also need @e-lab's weight-init module, which is already included in this repository.

CUDA support

Because training takes a while, you will want to use a GPU to get results in a reasonable amount of time. We used CUDA with a GTX 650 Ti. To enable GPU acceleration, you'll first need to install CUDA 6.5 or higher. Find CUDA installations here.

Then install the Lua packages for CUDA support (cutorch, cunn) by running the following:

luarocks install cutorch
luarocks install cunn

Python setup

Python is currently used for some image preprocessing tasks. Install the Python dependencies (numpy, scipy) by running the following:

# Install using pip
pip install numpy scipy


Step 1: Prepare the data

Unzip data/

If you add more data, the new files must be enumerated properly, placed into the appropriate folder in data/dataset-original, and then preprocessed. Preprocessing the data involves deleting the data/dataset-resized folder and then calling python from trashnet/data. This will take around half an hour.
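A 70/13/17 train/val/test split like the one quoted earlier can be sketched as follows (illustrative only; the repository's actual split is performed in the Lua training code):

```python
import random

def split_dataset(files, fractions=(0.70, 0.13, 0.17), seed=0):
    """Shuffle a list of files and split it into train/val/test partitions."""
    files = list(files)
    random.Random(seed).shuffle(files)  # deterministic shuffle for reproducibility
    n = len(files)
    n_train = int(fractions[0] * n)
    n_val = int(fractions[1] * n)
    return (files[:n_train],
            files[n_train:n_train + n_val],
            files[n_train + n_val:])

# Example with 2527 placeholder items, matching the dataset size
train, val, test = split_dataset(range(2527))
```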

Step 2: Train the model


Step 3: Test the model


Step 4: View the results



  1. Fork it!
  2. Create your feature branch: git checkout -b my-new-feature
  3. Commit your changes: git commit -m 'Add some feature'
  4. Push to the branch: git push origin my-new-feature
  5. Submit a pull request



  • finish the Usage portion of the README
  • add specific results (and parameters used) that were achieved after the CS 229 project deadline
  • add saving of confusion matrix data and creation of graphic to plot.lua
  • rewrite the data preprocessing to only reprocess new images if the dimensions have not changed