Dogs vs Cats

VGG-style convolutional neural network with very leaky ReLU for the Kaggle Dogs vs Cats competition. Currently scores 96.6% on the Kaggle leaderboard without using outside data, relying heavily on data augmentation for generalization, followed by a small amount of fine-tuning (finishing training with a small number of iterations at a very low learning rate and no data augmentation).

Architecture

| Layer Type      | Parameters                  |
| --------------- | --------------------------- |
| input           | size: 168x168, channels: 3  |
| convolution     | kernel: 3x3, channels: 32   |
| leaky ReLU      | alpha = 0.2                 |
| convolution     | kernel: 3x3, channels: 32   |
| leaky ReLU      | alpha = 0.2                 |
| max pool        | kernel: 2x2                 |
| dropout         | 0.1                         |
| convolution     | kernel: 3x3, channels: 64   |
| leaky ReLU      | alpha = 0.2                 |
| convolution     | kernel: 3x3, channels: 64   |
| leaky ReLU      | alpha = 0.2                 |
| max pool        | kernel: 2x2                 |
| dropout         | 0.2                         |
| convolution     | kernel: 3x3, channels: 128  |
| leaky ReLU      | alpha = 0.2                 |
| convolution     | kernel: 3x3, channels: 128  |
| leaky ReLU      | alpha = 0.2                 |
| convolution     | kernel: 3x3, channels: 128  |
| leaky ReLU      | alpha = 0.2                 |
| max pool        | kernel: 2x2                 |
| dropout         | 0.3                         |
| fully connected | units: 1024                 |
| leaky ReLU      | alpha = 0.2                 |
| dropout         | 0.5                         |
| fully connected | units: 1024                 |
| leaky ReLU      | alpha = 0.2                 |
| dropout         | 0.5                         |
| softmax         |                             |
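The table above can be sanity-checked by propagating feature-map shapes and parameter counts through the network in plain Python. This sketch assumes 'same' padding for the convolutions and 2 output classes (dog/cat); neither is stated explicitly in the table.

```python
# Sketch: propagate shapes and parameter counts through the architecture
# table above. Assumes 'same' convolution padding and 2 output classes,
# both of which are assumptions, not stated in the README.

def conv_params(k, c_in, c_out):
    """Weights plus biases for a k x k convolution layer."""
    return k * k * c_in * c_out + c_out

size, channels = 168, 3
params = 0

# three conv blocks: (output channels, number of 3x3 convs), each
# followed by a 2x2 max pool that halves the spatial dimensions
for block_channels, n_convs in [(32, 2), (64, 2), (128, 3)]:
    for _ in range(n_convs):
        params += conv_params(3, channels, block_channels)
        channels = block_channels
    size //= 2

flat = size * size * channels            # input to the first dense layer
for units in (1024, 1024, 2):            # two FC layers + softmax output
    params += flat * units + units
    flat = units

print(f"final feature map: {size}x{size}x{channels}")
print(f"total parameters: {params:,}")
```

Under these assumptions the three pools reduce 168x168 to 21x21, and the first fully connected layer dominates the parameter count.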

Data augmentation

Images are randomly transformed 'on the fly' as each batch is prepared: the CPU prepares the next batch while the GPU runs the previous batch through the network.

  • Random rotations between -30 and 30 degrees.
  • Random cropping between -24 and 24 pixels in any direction.
  • Random zoom between factors of 1 and 1.3.
  • Random shearing between -10 and 10 degrees.
  • Random intensity scaling on RGB channels, independent scaling on each channel.
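The augmentations above can be sampled and composed into a single affine transform per image, so only one interpolation pass is needed. This is a minimal numpy sketch, not the original notebook's code; the 0.9–1.1 intensity-scaling range is an assumption, since the README gives no bounds for it.

```python
import numpy as np

# Sketch (not the original notebook's code): sample the augmentation
# parameters listed above and compose rotation, shear, and zoom into one
# 2x2 matrix plus a translation that a warp routine could apply per image.

rng = np.random.default_rng(0)

def sample_affine():
    angle = np.deg2rad(rng.uniform(-30, 30))   # random rotation, degrees
    shear = np.deg2rad(rng.uniform(-10, 10))   # random shear, degrees
    zoom  = rng.uniform(1.0, 1.3)              # random zoom factor
    shift = rng.uniform(-24, 24, size=2)       # random crop offset, pixels

    rot = np.array([[np.cos(angle), -np.sin(angle)],
                    [np.sin(angle),  np.cos(angle)]])
    shr = np.array([[1.0, np.tan(shear)],
                    [0.0, 1.0]])
    return zoom * (rot @ shr), shift

def sample_channel_scales():
    # independent random intensity scaling per RGB channel;
    # the 0.9-1.1 range is an assumption, the README states no bounds
    return rng.uniform(0.9, 1.1, size=3)

mat, shift = sample_affine()
scales = sample_channel_scales()
```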


To-do

  • Stream data from SSD instead of holding all images in memory (need to install an SSD first).
  • Try different network architectures and data pre-processing.
  • Try the intensity scaling method from Krizhevsky et al., 2012.
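For reference, the Krizhevsky et al. (2012) intensity scaling mentioned above is PCA-based color augmentation: each pixel is shifted by a random multiple of the principal components of the RGB covariance, scaled by the corresponding eigenvalues. A minimal numpy sketch (a random array stands in for a real image):

```python
import numpy as np

# Sketch of the Krizhevsky et al. (2012) intensity scheme: add to every
# pixel a random combination of the RGB-covariance principal components,
# each weighted by its eigenvalue times a N(0, 0.1) draw.

rng = np.random.default_rng(0)
image = rng.random((168, 168, 3))        # placeholder for a real image

pixels = image.reshape(-1, 3)
cov = np.cov(pixels, rowvar=False)       # 3x3 RGB covariance
eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvectors are the columns

alpha = rng.normal(0.0, 0.1, size=3)     # the paper uses sigma = 0.1
shift = eigvecs @ (alpha * eigvals)      # one per-channel offset per image
augmented = np.clip(image + shift, 0.0, 1.0)
```

The shift is drawn once per image (not per pixel), so it perturbs overall color and intensity while leaving object shape untouched.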

References

  • Karen Simonyan, Andrew Zisserman, "Very Deep Convolutional Networks for Large-Scale Image Recognition", link
  • Alex Krizhevsky, Ilya Sutskever, Geoffrey E. Hinton, "ImageNet Classification with Deep Convolutional Neural Networks", link
  • Sander Dieleman, "Classifying plankton with deep neural networks", link