
U-Net: Semantic segmentation with PyTorch

(Image: input and output for a random image in the test dataset)

A customized implementation of the U-Net in PyTorch for Kaggle's Carvana Image Masking Challenge, applied to high-definition images.

Quick start

Without Docker

  1. Install CUDA

  2. Install PyTorch 1.12 or later

  3. Install dependencies

pip install -r requirements.txt
  4. Download the data and run training (a quick CUDA sanity check follows below):

bash scripts/download_data.sh
python train.py --amp
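
Before training, it can help to confirm that CUDA and PyTorch are wired up correctly. A quick sanity check (plain PyTorch, nothing repo-specific):

import torch

print(torch.__version__)                  # should be 1.12 or later
print(torch.cuda.is_available())          # True if the CUDA install is working
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))  # name of the first visible GPU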

With Docker

  1. Install Docker 19.03 or later:
curl https://get.docker.com | sh && sudo systemctl --now enable docker
  2. Install the NVIDIA container toolkit:
distribution=$(. /etc/os-release;echo $ID$VERSION_ID) \
   && curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add - \
   && curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list
sudo apt-get update
sudo apt-get install -y nvidia-docker2
sudo systemctl restart docker
  3. Download and run the image:
sudo docker run --rm --shm-size=8g --ulimit memlock=-1 --gpus all -it milesial/unet
  4. Download the data and run training:

bash scripts/download_data.sh
python train.py --amp


Description

This model was trained from scratch with 5k images and scored a Dice coefficient of 0.988423 on over 100k test images.

It can easily be used for multiclass segmentation, portrait segmentation, medical segmentation, and more.
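
For example, switching between binary and multiclass use only changes the number of output channels requested from the network. A minimal sketch, assuming the UNet class exposed by this repo takes n_channels and n_classes constructor arguments:

from unet import UNet

# Binary segmentation: RGB input, 2 output classes (background / car)
net = UNet(n_channels=3, n_classes=2)

# Multiclass segmentation: RGB input, e.g. 5 output classes
net_multi = UNet(n_channels=3, n_classes=5)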


Note: use Python 3.6 or newer.


Docker

A Docker image containing the code and the dependencies is available on DockerHub. You can download and jump into the container with (Docker >= 19.03):

docker run -it --rm --shm-size=8g --ulimit memlock=-1 --gpus all milesial/unet


Training

> python train.py -h
usage: train.py [-h] [--epochs E] [--batch-size B] [--learning-rate LR]
                [--load LOAD] [--scale SCALE] [--validation VAL] [--amp]

Train the UNet on images and target masks

optional arguments:
  -h, --help            show this help message and exit
  --epochs E, -e E      Number of epochs
  --batch-size B, -b B  Batch size
  --learning-rate LR, -l LR
                        Learning rate
  --load LOAD, -f LOAD  Load model from a .pth file
  --scale SCALE, -s SCALE
                        Downscaling factor of the images
  --validation VAL, -v VAL
                        Percent of the data that is used as validation (0-100)
  --amp                 Use mixed precision

By default, the scale is 0.5, so if you wish to obtain better results (but use more memory), set it to 1.

Automatic mixed precision is also available with the --amp flag. Mixed precision allows the model to use less memory and to be faster on recent GPUs by using FP16 arithmetic. Enabling AMP is recommended.
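
Under the hood, --amp enables PyTorch's automatic mixed precision (torch.cuda.amp). A minimal, self-contained sketch of the pattern, with a toy model standing in for the UNet (this is not the repo's actual training loop):

import torch
import torch.nn as nn

# Toy stand-ins so the sketch runs end to end
model = nn.Conv2d(3, 2, kernel_size=1).cuda()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
scaler = torch.cuda.amp.GradScaler()      # rescales gradients so FP16 does not underflow

images = torch.randn(4, 3, 64, 64, device='cuda')
masks = torch.randint(0, 2, (4, 64, 64), device='cuda')

optimizer.zero_grad()
with torch.cuda.amp.autocast():           # forward pass runs in mixed FP16/FP32
    loss = criterion(model(images), masks)
scaler.scale(loss).backward()             # backward on the scaled loss
scaler.step(optimizer)                    # unscales gradients, then steps the optimizer
scaler.update()                           # adapts the loss scale for the next iteration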


Prediction

After training your model and saving it to MODEL.pth, you can easily test the output masks on your images via the CLI.

To predict a single image and save it:

python predict.py -i image.jpg -o output.jpg

To predict multiple images and show them without saving them:

python predict.py -i image1.jpg image2.jpg --viz --no-save

> python predict.py -h
usage: predict.py [-h] [--model FILE] --input INPUT [INPUT ...]
                  [--output INPUT [INPUT ...]] [--viz] [--no-save]
                  [--mask-threshold MASK_THRESHOLD] [--scale SCALE]

Predict masks from input images

optional arguments:
  -h, --help            show this help message and exit
  --model FILE, -m FILE
                        Specify the file in which the model is stored
  --input INPUT [INPUT ...], -i INPUT [INPUT ...]
                        Filenames of input images
  --output INPUT [INPUT ...], -o INPUT [INPUT ...]
                        Filenames of output images
  --viz, -v             Visualize the images as they are processed
  --no-save, -n         Do not save the output masks
  --mask-threshold MASK_THRESHOLD, -t MASK_THRESHOLD
                        Minimum probability value to consider a mask pixel white
  --scale SCALE, -s SCALE
                        Scale factor for the input images

You can specify which model file to use with --model MODEL.pth.
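
The --mask-threshold flag controls how predicted probabilities are binarized: a pixel becomes white when its probability exceeds the threshold. A minimal sketch of the idea (not predict.py's exact post-processing):

import torch

threshold = 0.5                           # plays the role of --mask-threshold
logits = torch.randn(1, 1, 4, 4)          # fake network output for one image
probs = torch.sigmoid(logits)             # map logits to probabilities in [0, 1]
mask = (probs > threshold).float()        # 1.0 (white) where probability > threshold
print(mask)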

Weights & Biases

The training progress can be visualized in real-time using Weights & Biases. Loss curves, validation curves, weights and gradient histograms, as well as predicted masks are logged to the platform.

When launching a training run, a link will be printed in the console. Click on it to go to your dashboard. If you have an existing W&B account, you can link it by setting the WANDB_API_KEY environment variable. If not, an anonymous run will be created and automatically deleted after 7 days.
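
If you want to reproduce this logging pattern in your own code, the W&B API boils down to wandb.init plus wandb.log. A minimal sketch (the metric names and config here are illustrative, not the repo's exact keys):

import wandb

# Creates an anonymous run if WANDB_API_KEY is not set
run = wandb.init(project='U-Net', anonymous='allow',
                 config={'epochs': 5, 'batch_size': 1, 'learning_rate': 1e-5})

for step in range(10):
    fake_loss = 1.0 / (step + 1)          # stand-in for the real training loss
    wandb.log({'train loss': fake_loss, 'step': step})

run.finish()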

Pretrained model

A pretrained model is available for the Carvana dataset. It can also be loaded from torch.hub:

net = torch.hub.load('milesial/Pytorch-UNet', 'unet_carvana', pretrained=True, scale=0.5)

Available scales are 0.5 and 1.0.
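
Once loaded, the network can be used for inference directly. A minimal sketch with a random input batch; real usage should load and normalize an image the same way predict.py does:

import torch
import torch.nn.functional as F

net = torch.hub.load('milesial/Pytorch-UNet', 'unet_carvana',
                     pretrained=True, scale=0.5)
net.eval()

x = torch.rand(1, 3, 480, 720)            # fake RGB batch in [0, 1]

with torch.no_grad():
    logits = net(x)                       # (1, n_classes, H, W) raw scores
    probs = F.softmax(logits, dim=1)      # per-pixel class probabilities
    mask = probs.argmax(dim=1)            # (1, H, W) predicted class per pixel

print(mask.shape)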


Data

The Carvana data is available on the Kaggle website.

You can also download it using the helper script:

bash scripts/download_data.sh

The input images and target masks should be in the data/imgs and data/masks folders respectively (note that these folders should not contain any sub-folders or other files, due to the greedy data loader). For Carvana, images are RGB and masks are black and white.

You can use your own dataset as long as you make sure it is loaded properly in utils/data_loading.py. An illustrative loader is sketched below.
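
To show the contract the training code expects (an image tensor plus its mask per item), here is a stripped-down Dataset. It is a sketch, not the repo's actual loader, and it assumes masks share their image's filename, which may not match your naming scheme:

from pathlib import Path

import numpy as np
import torch
from PIL import Image
from torch.utils.data import Dataset


class SimpleSegDataset(Dataset):
    """Pairs each image in img_dir with the same-named mask in mask_dir."""

    def __init__(self, img_dir='data/imgs', mask_dir='data/masks'):
        self.img_paths = sorted(Path(img_dir).glob('*'))
        self.mask_dir = Path(mask_dir)

    def __len__(self):
        return len(self.img_paths)

    def __getitem__(self, i):
        img_path = self.img_paths[i]
        img = np.asarray(Image.open(img_path).convert('RGB')) / 255.0
        mask = np.asarray(Image.open(self.mask_dir / img_path.name).convert('L'))
        return (torch.as_tensor(img, dtype=torch.float32).permute(2, 0, 1),  # CHW float
                torch.as_tensor(mask > 0, dtype=torch.long))                 # class indices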

Original paper by Olaf Ronneberger, Philipp Fischer, Thomas Brox:

U-Net: Convolutional Networks for Biomedical Image Segmentation

(Figure: U-Net network architecture)
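
As a reading aid for the paper, here is a condensed sketch of the encoder-decoder pattern with a skip connection. It uses one down/up level instead of the paper's four, and is not this repo's implementation:

import torch
import torch.nn as nn


def double_conv(c_in, c_out):
    # Two 3x3 convolutions with BatchNorm and ReLU, the basic U-Net block
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.BatchNorm2d(c_out), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.BatchNorm2d(c_out), nn.ReLU(inplace=True),
    )


class TinyUNet(nn.Module):
    def __init__(self, n_channels=3, n_classes=2):
        super().__init__()
        self.inc = double_conv(n_channels, 16)
        self.down = nn.Sequential(nn.MaxPool2d(2), double_conv(16, 32))
        self.up = nn.ConvTranspose2d(32, 16, kernel_size=2, stride=2)
        self.dec = double_conv(32, 16)           # 16 skip + 16 upsampled channels
        self.outc = nn.Conv2d(16, n_classes, kernel_size=1)

    def forward(self, x):
        x1 = self.inc(x)                         # full-resolution features
        x2 = self.down(x1)                       # half-resolution features
        x = self.up(x2)                          # upsample back to full resolution
        x = self.dec(torch.cat([x, x1], dim=1))  # skip connection: concat encoder features
        return self.outc(x)


print(TinyUNet()(torch.rand(1, 3, 64, 64)).shape)  # torch.Size([1, 2, 64, 64])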
