ansible-windows-docker-springboot

Example project showing how to provision, deploy and run Spring Boot apps inside Docker Windows Containers on a Windows host using Packer, Powershell, Vagrant & Ansible

This is a follow-up to the repository ansible-windows-springboot and the blog post Running Spring Boot Apps on Windows with Ansible. There are some corresponding follow-up blog posts available.

This repository uses the following example Spring Boot / Cloud applications for provisioning: cxf-spring-cloud-netflix-docker

Table of Contents

Before you start...

Preparation: Find a Windows Box - the Evaluation ISO

Step 0 - How to build an Ansible-ready Vagrant box from a Windows ISO

Step 1 - Prepare your Windows Box to run Docker Windows Containers with Ansible

Step 2 - How to run a simple Spring Boot App inside a Docker Windows Container with Ansible

Step 3 - How to scale multiple Spring Boot Apps inside Docker Windows Containers with Ansible, docker-compose and Spring Cloud Netflix

Docker Container Orchestration with Linux & Windows mixed OS setup

Step 4 - A Multi-machine Windows- & Linux- mixed OS Vagrant setup for Docker Swarm

Step 5 - Deploy multiple Spring Boot Apps on mixed-OS Docker Windows- & Linux Swarm with Ansible

Before you start...

Because Microsoft & Docker Inc. developed a native Docker implementation on Windows using Hyper-V (or an even thinner layer) that lets you run small Windows containers inside your Windows box, accessible through the Docker API, I wanted to get my hands on them as soon as I heard about it. A list of example Windows Docker images is provided here.

Firing up Spring Boot apps with Ansible on Windows using Docker sounds like the natural next step after Running Spring Boot Apps on Windows with Ansible.

Before we start: the most important point is to use a correct build number of Windows 10 (1607, Anniversary Update) / Windows Server 2016. It took me days to figure that out: it won´t work with, for example, 10.0.14393.67 - but it will with 10.0.14393.206! I skimmed over this advice in the howto too fast, because I thought "Windows 10 Anniversary should be enough, don´t bother me with build numbers". But take care! The final docker run won´t work (although all the steps before it do, which makes this so hard to diagnose)... You can check your build number by running the following on a Powershell:

(Get-ItemProperty -Path c:\windows\system32\hal.dll).VersionInfo.FileVersion

Here are two examples of its output:

Good build number: 10.0.14393.206

Bad build number: 10.0.14393.67

Because of the minimally required build of Windows 10 or Server 2016, we sadly can´t use the easy-to-download and easy-to-handle Vagrant box with Windows 10 from the Microsoft Edge developer site. So we have to look for an alternative box to start from. My goal was to start from an original Microsoft image and show a 100% reproducible way to get to a running Vagrant box. Because besides the Microsoft Edge boxes (which don´t have the correct build number for now), there aren´t any official Microsoft boxes around in Vagrant Atlas. And hey - we´re dealing with Windows! I don´t want to use a VM where somebody I don´t know has installed things...

Preparation: Find a Windows Box - the Evaluation ISO

After a bit of research you´ll find another way to evaluate a current Windows version: the Windows 10 Enterprise Evaluation ISO or the Windows Server 2016 Evaluation ISO. Both Windows Server 2016 and Windows 10 Enterprise come with a 180-day evaluation licence (you have to register a Live ID for that).

DISCLAIMER: There are two Windows container types: Windows Server Containers (aka isolation level "process", with a shared Windows kernel) and Hyper-V containers (aka isolation level "hyperv"). Windows 10 only supports the latter. But Hyper-V containers are not quite what you´re used to from Docker´s core concepts - Docker relies on process-level isolation and does not use a hypervisor. With that knowledge I would strongly encourage you to go with Windows Server 2016 and leave Windows 10 behind. At first glance it seems somehow "easier" to start with the "smaller" Windows 10. But don´t do that! I can back this advice with lots of hours (if not days) spent trying to get things to work, for myself and with customers - after finally switching to Windows Server, everything was just fine!

So if you really want to go with Windows 10 anyway, it shouldn´t be that much work to write your own Packer template and use the other ISO instead. Here we´ll stay with Windows Server 2016.

Step 0 - How to build an Ansible-ready Vagrant box from a Windows ISO (step0-packer-windows-vagrantbox)

The problem with an ISO: it´s not a nice Vagrant box we can fire up easily for development. But hey, there´s something for us: Packer! This smart tool is able to produce machine images in every flavour - including Vagrant boxes ;) And from the docs: "[Packer] ... is in fact how the official boxes distributed by Vagrant are created." On a Mac you can install it with:

brew install packer

We also install Windows completely unattended - which means we don´t have to click through a single installation screen ;) And we configure the box completely for compatibility with Ansible, which means several things:

  • configure WinRM (aka Powershell remoting) correctly (including Firewall settings)
  • install VirtualBox Guest tools (just for better usability)
  • configure Ansible connectivity

The WinRM connectivity is configured through the Autounattend.xml. At the end we run the configure-ansible.ps1 script - but this is done mostly for peace of mind, because WinRM should already be configured sufficiently.

If you like to dig deeper into the myriad configuration options, have a look at Stefan Scherer´s GitHub repositories - where I learned everything I needed - and also borrowed the mentioned Autounattend.xml from. You can also create one yourself from the ground up - but you´ll need a running Windows instance and have to install the Windows Assessment and Deployment Kit (Windows ADK).

Build your Windows Server 2016 Vagrant box

Download the Windows_Server_2016_Datacenter_EVAL_en-us_14393_refresh.ISO and place it into the /packer folder.

Inside the step0-packer-windows-vagrantbox directory start the build with this command:

packer build -var iso_url=Windows_Server_2016_Datacenter_EVAL_en-us_14393_refresh.ISO -var iso_checksum=70721288bbcdfe3239d8f8c0fae55f1f windows_server_2016_docker.json

Now get yourself a coffee. This will take some time ;)

Add the box and run it

After a successful Packer build, you can init the Vagrant box (and receive a Vagrantfile):

vagrant init 

Now fire up your Windows Server 2016 box:

vagrant up

Step 1 - Prepare your Windows Box to run Docker Windows Containers with Ansible (step1-prepare-docker-windows)

If you don´t want to go the described way of using Packer to build your own Vagrant box and would rather start with your own custom Windows Server 2016 machine right away - no problem! Just be sure to prepare your machine correctly for Ansible.

Now let´s check the Ansible connectivity. cd into the root folder ansible-windows-docker-springboot:

ansible ansible-windows-docker-springboot-dev -i hostsfile -m win_ping

Getting a SUCCESS response, we can start to prepare our Windows box to run Docker Windows Containers. Let´s run the preparation playbook:

ansible-playbook -i hostsfile prepare-docker-windows.yml --extra-vars "host=ansible-windows-docker-springboot-dev"

This playbook does the following things for you:

  • Checking whether you have the correct minimum build version of Windows
  • Installing the necessary Windows features containers and Hyper-V (this is done in a Windows-version-agnostic way - so it will work with Windows 10 AND Server 2016, which is quite unique, because Microsoft itself always distinguishes between these versions)
  • Rebooting your Windows box, if necessary
  • Installing the current Docker version via the chocolatey docker package (although the package claims to only install the client, it also provides the Docker server - which means this is 100% identical to step 2, "Install Docker", of Microsoft´s tutorial)
  • Registering and starting the Docker Windows service
  • Installing docker-compose (only needed for running multiple containers)
  • Running a first Windows container inside your Windows box (via docker run microsoft/dotnet-samples:dotnetapp-nanoserver)
  • Building the springboot-oraclejre-nanoserver Docker image to run our Spring Boot apps later on
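The build-number check from the first bullet can be sketched as Ansible tasks like this (an illustrative sketch based on the Powershell command shown earlier - the task names and the assert are assumptions, not the playbook´s literal code):

```yaml
# Illustrative sketch, not the playbook´s literal code:
# read the Windows build number and fail early if it is too old
- name: Read the Windows build number
  win_shell: "(Get-ItemProperty -Path c:\\windows\\system32\\hal.dll).VersionInfo.FileVersion"
  register: build_number_result

- name: Ensure the minimum build for Docker Windows Containers
  assert:
    that:
      - (build_number_result.stdout | trim) is version('10.0.14393.206', '>=')
    fail_msg: "Build {{ build_number_result.stdout | trim }} is too old - 10.0.14393.206 or later is required"
```

Failing fast here saves you from the hard-to-diagnose situation described above, where everything works except the final docker run.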

If Docker on Windows with Docker Windows Containers is fully configured, you should see something like this (which definitely means Docker is running perfectly fine on your Windows box!):

TASK [Docker is ready on your Box and waiting for your Containers :)] **********
ok: [] => {
    "msg": [
        "        Dotnet-bot: Welcome to using .NET Core!", 
        "    __________________", 
        "                      \\", 
        "                       \\", 
        "                          ....", 
        "                          ....'", 
        "                           ....", 
        "                        ..........", 
        "                    .............'..'..", 
        "                 ................'..'.....", 
        "               .......'..........'..'..'....", 
        "              ........'..........'..'..'.....", 
        "             .'....'..'..........'..'.......'.", 
        "             .'..................'...   ......", 
        "             .  ......'.........         .....", 
        "             .                           ......", 
        "            ..    .            ..        ......", 
        "           ....       .                 .......", 
        "           ......  .......          ............", 
        "            ................  ......................", 
        "            ........................'................", 
        "           ......................'..'......    .......", 
        "        .........................'..'.....       .......", 
        "     ........    ..'.............'..'....      ..........", 
        "   ..'..'...      ...............'.......      ..........", 
        "  ...'......     ...... ..........  ......         .......", 
        " ...........   .......              ........        ......", 
        ".......        '...'.'.              '.'.'.'         ....", 
        ".......       .....'..               ..'.....", 
        "   ..       ..........               ..'........", 
        "          ............               ..............", 
        "         .............               '..............", 
        "        ...........'..              .'.'............", 
        "       ...............              .'.'.............", 
        "      .............'..               ..'..'...........", 
        "      ...............                 .'..............", 
        "       .........                        ..............", 
        "        .....", 
        "Platform: .NET Core 1.0", 
        "OS: Microsoft Windows 10.0.14393 ", 

Step 2 - How to run a simple Spring Boot App inside a Docker Windows Container with Ansible (step2-single-spring-boot-app)

Everything needed here is inside step2-single-spring-boot-app. Be sure to have cloned and (Maven-)built the example simple Spring Boot app weatherbackend. Let´s cd into step2-single-spring-boot-app and run the playbook:

ansible-playbook -i hostsfile ansible-windows-docker-springboot.yml --extra-vars "host=ansible-windows-docker-springboot-dev app_name=weatherbackend jar_input_path=../../cxf-spring-cloud-netflix-docker/weatherbackend/target/weatherbackend-0.0.1-SNAPSHOT.jar"

This should run a single Spring Boot app inside a Docker Windows Container on your Windows box.


Step 3 - How to scale multiple Spring Boot Apps inside Docker Windows Containers with Ansible, docker-compose and Spring Cloud Netflix (step3-multiple-spring-boot-apps-docker-compose)

Everything needed here is inside step3-multiple-spring-boot-apps-docker-compose. Be sure to have cloned and (Maven-)built the complete Spring Cloud example apps cxf-spring-cloud-netflix-docker. Let´s cd into step3-multiple-spring-boot-apps-docker-compose and run the playbook:

ansible-playbook -i hostsfile ansible-windows-docker-springboot.yml --extra-vars "host=ansible-windows-docker-springboot-dev"

This will fire up multiple containers running Spring Boot apps inside Docker Windows Containers on your Windows box. They leverage the power of Spring Cloud Netflix, with Zuul as proxy and Eureka as service registry (which dynamically tells Zuul which apps to route).

But with docker-compose you are now able to easily fire up your whole application with one command (docker-compose up), and deployment with Ansible also gets much faster (all the Docker images are built at once and the containers started in parallel). Additionally, it is now possible to easily scale your containers. If you want to scale the weatherbackend from 1 to 5 containers, for example, just run the following inside c:\spring-boot:

docker-compose scale weatherbackend=5
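For reference, scaling like this presupposes that the weatherbackend service doesn´t bind a fixed host port (otherwise the replicas would collide). A minimal sketch of such a service definition in the docker-compose.yml - the image name and port are assumptions for illustration, not the repository´s literal file:

```yaml
# Illustrative sketch of a scalable compose service (names assumed)
version: '2'
services:
  weatherbackend:
    image: weatherbackend   # built on the springboot-oraclejre-nanoserver base image
    ports:
      - "8090"              # container port only - Docker picks a free host port per replica
```

Leaving the host-port side unspecified is what lets all five replicas run side by side on one Docker host.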

A few seconds later (depending on the power of your machine), you should be able to see them all in Eureka:


Best practices

Using Powershell on Host to Connect to Container

docker ps -a

Look up your container´s id, then do:

docker exec -ti YourContainerIdHere powershell

Check if your Spring Boot app is running inside the Container

iwr http://localhost:8080/swagger-ui.html -UseBasicParsing

Set Proxy with Ansible, if you have a corporate firewall


  - name: Set Proxy for docker pull to work (http)
    win_environment:
      state: present
      name: HTTP_PROXY
      value: http://username:password@proxyhost:port/
      level: machine

  - name: Set Proxy for docker pull to work (https)
    win_environment:
      state: present
      name: HTTPS_PROXY
      value: http://username:password@proxyhost:port/
      level: machine

Known Issues

If something doesn´t work as expected, see this guide.

This command in particular is useful for checking whether something isn´t working as expected:

Invoke-WebRequest -UseBasicParsing | Invoke-Expression

And networking seems to be a really tricky part with all this non-localhost, Hyper-V network stuff ...


Good overview: windows-docker-network-architecture

Useful commands

Show Docker Networks

docker network ls

Inspect one of these networks

docker network inspect networkNameHere

A good (i.e. working) starting configuration for the Windows Docker network shows something like this:
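For illustration, a healthy nat network shows a populated IPAM section in the docker network inspect output, roughly like this (the subnet and gateway values below are examples, not prescribed values):

```json
"IPAM": {
    "Driver": "windows",
    "Options": null,
    "Config": [
        {
            "Subnet": "172.26.160.0/20",
            "Gateway": "172.26.160.1"
        }
    ]
}
```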


If the "IPAM" section shows an empty Subnet & Gateway, you may have the problem that your NAT won´t work and you can´t connect to your Docker containers from the Windows Docker host itself (see the Caveats and Gotchas section).

Forwarding localhost to Windows Containers doesn´t work as expected

On Windows it isn´t possible to do what you know from Linux: run a docker run -d -p 80:80 microsoft/iis and go to http://localhost - that sadly won´t work! But before I hear you scream "Hey, what is that -p 80:80 thingy there for, if such a simple thing isn´t working?!": if you come from outside the Windows Docker host machine and try its IP, it will work - so everything works, except for your localhost tests :D

But this is only temporary --> the Windows Docker team is on it, and the fix will be released soon as a Windows update - see GitHub issue 458.

Network docs

Helpful knowledge! Docker Windows networks work slightly differently from Linux ones (localhost!)


Microsoft & Docker Inc docs

Windows Containers Documentation

Configure Docker on Windows

Newest Insider builds

Good resources

Walkthrough Windows Docker Containers

Video: John Starks’ black belt session about Windows Server & Docker at DockerCon ‘16

Install docker-compose on Windows Server 2016:

If service discovery doesn´t work reliably:

Docker Container Orchestration with Linux & Windows mixed OS setup

Example steps showing how to provision and run Spring Boot Apps with Docker Swarm & Docker in mixed mode on Linux AND Windows (Docker Windows Containers!)

This is a follow-up to the blog post Scaling Spring Boot Apps on Docker Windows Containers with Ansible: A Complete Guide incl. Spring Cloud Netflix and Docker Compose. There are some corresponding follow-up blog posts available:

Why more?

We went quite far with that setup - and broke down most of the boundaries in our heads about what´s possible with Windows. But there´s one step left: leaving the single machine our services run on and going one step further to a multi-machine setup, incl. blue-green / zero-downtime deployments and a kind of "bring-my-whole-app-up" (regardless of which server it is running on).

Kubernetes or Docker Swarm?

Everything seems to point to Kubernetes - the biggest media share, the most Google searches, the most blog posts and so on. But there´s one thing that brings me onto the Docker Swarm path today: the state of Docker Windows Container support in the currently implemented feature set. As of Kubernetes 1.6 and Windows Server 2016 (which is capable of running Windows Server Containers), there´s basic support for Docker Windows Containers in Kubernetes - with two main limitations:

  • The network subsystem HNS isn´t really Kubernetes-ready - so you have to put routing tables together manually
  • Only one Docker container per pod is supported right now

Both things mean that you literally can´t leverage the benefits of Kubernetes as a container orchestration tool with Docker Windows Containers right now. Things might change soon, though, when Microsoft releases its version 1709 of Windows Server 2016 and Kubernetes 1.8 goes live. But neither is here yet, so for now we go with the competitor Docker Swarm - which should also be a good starting point - and we´ll switch to Kubernetes later.

Step 4 - A Multi-machine Windows- & Linux- mixed OS Vagrant setup for Docker Swarm (step4-windows-linux-multimachine-vagrant-docker-swarm-setup)

There are basically two options to achieve a completely comprehensible setup: run more than one virtual machine on your local machine, or go into the cloud. To decide which way to go, I had to rethink what I wanted to show with this project. My goal is to show a setup of a Docker orchestration tool that scales Docker containers on both Windows and Linux - without messing with the specialities of one of the many cloud providers. Not to mention the financial perspective. So for the first setup, I wanted to go with a few virtual machines that run on my laptop.

As I´ve really come to love Vagrant as a tool to handle my virtual machines - why not do it with Vagrant again? And thanks to a hint from a colleague of mine, I found the Vagrant multi-machine docs.

Inside the step0-packer-windows-vagrantbox directory, start the build for another Windows box (one that does not provide a provider config, which wouldn´t work within a Vagrant multi-machine setup) with this command:

packer build -var iso_url=14393.0.161119-1705.RS1_REFRESH_SERVER_EVAL_X64FRE_EN-US.ISO -var iso_checksum=70721288bbcdfe3239d8f8c0fae55f1f -var template_url=vagrantfile-windows_2016-multimachine.template -var box_output_prefix=windows_2016_docker_multimachine windows_server_2016_docker.json

Add new Windows 2016 Vagrant box:

vagrant box add --name windows_2016_multimachine

Now switch over to the step4-windows-linux-multimachine-vagrant-docker-swarm-setup directory. Here´s the Vagrantfile defining our local cloud infrastructure. It defines four machines to show the many possible variants of a hybrid Docker Swarm containing Windows and Linux boxes: manager nodes as both Windows and Linux machines, and worker nodes, also as both Windows and Linux machines:

  • masterlinux01
  • workerlinux01
  • masterwindows01
  • workerwindows01


Within a Vagrant multi-machine setup, you define your separate machines with the config.vm.define keyword. Inside those define blocks we simply configure each individual machine. Let´s have a look at the workerlinux box:

    # One Worker Node with Linux
    config.vm.define "workerlinux" do |workerlinux|
        workerlinux.vm.box = "ubuntu/trusty64"
        workerlinux.vm.hostname = "workerlinux01"
        workerlinux.ssh.insert_key = false
        # Forwarding the port for Ansible explicitly, to not run into Vagrant´s
        # 'Port Collisions and Correction', which would lead to problems with Ansible later
        workerlinux.vm.network "forwarded_port", guest: 22, host: 2232, host_ip: "", id: "ssh"

        # We need to configure a private network, so that our machines
        # are able to talk to one another
        workerlinux.vm.network "private_network", ip: ""

        workerlinux.vm.provider :virtualbox do |virtualbox|
            virtualbox.name = "WorkerLinuxUbuntu"
            virtualbox.gui = true
            virtualbox.memory = 2048
            virtualbox.cpus = 2
            virtualbox.customize ["modifyvm", :id, "--ioapic", "on"]
            virtualbox.customize ["modifyvm", :id, "--vram", "16"]
        end
    end

The first configuration statements are the usual ones, like configuring the Vagrant box to use or the VM´s hostname. But the forwarded-port configuration is made explicit, because we need to rely on the exact port later in our Ansible scripts - and this wouldn´t be possible with Vagrant´s default port-correction feature: because you can´t use a port on your host machine more than once, Vagrant would automatically set it to a random value, and we wouldn´t be able to access our boxes later with Ansible.

To define and override the SSH port of a preconfigured Vagrant box, we need to know the id that is used to define it inside the base box. On Linux boxes this is ssh - and on Windows it is winrm-ssl.

Host-only Network configuration

The next tricky part is the network configuration between the Vagrant boxes. As they need to talk to each other, and also to the host, so-called host-only networking should be the way to go here (there´s a really good overview in this post, sorry, German only). This is easily established using Vagrant´s private-network configuration.

And as we want to access our boxes with a static IP, we leverage the Vagrant configuration around Vagrant private networking. All that´s needed here, is to have such a line inside every Vagrant box definition of our multi-machine setup: "private_network", ip: ""

The same goes for the Windows boxes: Vagrant will tell VirtualBox to create a new separate network (mostly vboxnet1 or similar), put a second virtual network device into every box and assign it the static IP we configured in our Vagrantfile. That´s pretty much everything - except for Windows Server :)

Network configuration between Vagrant Boxes and the Host

As Ansible is a really nice tool that lets you use the same host in multiple groups - and merges the group_vars from all of those groups for that one host - it isn´t a good idea to use a structure like the following in your inventory file:




And then try to use different corresponding group_vars entries... because you won´t know which variables will be present!


Different hostnames


But what if we were able to change the /etc/hosts on our host machine with every vagrant up? That´s possible with the vagrant-hostmanager plugin. Install it with:

vagrant plugin install vagrant-hostmanager

Current workaround: map the machines´ static IPs to the hostnames masterlinux01, workerlinux01, masterwindows01 and workerwindows01 in your local hosts file. Then do a:

vagrant up

Now we´re ready to play. And never mind if you want to take a break or your notebook is running hot - just type vagrant halt, and the whole zoo of machines will be stopped for you :)

Windows Server firewall blocks Ping & later needed Container network traffic

As you may have noticed, there´s an extra preparation step for Windows Server 2016. Because we want our machines to be accessible from one another, we have to allow the most basic command everybody starts with: ping. That one is blocked by the Windows firewall by default, and we have to open it up with the following Powershell command - obviously wrapped inside an Ansible task:

  - name: Allow Ping requests on Windows nodes (which is by default disabled in Windows Server 2016)
    win_shell: "netsh advfirewall firewall add rule name='ICMP Allow incoming V4 echo request' protocol=icmpv4:8,any dir=in action=allow"
    when: inventory_hostname in groups['workerwindows']

Additionally - and this part is mentioned pretty much at the end of the Docker docs - if you want to fire up a Swarm, the routing network established later needs access to several ports. As the docs state: "you need to have the following ports open between the swarm nodes before you enable swarm mode". So we need to do that before even initializing our Swarm!
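According to the Docker docs, these are TCP port 2377 (cluster management), TCP/UDP port 7946 (node communication) and UDP port 4789 (overlay network traffic). On the Windows nodes this could be wrapped into an Ansible task similar to the ping rule above (an illustrative sketch, not necessarily the playbook´s literal code):

```yaml
# Illustrative sketch: open the Docker Swarm ports in the Windows firewall
- name: Allow Docker Swarm traffic on Windows nodes
  win_shell: "netsh advfirewall firewall add rule name='Docker Swarm {{ item.name }}' protocol={{ item.proto }} localport={{ item.port }} dir=in action=allow"
  with_items:
    - { name: 'cluster management', proto: 'tcp', port: '2377' }
    - { name: 'node communication (tcp)', proto: 'tcp', port: '7946' }
    - { name: 'node communication (udp)', proto: 'udp', port: '7946' }
    - { name: 'overlay network', proto: 'udp', port: '4789' }
  when: inventory_hostname in groups['windows']
```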

Prepare Docker engines on all Nodes

Working Ansible SSH config: you´ll maybe need to install sshpass (note that a plain brew install sshpass won´t work on macOS - you have to install it from a third-party formula).


Now run the following Ansible playbook to prepare your Nodes with a running Docker Engine:

ansible-playbook -i hostsfile prepare-docker-nodes.yml

Allowing http-based local Docker Registries

But as the limitations section states, we can´t follow our old approach of building our Docker images on the Docker host anymore - because that way we´d be forced to build those images on all the Swarm´s nodes again and again, which leads to heavy overhead. We should therefore build the application´s Docker image only once and push it into a local Docker registry. But before that, we need such a local registry. This topic is also covered in the Docker docs.

The easiest start here is to use a plain http registry. It´s no problem to run this setup in isolated environments, such as your on-premise servers. Just be sure to update to https with TLS certificates, if you´re going into the cloud or if you want to provide your registry to other users outside this Swarm.

So let´s go. First of all, we need to allow the Docker engines on all our hosts to interact with an http-only Docker registry. Therefore we create a daemon.json file with the following content on all of our nodes:

  {
    "insecure-registries" : [""]
  }

The file has to reside in /etc/docker/daemon.json on Linux and in C:\ProgramData\docker\config\daemon.json on Windows.
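Distributing the file to both OS flavours can be sketched with two Ansible tasks like these (illustrative - the paths come from the text above, everything else is an assumption):

```yaml
# Illustrative sketch, not necessarily the repository´s literal tasks
- name: Allow http registry access on the Linux nodes
  copy:
    src: daemon.json
    dest: /etc/docker/daemon.json
  become: yes
  when: inventory_hostname in groups['linux']

- name: Allow http registry access on the Windows nodes
  win_copy:
    src: daemon.json
    dest: C:\ProgramData\docker\config\daemon.json
  when: inventory_hostname in groups['windows']
```

After changing daemon.json, the Docker service has to be restarted on each node for the setting to take effect.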

Initializing a Docker Swarm

ansible-playbook -i hostsfile initialize-docker-swarm.yml
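The heart of this playbook is the Swarm initialization on the designated master. It could be sketched like this (illustrative - the masterwindows_ip variable is an assumption for the example):

```yaml
# Illustrative sketch: initialize the Swarm on the Windows master node
- name: Initialize Docker Swarm on the Windows master
  win_shell: "docker swarm init --advertise-addr={{ masterwindows_ip }} --listen-addr={{ masterwindows_ip }}:2377"
  ignore_errors: yes
  when: inventory_hostname in groups['masterwindows']
```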

Obtaining the worker join-token from the Windows master node isn´t a big problem with Ansible:

  - name: Obtain worker join-token from Windows master node
    win_shell: "docker swarm join-token worker -q"
    register: token_result
    ignore_errors: yes
    when: inventory_hostname in groups['masterwindows']

But syncing the join-token over to the other hosts is a bit tricky, since variables and facts are defined per host in Ansible. But there´s help! We only need the docs´ info about Magic Variables and How To Access Information About Other Hosts. First we access the return variable token_result from the Windows master host via {{ hostvars['masterwindows01']['token_result'] }} - remember to use the exact host name here, the group name won´t be enough. The second step is extracting the join-token out of the result variable with the help of the set_fact module. The following two Ansible tasks demonstrate the solution:

  - name: Syncing the worker join-token result to the other hosts
    set_fact:
      token_result_host_sync: "{{ hostvars['masterwindows01']['token_result'] }}"

  - name: Extracting and saving worker join-token in variable for joining other nodes later
    set_fact:
      worker_jointoken: "{{token_result_host_sync.stdout_lines}}"
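With the worker_jointoken fact available on every host, joining the remaining nodes could then be sketched like this (illustrative - the group names mirror the Vagrant machine names, and the masterwindows_ip variable is an assumption):

```yaml
# Illustrative sketch: join the worker nodes into the Swarm
- name: Join Linux worker nodes to the Swarm
  shell: "docker swarm join --token {{ worker_jointoken }} {{ masterwindows_ip }}:2377"
  ignore_errors: yes
  when: inventory_hostname in groups['workerlinux']

- name: Join Windows worker nodes to the Swarm
  win_shell: "docker swarm join --token {{ worker_jointoken }} {{ masterwindows_ip }}:2377"
  ignore_errors: yes
  when: inventory_hostname in groups['workerwindows']
```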

Providing a Docker Registry

As stated in the previous section, we configured every Docker engine on every Swarm node to enable http-only Docker registry access. Now let´s start our Docker Swarm registry service as mentioned in the docs. BUT: currently the docs are wrong - we´ve got it fixed already here:

  - name: Specify to run Docker Registry on Linux Manager node
    shell: "docker node update --label-add registry=true masterlinux01"
    ignore_errors: yes
    when: inventory_hostname == "masterlinux01"

  - name: Create directory for later volume bind-mount into the Docker Registry service on Linux Manager node, if it doesn´t exist
    file:
      path: /mnt/registry
      state: directory
      mode: 0755
    when: inventory_hostname == "masterlinux01"

  - name: Run Docker Registry on Linux Manager node as Docker Swarm service
    shell: "docker service create --name swarm-registry --label registry=true --mount type=bind,src=/mnt/registry,dst=/var/lib/registry -e REGISTRY_HTTP_ADDR= -p 5000:5000 --replicas 1 registry:2"
    ignore_errors: yes
    when: inventory_hostname == "masterlinux01"

As the docs propose a bind-mount, we have to add type=bind to our --mount configuration parameter. AND we have to create the directory /mnt/registry beforehand, as the docs section "Give a service access to volumes or bind mounts" states. But it seems that not all the docs are up to date here.

Visualize the Swarm

Docker´s own Swarm visualizer doesn´t look that neat, so I read a comparison with Portainer: it seems to be way prettier! And it says in its GitHub readme that it "can be deployed as Linux container or a Windows native container". So let´s integrate it into our setup:

We therefore integrated Portainer into the initialization process of our Swarm:

- name: Create directory for later volume mount into Portainer service on Linux Manager node, if it doesn´t exist
  file:
    path: /mnt/portainer
    state: directory
    mode: 0755
  when: inventory_hostname in groups['linux']

- name: Run Portainer Docker and Docker Swarm Visualizer on Linux Manager node as Swarm service
  shell: "docker service create --name portainer --publish 9000:9000 --constraint 'node.role == manager' --constraint 'node.labels.os==linux' --mount type=bind,src=/var/run/docker.sock,dst=/var/run/docker.sock --mount type=bind,src=/mnt/portainer,dst=/data portainer/portainer -H unix:///var/run/docker.sock"
  ignore_errors: yes
  when: inventory_hostname == "masterlinux01"

This will deploy a Portainer instance onto our Linux manager node (masterlinux01, since we only have one Linux manager node) and connect it directly to the Swarm.

But there´s one thing that could lead to frustration: use a current browser to access the Portainer UI from your Windows boxes! It doesn´t work in the pre-installed IE! Just head to port 9000 on the Linux manager node´s IP.


Checking swarm status

Just do a docker info on one (or all) of the boxes.

Now that we´ve also added Docker labels to each of our nodes (like this: docker node update --label-add os=linux masterlinux01), so that we can differentiate the OS-dependent services later on, and created a Docker Swarm overlay network with docker network create --driver=overlay mixed_swarm_net, our Ansible playbook should finally give some output like this:

TASK [Swarm initialized...] *****************************************************************************************************************************
skipping: [workerwindows01]
skipping: [masterlinux01]
ok: [masterwindows01] => {
    "msg": [
        "The status of the Swarm now is:", 
            "ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS", 
            "ar2mci5utfwov44x42fihgmtf     masterlinux01       Ready               Active              Reachable", 
            "qdcarj7mzjl37txmsijgrvxt4     workerwindows01     Ready               Active              ", 
            "sqirk9itzxlytf5blteg9no7w *   masterwindows01     Ready               Active              Leader", 
            "vz8ruili76n8fslo2vo35go3b     workerlinux01       Ready               Active              "
skipping: [workerlinux01]

This means that our Docker Swarm cluster is ready for service deployment!

Step 5 - Deploy multiple Spring Boot Apps on mixed-OS Docker Windows- & Linux Swarm with Ansible (step5-deploy-multiple-spring-boot-apps-to-mixed-os-docker-swarm)

As Microsoft states in the Swarm docs, Docker Swarm services can easily be deployed to the Swarm with the docker service create command and afterwards scaled with docker service scale. There´s a huge number of configuration parameters you can use with docker service create.

BUT: that approach reminds us of those first days with Docker, before Docker Compose existed. So it would be really nice to have something like Compose for our Swarm deployment as well. And it´s really that simple - just use Compose together with Docker Stack Deploy :):

"Docker Compose and Docker Swarm aim to have full integration, meaning you can point a Compose app at a Swarm cluster and have it all just work as if you were using a single Docker host."

Back to the concrete docker-compose.yml file. Let´s use the newest 3.3 version here, so that we can leverage the most out of Swarm´s functionality, which is broadened with each Compose (file) version.
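A minimal skeleton of such a version 3.3 Compose file could look like this (a sketch - the weatherbackend service, its port and the registry_host variable are taken from this setup, the replica count is illustrative):

```yaml
version: "3.3"

services:
  weatherbackend:
    image: "{{registry_host}}/weatherbackend:latest"
    ports:
      - "8090:8090"
    # everything Swarm-specific lives under the deploy key,
    # which plain docker-compose simply ignores
    deploy:
      replicas: 2
      placement:
        constraints: [node.labels.os==linux]
```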

The Windows Server 2016 Issue

I really like to have completely comprehensible setups! The problem here is that our setup based on Windows Server 2016 isn´t going to support access to our deployed applications in the end. Why is that? It´s all about the unsupported routing mesh!

You may say: hey, there´s a workaround - Docker Swarm´s publish-port mode! It is the documented alternative to the routing mesh. With a Docker Stack deploy, this could look like the following in the docker-stack.yml:

      endpoint_mode: dnsrr

But together with setting the endpoint_mode to DNS round-robin (DNSRR) as described in the docs, we also need to alter the published ports settings. We need to set mode: host, which is only possible with the long syntax of the Docker Stack / Compose file format:

    ports:
      - target: {{ service.port }}
        published: {{ service.port }}
        protocol: tcp
        mode: host

Otherwise the Docker engine will tell us the following error: Error response from daemon: rpc error: code = 3 desc = EndpointSpec: port published with ingress mode can't be used with dnsrr mode
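Putting both settings together, a complete service stanza would then look roughly like this (a sketch using the weatherbackend service name and port from this setup):

```yaml
  weatherbackend:
    image: "{{registry_host}}/weatherbackend:latest"
    ports:
      # long syntax is required as soon as mode: host is used
      - target: 8090
        published: 8090
        protocol: tcp
        mode: host
    deploy:
      # DNS round-robin instead of the (unsupported) routing mesh VIP
      endpoint_mode: dnsrr
```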

BUT: the problem with this configuration is that Traefik won´t support it in the end! It really took me days to find out that using Windows Server 2016 together with Docker Swarm´s publish-port mode and endpoint_mode: dnsrr isn´t going to work with a load balancer like Traefik. And we´ll need one! The following paragraphs will show you why.

The Alternative: Windows Server 1709 or 1803 (Semi-annual Channel)

Windows Server 2016 is the LTS version of the Windows Server family, which will be supported with smaller updates - but without the real bleeding edge stuff. And Docker network routing mesh support was first introduced in Windows Server 1709:

Also see my comments there:

Hi Kallie, to use the new features in production at the customer, we need access to the new 1709 build of Windows Server 2016. As this post states, the 1709 build will only be available in the so called “Semi-annual channel”, which is only available for customers who have the “Software Assurance” package.

To provide a recommendation for the customer that is based on a proven and fully automated “infrastructure as code” example, I successfully built a GitHub repo with EVERY needed step: beginning with the download of an evaluation copy of Windows Server 2016, going over to a setup with VirtualBox/Vagrant, provisioning with Ansible and finally running and scaling Spring Boot apps Dockerized on the Windows Server.

Now the next step is Docker orchestration with Swarm (and later Kubernetes). But with the current version in the evalcenter, the mentioned Docker network routing mesh support isn´t available to us. Is there any chance to update this version in the evalcenter? I know there´s the Insider program, but it doesn´t really help me to have a fully trustable setup where I can prove to everybody that everything will work.


--> only available in Windows Server 1709:

--> only available in the "Semi-annual channel", which is only available together with the "Software Assurance" package you have to buy separately from the Server licence

--> only alternative: Windows Insider program:

--> but this isn´t a good start with customers!

Building a Windows Server 1709 or 1803 Vagrant Box with Packer

Now that we need to use Windows Server 1709 or 1803 as a basis, we have to build a new Vagrant Box with Packer. The Packer configuration file windows_server_2016_docker.json is flexible enough to support all three Windows Server variants: 2016, 1709 or 1803.

So let´s build our Windows Server 1803 Vagrant Box. All you need is an ISO like en_windows_server_version_1803_x64_dvd_12063476.iso incl. a matching MD5 checksum, which should be available through the "Software Assurance" package or an MSDN subscription (if you have one) - or at least via the Windows Insider program. If you have the ISO and MD5 ready, fire up the Packer build:

packer build -var iso_url=en_windows_server_version_1803_x64_dvd_12063476.iso -var iso_checksum=e34b375e0b9438d72e6305f36b125406 -var template_url=vagrantfile-windows_1803-multimachine.template -var box_output_prefix=windows_1803_docker_multimachine windows_server_2016_docker.json

If that is finished, add the new box to your Vagrant installation:

vagrant box add --name windows_1803_docker_multimachine

Be sure to have the latest updates installed! For me, it only worked after the November 2017 cumulative update package containing KB4048955. Otherwise the ingress networking mode (deploy: endpoint_mode: vip) DOESN´T WORK!

Switch the base Docker image

There´s another difference to the standard Windows Server 2016 LTS Docker images: the nanoserver and windowsservercore images are much smaller! BUT: the nanoserver now misses the Powershell! Well, that´s kind of weird - but it´s kind of like in the Linux world, where you don´t always have a bash installed, but only sh... But there´s help: Microsoft provides a nanoserver with Powershell on top, right on Dockerhub. To pull the correct nanoserver with Powershell, just use:

docker pull microsoft/powershell:nanoserver

But as we use the latest nanoserver:1709 image, we also have to use the suitable 1709 image for Powershell: microsoft/powershell:6.0.0-rc-nanoserver-1709 - kind of weird again that it´s only a release candidate right now, but hey. :)

Now you also have to keep in mind that you have to use pwsh instead of powershell to enter the Powershell inside a container:

docker exec -it ContainerID pwsh
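A Dockerfile for a Spring Boot app on top of this base image could then look roughly like the following (a sketch - the jar name, the COPY path and the presence of a Java runtime inside the image are assumptions, only the base image tag is from the text above):

```dockerfile
# Sketch: nanoserver 1709 with Powershell as base for a Spring Boot app.
# A JRE would have to be added to the image as well - omitted here for brevity.
FROM microsoft/powershell:6.0.0-rc-nanoserver-1709

COPY target/weatherbackend-0.0.1-SNAPSHOT.jar app.jar

# pwsh is the shell inside this image - a plain powershell doesn´t exist here
SHELL ["pwsh", "-Command"]

ENTRYPOINT ["java", "-jar", "app.jar"]
```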

Main Playbook build-and-deploy-apps-2-swarm.yml structure

cd into step5-deploy-multiple-spring-boot-apps-to-mixed-os-docker-swarm and run:

ansible-playbook -i hostsfile build-and-deploy-apps-2-swarm.yml
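The hostsfile groups the machines by OS, since the playbook later distinguishes groups['linux'] and groups['windows']. It could look roughly like this (a sketch - the group and host names are from this setup, the connection variables are assumptions):

```ini
[linux]
masterlinux01
workerlinux01

[windows]
masterwindows01
workerwindows01

[linux:vars]
ansible_connection=ssh

[windows:vars]
ansible_connection=winrm
ansible_winrm_server_cert_validation=ignore
```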

The playbook has 4 main steps:

  - name: 1. Build Linux Apps Docker images on Linux manager node and push to Docker Swarm registry
    include_tasks: prepare-docker-images-linux.yml
    with_items: "{{ }}"
    when: inventory_hostname == "masterlinux01" and item.deploy_target == 'linux'
    tags: buildapps

  - name: 2. Build Windows Apps Docker images on Windows manager node and push to Docker Swarm registry
    include_tasks: prepare-docker-images-windows.yml
    with_items: "{{ }}"
    when: inventory_hostname == "masterwindows01" and item.deploy_target == 'windows'
    tags: buildapps

  - name: 3. Open all published ports of every app on every node for later access from outside the Swarm
    include_tasks: prepare-firewall-app-access.yml
    with_items: "{{ }}"
    tags: firewall

  - name: 4. Deploy the Stack to the Swarm on Windows Manager node
    include_tasks: deploy-services-swarm.yml
    when: inventory_hostname == "masterwindows01"
    tags: deploy

Build Docker images of all Spring Boot apps and push them to Docker Swarm registry

First we need to build the Docker images of all Spring Boot apps (according to which OS they should run on) and push them to the Docker Swarm registry. This is done by prepare-docker-images-linux.yml and prepare-docker-images-windows.yml, which push each application´s new Docker image into our Swarm registry at the end:

  - name: Push the Docker Container image to the Swarm Registry
    shell: "docker push {{registry_host}}/{{}}:latest"
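The push is preceded by a build and tag step in those same files; roughly like this (a sketch - the task name, the chdir path and the concrete app name are assumptions, the registry_host variable and push command are from the playbook above):

```yaml
- name: Build the Docker image of the Spring Boot app, tagged for the Swarm Registry
  shell: "docker build . --tag {{registry_host}}/weatherbackend:latest"
  args:
    chdir: "/srv/docker/weatherbackend"   # placeholder path to the app´s Dockerfile

- name: Push the Docker Container image to the Swarm Registry
  shell: "docker push {{registry_host}}/weatherbackend:latest"
```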

Open all Apps´ ports on every host!


"You must also open the published port between the swarm nodes and any external resources, such as an external load balancer, that require access to the port."

So we need to open every port of every application on every host! For that we use prepare-firewall-app-access.yml, which opens all needed ports in our hybrid Swarm:

  - name: Preparing to open...
    debug:
      msg: "'{{ }}' with port '{{ item.port }}'"

  - name: Open the apps published port on Linux node for later access from outside the Swarm
    ufw:
      rule: allow
      port: "{{ item.port }}"
      proto: tcp
      comment: "{{ }}'s port {{ item.port }}"
    become: true
    when: inventory_hostname in groups['linux']

  - name: Open the apps published port on Windows node for later access from outside the Swarm
    win_firewall_rule:
      name: "{{ }}'s port {{ item.port }}"
      localport: "{{ item.port }}"
      action: allow
      direction: in
      protocol: tcp
      state: present
      enabled: yes
    when: inventory_hostname in groups['windows']

Deploy the Stack to the Swarm


"A stack is a group of interrelated services that share dependencies, and can be orchestrated and scaled together. A single stack is capable of defining and coordinating the functionality of an entire application."

Think of a Stack as what Compose is for plain Docker: it groups multiple Docker Swarm services together with the help of a docker-stack.yml, which looks like a docker-compose.yml file and uses nearly the same syntax (Stack adds the deploy section on top of Compose). An example docker-stack.yml looks like:
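The following is a sketch of what such a docker-stack.yml could look like for this mixed-OS setup - it combines service names, ports, placement constraints and the network name used throughout this repository, but the replica counts and which app lands on which OS are illustrative:

```yaml
version: "3.3"

services:
  weatherbackend:
    image: "{{registry_host}}/weatherbackend:latest"
    ports:
      - target: 8090
        published: 8090
        protocol: tcp
    deploy:
      replicas: 2
      placement:
        constraints: [node.labels.os==linux]
    networks:
      - mixed_swarm_net

  weatherservice:
    image: "{{registry_host}}/weatherservice:latest"
    ports:
      - target: 8095
        published: 8095
        protocol: tcp
    deploy:
      replicas: 1
      placement:
        constraints: [node.labels.os==windows]
    networks:
      - mixed_swarm_net

networks:
  # the overlay network was created beforehand, so it´s external to the stack
  mixed_swarm_net:
    external: true
```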


Don´t try to search for a "Docker Stack command reference" - just head over to the Docker Compose docs and you should find what you need. Because Docker Swarm makes use of Docker Compose files, the Swarm-only capabilities of Stack are just one section (deploy) inside the Compose docs.

We should see our applications in Portainer now:


Accessing Spring Boot applications deployed in the Swarm

For more in-depth information on how Docker Swarm works, have a look at the official docs.


To find out where your app is accessible in the Swarm, there are some commands you can use. On a manager node, do a

docker service ls

to see all the deployed Docker Swarm services. It should output something like:


Now pick one of your Services to inspect and do a

docker service ps clearsky_weatherbackend

This should show us on which node the Swarm has scheduled the Docker Swarm task containing our app´s container:


It´s workerlinux01 in this example.

If you´re unsure which port is mapped on the Docker node workerlinux01, you could run a:

docker service inspect --pretty clearsky_weatherbackend

This should give you more insights into this app, including the mapped Port 30001:


With all this information you can check out your first Docker Swarm deployed app. Just log into workerlinux01 and call your app, e.g. with a curl http://localhost:30001/swagger-ui.html - as the weatherbackend is using Springfox together with Swagger to show all of its REST endpoints:


As Windows doesn´t support localhost loopback, we have to add one more step to access an app deployed into a Windows-native Docker container: we need to know the container´s IP:

BUT: we´re not in the Docker Engine´s standard mode anymore, we´re in Swarm mode. So the ports aren´t mapped to the host we define, but to the Docker Swarm as a whole. How does this work? Through the Docker Swarm routing mesh:

"When you publish a service port, the swarm routing mesh makes the service accessible at the target port on every node regardless if there is a task for the service running on the node."


Test ingress networking

To check whether a Docker Swarm service with ingress networking mode is able to run, fire up a test service:

TODO: use a service, that doesn´t need the other following steps

docker service create --name weathertest --network swarmtest --publish 9099:9099 --endpoint-mode vip

DNS to avoid the Host specification of the HTTP-header

Let´s try vagrant-dns Plugin

vagrant plugin install vagrant-dns

Now configure TLD for masterlinux01 in the Vagrant multi-machine setup:

masterlinux.vm.hostname = "masterlinux01"
masterlinux.dns.tld = "test"
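In the multi-machine Vagrantfile these lines sit inside the machine definition block; roughly like this (a sketch, based only on the hostname and tld lines above - the surrounding block structure is the usual Vagrant multi-machine idiom):

```ruby
Vagrant.configure("2") do |config|
  config.vm.define "masterlinux01" do |masterlinux|
    masterlinux.vm.hostname = "masterlinux01"
    # vagrant-dns: makes the box reachable as *.masterlinux01.test
    masterlinux.dns.tld = "test"
  end
end
```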

Now register the vagrant-dns server as a resolver:

vagrant dns --install

Now check with scutil --dns, if the resolver is part of your DNS configuration:


resolver #9
  domain   : test
  nameserver[0] :
  port     : 5300
  flags    : Request A records, Request AAAA records
  reach    : 0x00030002 (Reachable,Local Address,Directly Reachable Address)


This looks good! Now test whether you´re able to reach our Vagrant boxes using our defined domain, by typing e.g. dscacheutil -q host -a name foo.masterwindows01.test:

$:step4-windows-linux-multimachine-vagrant-docker-swarm-setup jonashecht$ dscacheutil -q host -a name foo.masterwindows01.test
name: foo.masterwindows01.test

$:step4-windows-linux-multimachine-vagrant-docker-swarm-setup jonashecht$ dscacheutil -q host -a name foo.masterlinux01.test
name: foo.masterlinux01.test

$:step4-windows-linux-multimachine-vagrant-docker-swarm-setup jonashecht$ dscacheutil -q host -a name bar.workerlinux01.test
name: bar.workerlinux01.test

$:step4-windows-linux-multimachine-vagrant-docker-swarm-setup jonashecht$ dscacheutil -q host -a name foobar.workerwindows01.test
name: foobar.workerwindows01.test

But as the vagrant-dns plugin doesn´t support propagating the host´s DNS resolver to the Vagrant boxes, we soon run into problems - because Traefik can´t route any requests anymore. But luckily we have VirtualBox as the virtualization provider for Vagrant, which supports propagating the host´s DNS resolver to the guest machines. All we have to do is use this suggestion on serverfault, which uses the host´s resolver as a DNS proxy in NAT mode:

# Forward DNS resolver from host (vagrant dns) to box
virtualbox.customize ["modifyvm", :id, "--natdnshostresolver1", "on"]

After we configured that, we can do our well-known vagrant up.

Now we should be able to do this:

curl weatherbockend.masterlinux01.test:80 -v

We´re using port 80 here, because masterlinux01.test directly resolves to the IP of the named box :) And as Traefik is waiting for requests on port 80, this should work.

Or go to your Browser and simply try out all possible urls! Here are a few:


Fixing 'mount -t vboxsf ... No such device' errors because of old VirtualBox additions in VagrantBoxes

vagrant plugin install vagrant-vbguest

Using Traefik to access Spring Boot Apps

Docker Stack deploy for Apps provided by Traefik

If you now access http://localhost:48080/, you should see the Traefik dashboard with all the Services deployed:


Therefore the Vagrantfile has some more port forwardings prepared:

        # Forwarding the Guest to Host ports, so that we can access it easily from outside the VM
        masterlinux.vm.network "forwarded_port", guest: 8080, host: 48081, host_ip: "", id: "traefik_dashboard"
        masterlinux.vm.network "forwarded_port", guest: 80, host: 40081, host_ip: "", id: "traefik"

The Apps are templated over the docker-stack.yml:


{% for service in %}
  {{ }}:
    image: {{registry_host}}/{{ }}
    ports:
      - target: {{ service.port }}
        published: {{ service.port }}
        protocol: tcp
    deploy:
      endpoint_mode: vip
      replicas: {{ service.replicas }}
      placement:
{% if service.deploy_target == 'windows' %}
        constraints: [node.labels.os==windows]
{% else %}
        constraints: [node.labels.os==linux]
{% endif %}
      labels:
        - "traefik.port={{ service.port }}"
        - "{{ swarm_network_name }}"
        - "traefik.backend={{ }}"
        # Use Traefik healthcheck
        # - "traefik.backend.healthcheck.path=/healthcheck"
        - "traefik.frontend.rule=Host:{{ }}.{{ docker_domain }}"
{% endfor %}

Note that traefik.port=YourAppPort must be the same port that your Spring Boot application uses (via server.port=YourAppPort) and that your container exposes. Traefik will then automatically route a request through to the app over the first published port configured:

      - target: 80
        published: 80
        protocol: tcp
        mode: host

Finally the first curls are working:

curl -H http://localhost:40080 -v


As we also added a port forwarding configuration for every app in our Vagrantfile:

        # Open App ports for access from outside the VM
        masterlinux.vm.network "forwarded_port", guest: 8761, host: 8761, host_ip: "", id: "eureka"
        masterlinux.vm.network "forwarded_port", guest: 8090, host: 8090, host_ip: "", id: "weatherbackend"
        masterlinux.vm.network "forwarded_port", guest: 8091, host: 8091, host_ip: "", id: "weatherbockend"
        masterlinux.vm.network "forwarded_port", guest: 8095, host: 8095, host_ip: "", id: "weatherservice"

With that, we should now be able to access every app from our Vagrant/VirtualBox host:


Now we should check whether the containers are able to reach each other. For example, we could try to reach a Windows container from within a Linux container on masterlinux01:

docker exec -it e71 ping weatherservice

Let´s have a look at all containers and services in the network. For that you MUST use the full network name - the id doesn´t give you the full output of everything in the cluster (you also need --verbose to see all data from all nodes!):

docker network inspect --verbose clearsky_mixed_swarm_net

--> Let´s try another base image and switch to microsoft/windowsservercore:1709

Test via Traefik:

curl -H http://localhost:40080 -v

And IT WORKS!!!:


Also, all the example apps (cxf-spring-cloud-netflix-docker) will call each other, if you call the weatherservice with SoapUI for example:


They really use Eureka & Feign to call each other:



General comparison of Docker Container Orchestrators


Windows Server

Windows Server Pre-Release (Insider):

Current state description: --> coming version 1709 of Windows Server 2016 will have better Kubernetes support with no more manual tinkering with routing tables (better HNS)

--> LinuxKit in Hyper-V for Side-by-Side Windows and Linux deployments:

What´s new in 1803:

Docker Swarm

Docker Swarm Windows Docs:

Windows Server 2016 Overlay Networking Support (Windows & Linux mixed mode):

Windows & Linux mixed Video:

docker service create CLI reference:

Docker network routing mesh support in Windows Server 2016 1709: &

Autoscaler for Docker Swarm:


Docker Windows Containers & Kubernetes:

Kubernetes Networking on Windows:

minikube howto:!forum/kubernetes-sig-windows

Spring Application Deployment

Spring Cloud kubernetes

Shutdown hooks

Deployment example with Spring Cloud

Windows JDK Docker images:


Windows without a working routing mesh needs publish-port mode (dnsrr) on its services - and then Traefik has a problem:

Zero downtime deployment
