These containers mount a persistent volume for sites (whose contents change after build and deployment as users generate content) and connect to MySQL, Redis, and Memcached services from the IBM Cloud catalog (rather than self-hosted containers inside the same cluster).
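One way the cluster could consume those catalog services and the persistent volume is sketched below. This is a hedged illustration, not the repository's actual manifests: the secret name, credential URIs, claim name, and storage size are all placeholders.

```shell
# Hypothetical sketch: bind IBM Cloud service credentials into the cluster
# and claim persistent storage for user-generated site files.
# All names and URIs below are placeholders.

# Store the MySQL / Redis / Memcached credentials as a Kubernetes secret
kubectl create secret generic drupal-services \
  --from-literal=MYSQL_URI="mysql://user:pass@host:3306/drupal" \
  --from-literal=REDIS_URI="redis://host:6379" \
  --from-literal=MEMCACHED_HOST="host:11211"

# Claim a shared volume for the sites directory that outlives rebuilds
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: drupal-sites-pvc
spec:
  accessModes: ["ReadWriteMany"]
  resources:
    requests:
      storage: 20Gi
EOF
```

The Drupal pods would then reference the secret via environment variables and mount the claim at the sites files path.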
After deployment, Drupal developers (the end users of the cluster) manage the site lifecycle by delivering configuration or code changes to specific folders (config, code) in this repository. Commits trigger a fresh rebuild and deployment in an IBM Continuous Delivery pipeline. Production data can also be synchronized back to the staging environment using file and data migration scripts.
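The developer workflow above can be sketched as a plain git commit into the config folder; pushing that commit to the tracked branch is what kicks off the pipeline. The repository path, file name, and commit message here are illustrative placeholders, and a throwaway local repository stands in for a clone of this one.

```shell
#!/bin/sh
# Hypothetical example of delivering a configuration change.
# A temporary local repo stands in for a clone of this repository;
# the config file name and its contents are placeholders.
set -e
repo=/tmp/drupal-demo-repo
rm -rf "$repo" && mkdir -p "$repo"
cd "$repo"
git init -q .
git config user.email "dev@example.com"
git config user.name "Drupal Developer"

# Drop an exported Drupal configuration file into the config folder
mkdir -p config
echo "site_name: Example" > config/system.site.yml

git add config/system.site.yml
git commit -q -m "Update site name in staging config"
# In the real repository, `git push` here would trigger the
# Continuous Delivery pipeline to rebuild and redeploy.
```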
There are two separate Drupal installations deployed onto the container cluster: one representing a "staging" environment and one representing a "production" environment. Each has its own dedicated services and volume mounts. A CLI container can run drush or scripts such as transfer-data.sh against those environments to synchronize them.
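Running drush or the synchronization script through the CLI container might look like the following. The pod label, container paths, and drush invocations are assumptions for illustration; only transfer-data.sh is named by this repository.

```shell
# Hypothetical sketch: execute commands inside the CLI container.
# The label selector and Drupal root paths are placeholders.
CLI_POD=$(kubectl get pods -l app=drupal-cli \
  -o jsonpath='{.items[0].metadata.name}')

# Check site status and rebuild caches on the staging installation
kubectl exec "$CLI_POD" -- drush --root=/var/www/html/staging status
kubectl exec "$CLI_POD" -- drush --root=/var/www/html/staging cache-rebuild

# Or pull production data back into staging with the repo's script
kubectl exec "$CLI_POD" -- ./transfer-data.sh
```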
When custom code is checked into this repository, it triggers a Docker image build that pushes the images to a private container registry, where their security is analyzed. The images are then rolled out across the Kubernetes cluster through staging and production gates. Data from production can be synchronized back to staging to keep the environments as close as possible.
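The build-and-gate flow the pipeline automates could be sketched by hand as below. The registry namespace, image tag, and deployment names are placeholders; the actual pipeline stages live in the Continuous Delivery configuration, not in these commands.

```shell
# Hypothetical sketch of the pipeline's build-and-rollout flow.
# Registry namespace, tags, and deployment names are placeholders.

# Build and push; the private registry scans the pushed image
# (IBM Vulnerability Advisor) before it is promoted.
docker build -t us.icr.io/my-namespace/drupal:1.0.1 .
docker push us.icr.io/my-namespace/drupal:1.0.1

# Staging gate: roll the new image out to staging first and verify
kubectl set image deployment/drupal-staging \
  drupal=us.icr.io/my-namespace/drupal:1.0.1
kubectl rollout status deployment/drupal-staging

# Production gate: promote the same image once staging checks pass
kubectl set image deployment/drupal-production \
  drupal=us.icr.io/my-namespace/drupal:1.0.1
kubectl rollout status deployment/drupal-production
```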
See the Container Service Kubernetes and IBM Cloud services (MySQL, Redis, Memcached) configuration instructions.
See the Docker container build and Kubernetes deployment instructions.
See the ongoing development instructions and the work-in-progress DevOps pipeline docs, which show how container images are rebuilt and how to address security issues detected by the IBM Vulnerability Advisor.
Two synchronization scripts can be invoked to bring user-generated changes to files or data from production back into the staging environment. Other scripts can also be executed inside the containers.
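The file-synchronization half of that workflow can be sketched with plain shell. This is a minimal, self-contained stand-in, not the repository's transfer-data.sh: local temporary directories play the roles of the production and staging volumes, and the real scripts also migrate database content.

```shell
#!/bin/sh
# Minimal sketch of the file-sync step a script like transfer-data.sh
# might perform: mirror user-generated files from the production volume
# into the staging volume. Paths here are local placeholders.
set -e
PROD_FILES=${PROD_FILES:-/tmp/demo-prod/sites/default/files}
STAGE_FILES=${STAGE_FILES:-/tmp/demo-stage/sites/default/files}

mkdir -p "$PROD_FILES" "$STAGE_FILES"
echo "user upload" > "$PROD_FILES/upload.txt"  # simulate user-generated content

# Mirror production files into staging (cp -R keeps this sketch portable;
# rsync --delete would be the more typical choice on real volumes)
cp -R "$PROD_FILES/." "$STAGE_FILES/"
```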