Caddy Web Server in Google Container Engine

Caddy is a simple web server written in Go (Golang) that provides HTTP/2 and automatic HTTPS. Setup is straightforward, and binaries are available for all major operating systems.

The Caddy GitHub page has a wiki docs section on running the Caddy web server in containers. The wiki invited us to get our hands dirty with Kubernetes (container management and orchestration), so we decided to run it on GKE (Google Container Engine) on Google Cloud Platform. First of all, containers can be set up either in container VMs (Google Compute Engine) or in container clusters (Google Container Engine). The networking features of Google Cloud Platform play a significant role for containers, managing communication (ingress and egress) between containers, virtual machines, and the external world.

We are using the abiosoft/caddy container, which includes the Caddy web server and a few add-ons and listens on TCP port 2015. GCP therefore needs a firewall rule, configured under the networking features, that allows traffic from any IP address to TCP port 2015.
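As a sketch, such a firewall rule can be expressed as a Deployment Manager resource (the rule name and the use of the default network are our assumptions; adjust them to your project):

```yaml
# Hypothetical Deployment Manager config: opens TCP 2015 to all source IPs.
resources:
- name: allow-caddy-2015           # assumed rule name
  type: compute.v1.firewall
  properties:
    network: global/networks/default
    sourceRanges: ["0.0.0.0/0"]    # any IP address
    allowed:
    - IPProtocol: TCP
      ports: ["2015"]              # abiosoft/caddy listens here
```

The same rule can be created directly with gcloud compute firewall-rules create allow-caddy-2015 --allow tcp:2015 --source-ranges 0.0.0.0/0.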


Caddy web server in a Container VM.

GCP Deployment Manager is similar to AWS CloudFormation; it automates the container setup in a Debian VM instance. We use the Jinja template engine for Python in Deployment Manager.

Specifically, we extracted and customized the Jinja templates from the sample code. Download or clone the Caddy repository from the GitHub page, and change the zone according to your region (caddy/container_vm/jinja/container_vm.yaml).
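For orientation, a minimal container_vm.yaml along these lines might look as follows. This is a sketch, not the repository's exact file; the property names depend on what the Jinja template expects, and the zone and image are examples:

```yaml
# Hypothetical top-level config that imports the Jinja template.
imports:
- path: container_vm.jinja
resources:
- name: caddy-server
  type: container_vm.jinja
  properties:
    zone: us-central1-b          # change to match your region
    dockerImage: abiosoft/caddy  # assumed property name for the container image
```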

Activate Cloud Shell in the GCP console and execute gcloud deployment-manager deployments create caddy-server --config container_vm.yaml.

The Caddy container is now live inside the container VM and is accessible via the VM's public IP.

Caddy server in a GKE container cluster (Google Container Engine).

We decided to host the Caddy container by mounting its data volume on a persistent disk and setting up a container cluster using the Jinja template in Deployment Manager. Make sure the Caddy repository is cloned, and change the zone value in cluster.yaml according to your region (caddy/gke-caddy/cluster.yaml).

In Cloud Shell, execute gcloud deployment-manager deployments create cluster --config cluster.yaml to set up a four-node cluster with a 10 GB persistent disk. Deployment Manager updates the status once it completes.
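Conceptually, such a cluster can be declared against Deployment Manager's container.v1.cluster type. The following is an illustrative sketch rather than the repository's Jinja template; the cluster name, zone, and machine type are assumptions:

```yaml
# Hypothetical Deployment Manager config for a four-node GKE cluster.
resources:
- name: cluster-caddy
  type: container.v1.cluster
  properties:
    zone: us-central1-b            # match the zone set in cluster.yaml
    cluster:
      initialNodeCount: 4          # four-node cluster
      nodeConfig:
        machineType: n1-standard-1 # assumed machine type
```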

The cluster template also creates a 10 GB persistent disk (the data disk, disk-cluster), but it is not yet attached to any VM or pod.
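A standalone disk like this corresponds to a compute.v1.disk resource in Deployment Manager. A sketch, with the name and zone assumed to match the cluster above:

```yaml
# Hypothetical resource for the 10 GB data disk created by the cluster template.
resources:
- name: disk-cluster       # the data disk referenced later by the pods
  type: compute.v1.disk
  properties:
    zone: us-central1-b    # must be in the same zone as the cluster nodes
    sizeGb: 10
```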

Detailed docs for the gke-caddy Kubernetes resources are on the GitHub page. The replicated service file caddy/gke-caddy/replicatedservice.yaml configures the service and the replication controller.

Execute gcloud deployment-manager deployments create rs --config replicatedservice.yaml in Cloud Shell. The 10 GB persistent disk is automatically attached to a VM instance and serves as the data volume for the Caddy web server.
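Conceptually, the replicated service wires the disk into the pod through a gcePersistentDisk volume. Below is a simplified Kubernetes sketch of such a service and replication controller; the names, labels, and mount path are our assumptions, not the exact contents of replicatedservice.yaml:

```yaml
# Hypothetical service exposing Caddy, plus a replication controller
# that mounts the persistent disk as the data volume.
apiVersion: v1
kind: Service
metadata:
  name: caddy-service          # assumed name
spec:
  type: LoadBalancer
  selector:
    app: caddy
  ports:
  - port: 80
    targetPort: 2015           # Caddy listens on TCP 2015
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: caddy-rc               # assumed name
spec:
  replicas: 1                  # a GCE PD can be mounted read-write by one pod only
  selector:
    app: caddy
  template:
    metadata:
      labels:
        app: caddy
    spec:
      containers:
      - name: caddy
        image: abiosoft/caddy
        ports:
        - containerPort: 2015
        volumeMounts:
        - name: caddy-data
          mountPath: /srv      # assumed site root inside the container
      volumes:
      - name: caddy-data
        gcePersistentDisk:
          pdName: disk-cluster # the 10 GB disk created earlier
          fsType: ext4
```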

Verifying the deployment.

Execute gcloud container clusters get-credentials cluster-caddy --zone us-central1-b in Cloud Shell. Change the zone and the cluster name accordingly to fetch the credentials.
kubectl get pods
kubectl get services

The two commands above display the pods and services created for our Caddy server.

Constraint: since only one pod can mount the persistent disk in read-write mode, we tried creating another replication controller with the same data volume (persistent disk). The replication controller deployed, but its pods were not running.

Therefore, shared storage such as NFS, GlusterFS, Flocker, CephFS, or iSCSI can help in sharing a volume across many pods. We are still experimenting with this. If you have any queries, please feel free to comment 🙂
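For example, an NFS-backed PersistentVolume supports the ReadWriteMany access mode, so many pods can mount it at once. A sketch, where the volume name, NFS server address, and capacity are placeholders:

```yaml
# Hypothetical NFS-backed PersistentVolume shareable by multiple pods.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: caddy-nfs-pv           # assumed name
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteMany              # unlike a GCE PD, which is effectively ReadWriteOnce
  nfs:
    server: nfs-server.default.svc.cluster.local  # placeholder NFS endpoint
    path: "/"
```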