Deploying a Software Factory to GCP with Kubernetes

Docker is a very popular facility for simplifying the deployment of services to a Linux box. It allows packing all the resources required to run a service (often a micro-service) into a file system image that can be run inside a container. Multiple containers can run on the same physical (or virtual) box, each one isolated from the others. Configuring Linux to run “dockerised” services is an easy task, and in fact I've been running some of my services this way on self-maintained servers (with a classic hosting model) for a few years.

But one of the interesting points of Docker is that a number of providers can directly accept an image and run a container out of it, also giving the system owner the resources to manage it. Since people are encouraged to deploy single-purpose, self-contained services, just one inside each container, the problem arises of deploying and managing multiple services together. Kubernetes is a cluster manager that can run multiple Docker containers and manage, maintain and scale them on a server farm. The Google Cloud Platform offers Kubernetes as part of its services.

This post is the first in a series of exercises to demonstrate how a simple Software Factory, based on open source (Jenkins, Sonatype Nexus) and popular (Jira) tools, can be deployed to the GCP. Let's start with Jenkins and Nexus 3, because the process is further simplified: their producers also make available pre-configured Docker images that can be up and running in minutes.

Prerequisites

First, you need a local platform to perform the development. While Docker and the Google Cloud SDK are native on Linux, and there are installers for Mac OS X and Windows that transparently use a VirtualBox Linux guest, I suggest explicitly creating a Linux VM and working inside it. This avoids polluting your laptop with lots of stuff.

Then, you have to install Docker and the Google Cloud SDK. For the latter, make sure you install kubectl too.
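
For instance, if you installed the SDK by means of its own installer (rather than a distribution package), kubectl can be added as an SDK component:

$ gcloud components install kubectl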

You also need a Google account, and to sign in at the Google Cloud Platform. The use of GCP is subject to billing, even though every new user gets a 300 € bonus usable for 60 days. Nevertheless, you have to create a billing account and associate it with GCP.

Google says that, when the bonus expires, billing won't happen automatically. Now, I'm from Genoa and they say we're stingy, but we are just careful about money... so I strongly advise you to keep an eye on billing and have a look at the pricing policies of GCP.

Creating the project

At this point you're ready to create a project. My idea is to create a global project for Nexus, Jenkins and later Jira.

A single project on GCP can contain multiple, independently configured Kubernetes clusters. A Kubernetes cluster is made of a master API server and a set of worker VMs, called nodes, which are managed by the master server. A pod is a group of Docker containers, tied together for the purposes of administration and networking. In this exercise a single-container pod is associated with each service (Jenkins, Nexus).
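
Just to anticipate a command that will be useful later: once a cluster has been created, its worker nodes can be listed with kubectl (this won't work yet at this point, of course, since no cluster exists):

$ kubectl get nodes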

So, I selected the menu "Create Project" from the GCP dashboard and filled in the following fields:

  • Project Name: tidalwave-services
  • Advanced Options / App Engine Region: europe-west

Note that I'm applying settings that are meaningful for me, operating from Italy. If you are in a different region of the world, consider other regions.

Then I pushed "Create" and took note of the project id that was assigned: it's the project name plus an optional numeric suffix, used to disambiguate from projects with the same name that may already exist. My assigned project id was the same as the name, tidalwave-services, but it could have been something such as tidalwave-services-154394.
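
If you prefer the CLI, the assigned project id can also be double-checked there (assuming a reasonably recent version of the SDK):

$ gcloud projects list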

I connected to the Kubernetes console and enabled the Container Engine API. I was offered a list of projects to enable, and I selected tidalwave-services. I also enabled billing on it.

At this point it is possible to start working with the CLI. The first operation I performed was to set the project name, which is remembered across sessions. Recall that if you have multiple projects to work on, each time you switch to a different one you have to set the project name again.

$ gcloud config set project tidalwave-services
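
To verify which project (and other properties) the CLI is currently bound to, you can list the active configuration:

$ gcloud config list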

Submitting the Docker image

With the usual docker pull command I downloaded the Jenkins 2.19.3 image from Docker Hub:

$ docker pull jenkins:2.19.3
...

In order to prepare it for the push to GCP, I associated it with a new tag:

$ docker tag jenkins:2.19.3 eu.gcr.io/tidalwave-services/jenkins

$ docker images

REPOSITORY                             TAG      IMAGE ID      CREATED        SIZE
docker-whale                           latest   b5267e720714  21 hours ago   275 MB
eu.gcr.io/tidalwave-services/jenkins   latest   82da23ec134d  3 days ago     714.1 MB
jenkins                                2.19.3   82da23ec134d  3 days ago     714.1 MB
jenkins                                latest   82da23ec134d  3 days ago     714.1 MB
hello-world                            latest   c54a2cc56cbb  4 months ago   1.848 kB
docker/whalesay                        latest   6b362a9f73eb  18 months ago  247 MB

The scheme for the new tag is mandatory: first a registry host related to the App Engine Region (eu.gcr.io in my case), then the name of the project (tidalwave-services), and finally the name of the image that will be pushed (jenkins).
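
In other words, the general template is the following (the placeholders in angle brackets are mine, for illustration):

$ docker tag <image>:<tag> <region host>/<project id>/<image name>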

The next step was to push the image to GCP, referring to the freshly created tag:

$ gcloud docker push -- eu.gcr.io/tidalwave-services/jenkins
The push refers to a repository [eu.gcr.io/tidalwave-services/jenkins]
17ba07a80d57: Image successfully pushed
66bfa7351ff7: Image successfully pushed
1b3c2ec0befc: Image successfully pushed
23fa8153d8ff: Image successfully pushed
b0f5f01d9edc: Image successfully pushed
590f8a156135: Image successfully pushed
d458e9b86b04: Image successfully pushed
ca787184f0ab: Image successfully pushed
4238f6371816: Image successfully pushed
a2eea3e16ec7: Image successfully pushed
1f764d32a220: Image successfully pushed
1af14ac896ef: Image successfully pushed
a7afeb77f416: Image successfully pushed
cef349a9d76f: Image successfully pushed
1d16eb83eef5: Image successfully pushed
dfe1af64a72d: Image successfully pushed
9f17712cba0b: Image successfully pushed
223c0d04a137: Image successfully pushed
fe4c16cbf7a4: Image successfully pushed
latest: digest: sha256:6f0201028c5d3636973cb7fdc52a97f9d18e2b26a1ba4aab68e0ef49961fc4ca size: 25232

Docker images are usually large (hundreds of megabytes), so this operation might take a long time depending on your bandwidth. Since images are layered, each layer is pushed separately. If the connection goes down at a certain point, you can repeat the operation and the layers that were successfully pushed won't be sent again.

After the operation was completed, it was possible to confirm with the web console (Compute > Container Engine > Container Registry) that the image reached the destination.

Creating a cluster

At this point it was possible to create a cluster:

$ gcloud config set compute/zone europe-west1-b
Updated property [compute/zone].
$ gcloud container clusters create tidalwave-services
Creating cluster tidalwave-services...done.
Created [https://container.googleapis.com/v1/projects/tidalwave-services/zones/europe-west1-b/clusters/tidalwave-services].
kubeconfig entry generated for tidalwave-services.
NAME               ZONE           MASTER_VERSION MASTER_IP      MACHINE_TYPE  NODE_VERSION NUM_NODES STATUS
tidalwave-services europe-west1-b 1.4.6          104.155.34.229 n1-standard-1 1.4.6        3         RUNNING
$ gcloud container clusters get-credentials tidalwave-services
Fetching cluster endpoint and auth data.
kubeconfig entry generated for tidalwave-services.

Again, it was necessary to specify the App Engine Region: one must take care to stay consistent with the previous settings. The operation took a few seconds, and then it was possible to confirm it again with the web console.
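
The create command also accepts options to customise the cluster; for instance (a sketch of flags I didn't use in this session, since these values happen to match the defaults I got), the number and type of worker VMs can be specified explicitly:

$ gcloud container clusters create tidalwave-services --num-nodes=3 --machine-type=n1-standard-1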

Creating a pod for Jenkins

Before proceeding it was necessary to authenticate again. The command opens a browser (Firefox in my case) that navigates to an authentication page:

$ gcloud auth application-default login
Your browser has been opened to visit: 
 https://accounts.google.com/o/oauth2/auth?redirect_uri=...

It was then the turn of the kubectl run command, which created a deployment (and its pod), specifying the image to run and the port to expose. The default port used by the Jenkins Docker image is 8080.

$ kubectl run jenkins-node --image=eu.gcr.io/tidalwave-services/jenkins --port=8080
deployment "jenkins-node" created

A number of commands let the user see what's going on. kubectl get deployments shows the complete list of deployments in the current cluster:

$ kubectl get deployments
NAME         DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
jenkins-node 1       1       1          0         12s

kubectl get pods shows the configured pods. Since a freshly created pod requires a few seconds to reach the Running status, the command was useful to wait and confirm that the previous operation had completed correctly:

$ kubectl get pods
NAME                          READY STATUS            RESTARTS AGE
jenkins-node-2548356088-1mbnr 0/1   ContainerCreating 0        20s
$ kubectl get pods
NAME                          READY STATUS            RESTARTS AGE
jenkins-node-2548356088-1mbnr 0/1   ContainerCreating 0        33s
$ kubectl get pods
NAME                          READY STATUS            RESTARTS AGE
jenkins-node-2548356088-1mbnr 1/1   Running           0        46s

In this phase it is advisable to check for error statuses (pods not reaching Running), which might be related to incorrect settings, for instance concerning the image name.
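
A quick way to investigate a pod stuck in an error status is kubectl describe, which also shows the recent events related to the pod (the pod name below is the one from my session):

$ kubectl describe pod jenkins-node-2548356088-1mbnr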

kubectl logs shows the container log of the specified pod. I could check that Jenkins was going through the boot phase, and it was also important to see the temporary password that was created for the first access:

$ kubectl logs jenkins-node-2548356088-1mbnr
Running from: /usr/share/jenkins/jenkins.war
webroot: EnvVars.masterEnvVars.get("JENKINS_HOME")
Nov 20, 2016 12:20:23 PM Main deleteWinstoneTempContents
WARNING: Failed to delete the temporary Winstone file /tmp/winstone/jenkins.war
Nov 20, 2016 12:20:23 PM org.eclipse.jetty.util.log.JavaUtilLog info
INFO: Logging initialized @610ms
Nov 20, 2016 12:20:23 PM winstone.Logger logInternal
INFO: Beginning extraction from war file
…
Nov 20, 2016 12:20:33 PM jenkins.install.SetupWizard init
INFO:

*************************************************************
*************************************************************
*************************************************************

Jenkins initial setup is required. An admin user has been created and a password generated.
Please use the following password to proceed to installation:

455eac5bcfeabc5f5e5485720acb34df

This may also be found at: /var/jenkins_home/secrets/initialAdminPassword

*************************************************************
*************************************************************
*************************************************************

Nov 20, 2016 12:20:37 PM hudson.model.UpdateSite updateData
INFO: Obtained the latest update center data file for UpdateSource default
Nov 20, 2016 12:20:38 PM hudson.model.DownloadService$Downloadable load

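As the log itself says, the password is also stored in a file inside the container; an alternative way to read it is kubectl exec (a sketch, using the pod name from my session):

$ kubectl exec jenkins-node-2548356088-1mbnr -- cat /var/jenkins_home/secrets/initialAdminPassword
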
It was possible to retrieve the URLs of the main components of a cluster by running the kubectl cluster-info command:

$ kubectl cluster-info
Kubernetes master is running at https://104.155.34.229
GLBCDefaultBackend is running at https://104.155.34.229/api/v1/proxy/namespaces/kube-system/services/default-http-backend
Heapster is running at https://104.155.34.229/api/v1/proxy/namespaces/kube-system/services/heapster
KubeDNS is running at https://104.155.34.229/api/v1/proxy/namespaces/kube-system/services/kube-dns
kubernetes-dashboard is running at https://104.155.34.229/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

Further, very detailed diagnostics were available by means of the kubectl get events command:

$ kubectl get events
<long list of events>

The service was not accessible from the internet yet. In fact, it needed to be exposed with a specific command:

$ kubectl expose deployment jenkins-node --port=8080 --type=LoadBalancer
service "jenkins-node" exposed

An internal IP was assigned, an address valid only inside the cluster. The "LoadBalancer" type also asks GCP to assign an external IP, which might take some seconds to become available. I ran the command kubectl get services a few times until the assignment of the external IP was confirmed:

$ kubectl get services jenkins-node
NAME         CLUSTER-IP   EXTERNAL-IP     PORT(S)  AGE
jenkins-node 10.3.247.208 <pending>       8080/TCP 14s
$ kubectl get services jenkins-node
NAME         CLUSTER-IP   EXTERNAL-IP     PORT(S)  AGE
jenkins-node 10.3.247.208 <pending>       8080/TCP 17s
$ kubectl get services jenkins-node
NAME         CLUSTER-IP   EXTERNAL-IP     PORT(S)  AGE
jenkins-node 10.3.247.208 146.148.124.188 8080/TCP 2m
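
Instead of polling by hand, kubectl can also watch the resource and print a new line whenever its status changes (the -w flag; press Ctrl-C to stop watching):

$ kubectl get services jenkins-node -w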

At this point the service was accessible from my browser at the URL http://146.148.124.188:8080. Jenkins started up asking for the temporary password previously shown in the container log; then a “first setup” wizard appeared, to complete the initial configuration. Unfortunately, a Jenkins bug prevented me from going on, and I'll take care of it later.

Creating another pod for Nexus

Now, the second round, to deploy Nexus. Apart from the initial setup, it was just a matter of repeating some operations for the new image:

$ docker pull sonatype/nexus3:latest
...

$ docker tag sonatype/nexus3:latest eu.gcr.io/tidalwave-services/nexus3

$ docker images
REPOSITORY                             TAG      IMAGE ID      CREATED        SIZE
docker-whale                           latest   b5267e720714  21 hours ago   275 MB
eu.gcr.io/tidalwave-services/jenkins   latest   82da23ec134d  3 days ago     714.1 MB
jenkins                                2.19.3   82da23ec134d  3 days ago     714.1 MB
jenkins                                latest   82da23ec134d  3 days ago     714.1 MB
eu.gcr.io/tidalwave-services/nexus3    latest   71cd1bebf84e  2 weeks ago    463.4 MB
sonatype/nexus3                        latest   71cd1bebf84e  2 weeks ago    463.4 MB
hello-world                            latest   c54a2cc56cbb  4 months ago   1.848 kB
docker/whalesay                        latest   6b362a9f73eb  18 months ago  247 MB
$ gcloud docker push -- eu.gcr.io/tidalwave-services/nexus3
Using 'push eu.gcr.io/tidalwave-services/nexus3' for DOCKER_ARGS.
The push refers to a repository [eu.gcr.io/tidalwave-services/nexus3]
5f70bf18a086: Image successfully pushed
d9f2054d4635: Image successfully pushed
5ed7f029a2bc: Image successfully pushed
211d5d4fa515: Image successfully pushed
4ff7ff87a036: Image successfully pushed
4aff2ef3d9b3: Image successfully pushed
0aeb287b1ba9: Image successfully pushed
latest: digest: sha256:da6f5951a190545eede8d371eb8ecefdf58dd7bbc7d10042591c13c87199760d size: 13914
$ kubectl run nexus3-node --image=eu.gcr.io/tidalwave-services/nexus3 --port=8081
deployment "nexus3-node" created
$ kubectl get deployments
NAME         DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
jenkins-node 1       1       1          1         10m
nexus3-node  1       1       1          0         23s
$ kubectl get pods
NAME                                     READY STATUS  RESTARTS AGE
jenkins-node-1095617336-9mnvy            1/1   Running 0        32m
nexus3-node-3977759125-ylnrx             1/1   Running 0        23m
tidalwave-services-node-2548356088-1mbnr 1/1   Running 0        1h
$ kubectl expose deployment nexus3-node --port=8081 --type=LoadBalancer
service "nexus3-node" exposed
$ kubectl get services
NAME         CLUSTER-IP   EXTERNAL-IP     PORT(S)  AGE
jenkins-node 10.3.248.148 146.148.124.188 8080/TCP 7m
kubernetes   10.3.240.1   <none>          443/TCP  1h
nexus3-node  10.3.249.15  104.199.48.209  8081/TCP 55m

And so Nexus too was available, after a few seconds, at the URL http://104.199.48.209:8081.

In case of mistakes

During the sequence I made some errors (mistyping some names). Once the Docker images have been pushed to GCP, it's easy and quick to delete the pieces of infrastructure just created and re-create them. The relevant commands to delete pieces are:

$ kubectl delete pod jenkins-node-1095617336-9mnvy
pod "jenkins-node-1095617336-9mnvy" deleted
$ kubectl delete service jenkins-node
service "jenkins-node" deleted

$ gcloud container clusters delete tidalwave-services
The following clusters will be deleted.
 - [tidalwave-services] in [europe-west1-b]

Do you want to continue (Y/n)?  y

Deleting cluster tidalwave-services...done.

Stopping and restarting services

While a kubectl stop command exists, it has been deprecated. Actually, stopping something means deleting it: deletion, in fact, performs a graceful shutdown by default. It is possible to later restart a service by re-creating it.

But there is another approach: the kubectl scale deployment command controls the number of instances of a deployed service. Its purpose is scalability (something that is out of the scope of the current post), but it can be used to set the number of instances to zero (which equates to removing the related pods):

$ kubectl scale deployment jenkins-node --replicas=0
deployment "jenkins-node" scaled
$ kubectl scale deployment nexus3-node --replicas=0
deployment "nexus3-node" scaled
$ kubectl get services,deployment,pods
NAME               CLUSTER-IP     EXTERNAL-IP       PORT(S)    AGE
svc/jenkins-node   10.3.250.98    146.148.124.188   8080/TCP   5m
svc/kubernetes     10.3.240.1     <none>            443/TCP    8h
svc/nexus3-node    10.3.254.214   104.199.48.209    8081/TCP   4m
NAME                  DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deploy/jenkins-node   0         0         0            0           6m
deploy/nexus3-node    0         0         0            0           5m

While the external IPs still exist, nothing responds when you try to connect to them. Note that, while it is possible to delete a pod, this is not the correct way to make the related service unavailable: as long as the desired instance count is 1, Kubernetes will start a new pod in its place.
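
For instance, deleting one of the pods by hand just triggers the creation of a replacement (a sketch, using a pod name from my session; the new pod will appear with a different suffix):

$ kubectl delete pod jenkins-node-1095617336-9mck0
$ kubectl get pods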

In the same way, it is possible to restore the deployment instances:

$ kubectl scale deployment jenkins-node --replicas=1
deployment "jenkins-node" scaled
$ kubectl scale deployment nexus3-node --replicas=1
deployment "nexus3-node" scaled
$ kubectl get services,deployment,pods
NAME               CLUSTER-IP     EXTERNAL-IP       PORT(S)    AGE
svc/jenkins-node   10.3.250.98    146.148.124.188   8080/TCP   12m
svc/kubernetes     10.3.240.1     <none>            443/TCP    8h
svc/nexus3-node    10.3.254.214   104.199.48.209    8081/TCP   11m
NAME                  DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deploy/jenkins-node   1         1         1            1           13m
deploy/nexus3-node    1         1         1            1           12m
NAME                              READY   STATUS    RESTARTS   AGE
po/jenkins-node-1095617336-9mck0  1/1     Running   0          10s
po/nexus3-node-3977759125-mahzq   1/1     Running   0          4s

Conclusion

Two services are up and running, and that's enough for the first step. They are still unusable, though: they run off single Docker containers whose file systems are not persistent. Any change due to using or configuring them will be lost at the next restart, resetting them to the state of a fresh installation. One of the first things to do in the next step is to provide them with persistent file storage.
