The 5-Minute Kubernetes Setup

We really like Kubernetes and believe it is a great system for running Docker containers. And today version 1.0 was released, which makes this a good opportunity to spin up a minimal demo cluster, give it a try, and get familiar with Kubernetes.

Kubernetes Setup

The official repository offers myriad ways to install Kubernetes, but we chose the one that makes the most sense in a container-centric world: running it as Docker containers. And by the way, it's the easiest installation, too! It will only take us a couple of minutes.

We have two prerequisites: Docker and Docker Compose. In case you don't have them installed already, you can find the instructions in the Docker and Compose docs. Now, let's move on to the Kubernetes setup.

A Kubernetes cluster is composed of multiple components:

  • etcd: a key/value store used as the single source of truth for the cluster
  • kubelet: the agent running on every node to start/stop containers
  • apiserver: provides the REST API and hence the frontend to the cluster
  • controller-manager: runs a control loop to bring the cluster to the desired state
  • scheduler: decides on which node (kubelet) a container runs
  • proxy: runs on every node and exposes services with a virtual IP address

All of these components need to be started and configured. The easiest way is to use the following docker-compose.yml file:

etcd:  
  image: gcr.io/google_containers/etcd:2.0.9
  net: host
  command: ['/usr/local/bin/etcd', '--bind-addr=0.0.0.0:4001', '--data-dir=/var/etcd/data']

apiserver:  
  image: geku/hyperkube:v1.0.1
  net: host
  command: ["/hyperkube", "apiserver", "--service-cluster-ip-range=172.17.17.1/24", "--address=127.0.0.1", "--etcd_servers=http://127.0.0.1:4001", "--cluster_name=kubernetes", "--v=2"]

controller:  
  image: geku/hyperkube:v1.0.1
  net: host
  command: ["/hyperkube", "controller-manager", "--master=127.0.0.1:8080", "--v=2"]

scheduler:  
  image: geku/hyperkube:v1.0.1
  net: host
  command: ["/hyperkube", "scheduler", "--master=127.0.0.1:8080", "--v=2"]

kubelet:  
  image: geku/hyperkube:v1.0.1
  net: host
  command: ['/hyperkube', 'kubelet', '--api_servers=http://127.0.0.1:8080', '--v=2', '--address=0.0.0.0', '--enable_server']
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock

proxy:  
  image: geku/hyperkube:v1.0.1
  net: host
  command: ['/hyperkube', 'proxy', '--master=http://127.0.0.1:8080', '--v=2']
  privileged: true

Create the file yourself or check out our demo on GitHub. Afterwards you can start your demo cluster with a single command:

docker-compose up  

That's it! The command starts all components as Docker containers and displays a ton of log output. Our demo cluster is running and we can start our first service.
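Before moving on, a quick sanity check against the live cluster doesn't hurt. The commands below assume the compose file above is running and are meant to be executed inside the boot2docker VM (the /healthz endpoint is served by the apiserver):

```
# All six component containers should show up as running
docker-compose ps

# The apiserver should answer on localhost:8080
curl http://127.0.0.1:8080/healthz
```

If the health check answers, the control plane is up and kubectl can talk to it.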

Run a Service

In order to use the command-line client kubectl without any configuration, we need to access the API on localhost. On OS X we can achieve this by exposing port 8080 through an SSH tunnel:

boot2docker ssh -L 8080:localhost:8080  

Then we can download kubectl for OS X (or Linux) and run it (please ensure it is in your PATH). First, let's list our nodes:

$ kubectl get nodes
NAME        LABELS                             STATUS  
127.0.0.1   kubernetes.io/hostname=127.0.0.1   Ready  

Next, we can start a simple demo service, scale it to multiple instances and list its running pods:

# Create demo service
kubectl run service-demo --image=geku/go-app:0.1 --port=5000  
kubectl get pods -l run=service-demo

# Scale service to 3 instances
kubectl scale rc service-demo --replicas=3  
kubectl get pods -l run=service-demo  

The first command is actually a shortcut that creates a ReplicationController, which ensures the desired number of replicas is running. With the last command we should see that Kubernetes has started 3 instances of our service. We can list the ReplicationControllers, too:
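For the curious: the ReplicationController that `kubectl run` generates looks roughly like the manifest below. This is a hand-written sketch, not the exact generated object; the container name and the `run=service-demo` label/selector are assumptions based on the label used in the commands above.

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: service-demo
spec:
  replicas: 3                  # desired number of instances
  selector:
    run: service-demo          # pods matching this label are managed
  template:
    metadata:
      labels:
        run: service-demo
    spec:
      containers:
      - name: service-demo
        image: geku/go-app:0.1
        ports:
        - containerPort: 5000
```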

kubectl get rc  

Now that we have a service running, we would like to access it. This is not possible yet, because the ports of our service instances are not exposed and are only reachable from within the boot2docker virtual machine. Let's expose our service on port 80:

kubectl expose rc service-demo --port=80 --target-port=5000 --type=NodePort  

This creates a load balancer and assigns our service a virtual IP, behind which requests reach a random instance of our service. Additionally, it maps the service to a random port on our host server. To get the port, run:
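The same result can be achieved declaratively. A Service manifest roughly equivalent to the `expose` command above would look like this (a sketch; with type NodePort, Kubernetes picks the node port itself unless you set one explicitly):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: service-demo
spec:
  type: NodePort               # also expose the service on a port of each node
  selector:
    run: service-demo          # route to pods carrying this label
  ports:
  - port: 80                   # port on the virtual (cluster) IP
    targetPort: 5000           # port the container listens on
```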

$ kubectl get -o yaml service/service-demo | grep nodePort
    nodePort: 31538

In our case we can reach our service on port 31538. The port will likely differ on your machine, so adjust it in the following commands.
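Instead of copying the port by hand, you can extract it with standard shell tools. The snippet below demonstrates the extraction against a saved copy of the service YAML; the file path and its abbreviated content are made up for illustration, and against a live cluster you would pipe `kubectl get -o yaml service/service-demo` directly into the same grep/awk:

```shell
# Sample of the relevant part of the service YAML (abbreviated, for illustration)
cat > /tmp/service-demo.yaml <<'EOF'
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 5000
    nodePort: 31538
EOF

# Pull out the node port so scripts don't have to hard-code it
NODE_PORT=$(grep nodePort /tmp/service-demo.yaml | awk '{print $2}')
echo "$NODE_PORT"   # prints 31538 for this sample
```

With a live cluster, `curl $(boot2docker ip):${NODE_PORT}/json` then works regardless of which port was assigned.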

$ curl $(boot2docker ip):31538/json
{"hostname":"service-demo-mltul","env":["PATH=/usr/loc...

$ curl $(boot2docker ip):31538/json
{"hostname":"service-demo-gh4ej","env":["PATH=/usr/lo

By sending multiple requests you can see that they are answered by different instances (the hostname varies).

To remove all pods and the service, run:

kubectl delete service/service-demo  
kubectl delete rc/service-demo  

Final Words

Congratulations, you have successfully created your first Kubernetes cluster and run a service on it.

Please join our beta list if you are interested in how CloudGear can support you with Kubernetes. We will keep you informed about new blog posts and the state of our beta.

Cover-photo by El Coleccionista de Instantes Fotografía & Video licensed under CC BY-SA 2.0.