Installing Kubernetes 1.13 on CentOS 7

In this post I will describe a Kubernetes 1.13 test install on our servers. Its main purpose is to give the team more in-depth knowledge of Kubernetes and its building blocks, as we are currently implementing OpenShift Origin and Amazon EKS. The goal is to implement something along the lines of:

Kubernetes cluster


Install 3 CentOS 7.6 servers (they can be virtual machines) with the following requirements (please beware that for a production cluster, the requirements should be pumped up):

  • at least 2 vCPUs
  • 4 GB RAM for the master
  • 10 GB RAM for each of the worker nodes
  • 30 GB root disk (in a later post I will address some of the “hyper-converged” solutions – storage & compute – and in that scenario, more than one disk is advised)

Next, set the network configuration on those Linux servers to match the above diagram (make sure that the hostname is set correctly).

1 – Set named based communication

All servers need to be able to resolve the name of the other nodes. That can be achieved by adding them to the DNS server zone or by adding the information to /etc/hosts on all servers.

# vi /etc/hosts (on all 3 nodes)
kube-master.install.etux master
kube-node1.install.etux node1
kube-node2.install.etux node2
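A quick sanity check, run from each node, that name resolution is in place (the loop and the short aliases mirror the hosts file above):

```shell
# every node should be able to resolve the other nodes by name
for h in master node1 node2; do
    getent hosts "$h" || echo "WARNING: cannot resolve $h"
done
```

If any name prints a warning, fix /etc/hosts (or DNS) before continuing; kubeadm will fail later otherwise.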

2 – Disable SELinux and swap

Yeah, yeah… every time someone disables SELinux a kitten dies. Nevertheless, this is for demos and testing. Make sure that the following commands are executed on all 3 nodes.

setenforce 0
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux
swapoff -a
sed -i '/swap/s/^/#/g' /etc/fstab

3 – Enable br_netfilter

Depending on the network overlay that you will be using, the next step may or may not be needed. When choosing Flannel, please run on all nodes:

modprobe br_netfilter
cat >> /etc/sysctl.conf <<EOF
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sysctl -p
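Note that modprobe does not persist across reboots, and step 6 below reboots every node. A minimal way to make the module load permanent, assuming the systemd modules-load mechanism available on CentOS 7:

```shell
# load br_netfilter at every boot via systemd-modules-load
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf
```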

4 – Install docker-ce

Docker CE has a few interesting features that the Docker version shipped with CentOS 7 doesn’t have. One of those is multi-stage builds. For that reason, I chose docker-ce, running the following on all nodes:

yum install -y yum-utils
yum-config-manager --add-repo
yum install -y docker-ce device-mapper-persistent-data lvm2
systemctl enable --now docker
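One extra step worth considering (not in my original list) is aligning Docker’s cgroup driver with the kubelet’s; kubeadm’s preflight checks warn when they differ. A sketch, assuming a fresh docker-ce install with no existing daemon.json:

```shell
# configure Docker to use the systemd cgroup driver, matching the kubelet
mkdir -p /etc/docker
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
```

Restart Docker afterwards (`systemctl restart docker`) for the change to take effect.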

5 – Install Kubernetes

The Kubernetes packages I used are in the project’s repo. Please run the following commands on all nodes:

cat > /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF

yum install -y kubelet kubeadm kubectl
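The kubeadm install docs also have you enable the kubelet service at this point. It will sit in a crash loop until kubeadm init or kubeadm join writes its configuration, which is expected:

```shell
# enable kubelet on boot; it only becomes healthy after kubeadm runs
systemctl enable --now kubelet
```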

6 – Reboot all instances

Reboot all 3 instances to make sure that they all are in the same state.

7 – Initialize Kubernetes

The first step is to initialize the master node with all the services required to bootstrap the cluster. This step should be done on the master node only. Make sure that the pod network CIDR is a /16 (each node will “own” a /24) and that it doesn’t conflict with any other network you own:

# on master node only
kubeadm init --apiserver-advertise-address= --pod-network-cidr=
Output of running kubeadm
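The tail of the kubeadm init output tells you how to set up kubectl access for a regular user; for reference, these are the usual commands it prints:

```shell
# copy the admin kubeconfig so kubectl can talk to the new cluster
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```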

Before running the kubeadm join on the other nodes, I first need to create the network overlay.

8 – Network overlay – Flannel

Wait a few seconds and proceed to the network overlay. There are several network plugins available to install, such as Flannel, Calico, Weave Net and Canal.

Why did I choose Flannel? Well… because I think it was the easiest to install. I’m still assessing its features and comparing them with the other options to see which one is better suited to our customers.

# only on master
kubectl apply -f

9 – Join other nodes to the Kubernetes cluster

Next, go to node1 and node2 and run the kubeadm join command that was displayed in the output of kubeadm init:

# On node1
kubeadm join --token 9yd8mu.w4invwht9c8gappw --discovery-token-ca-cert-hash sha256:dd0e7d4ee7e60577f923bb3abf7658ea16db018684aa283fb0ebae2ec14154d9

# On node2
kubeadm join --token 9yd8mu.w4invwht9c8gappw --discovery-token-ca-cert-hash sha256:dd0e7d4ee7e60577f923bb3abf7658ea16db018684aa283fb0ebae2ec14154d9

In the end…

In the end you should have a running cluster with some pods running:
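Back on the master, the cluster state can be checked with kubectl (node names and pod counts will of course vary with your setup):

```shell
# all three nodes should report Ready once the Flannel pods are up
kubectl get nodes
# the system pods (etcd, apiserver, coredns, flannel, kube-proxy) should be Running
kubectl get pods --all-namespaces
```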

Next steps

The next steps for this demo cluster are to install a local registry, the web console and a load balancer (MetalLB), but that will be for another blog post.

CI/CD in OpenShift with Gitlab and Terraform

We’re always searching for new ways of implementing CI/CD at Eurotux, and in this post I’ll describe one of them, leveraging 3 components that we use with our customers:

  • Gitlab
  • Terraform
  • OpenShift


The application we wanted to deploy is Fess Enterprise Search Server (“Fess is an Elasticsearch-based search server”), and we use it to crawl our internal wiki server and give our teams a Google-like search engine for that wiki. Fess supports all sorts of targets (file servers, web sites, databases) and several authentication methods such as BASIC/DIGEST/NTLM/FORM (keep this in mind for the next minutes).

We use OpenShift, which is a container orchestration platform, so the first thing to do is to create a container image for the application. Fortunately, Fess already provides a base container image, which I’ll use as the base for this project and only improve on. The first step is to create a Dockerfile:

RUN perl -i -p -e "s/crawler.document.cache.enabled=true/crawler.document.cache.enabled=false/" /etc/fess/
ADD logo-head.png /usr/share/fess/app/images/logo.png
ADD osdd.xml /usr/share/fess/app/WEB-INF/orig/open-search/osdd.xml
ADD logo-head.png /usr/share/fess/app/images/logo-head.png
RUN apt-get update && apt-get -y install libjson-perl
COPY /usr/share/fess/


# We are using oc 3.9 because the later ones require (see
RUN wget && tar zxf oc.tar.gz -C /usr/bin && rm oc.tar.gz

ADD /etc/fess/

I did some customization, like changing the logo to our company’s, changing the entrypoint and adding the oc (OpenShift client) command. As one can easily understand, our internal wiki is password protected. It uses form-based username/password authentication (you see why it is great that Fess supports form-based authentication), and I only need to provide the Fess server the username and password to access the wiki.

The entrypoint is changed so that when the container starts, it will fetch the username and password from OpenShift secrets (that’s why the container installs the oc command), update the Fess server configuration and start indexing the wiki. As this is a stateless service, I don’t need to worry about saving state or using Persistent Volumes. If the container dies or gets redeployed, the search engine will simply re-index our wiki. This keeps the project simpler and cleaner. Here is a snippet of the script:

if [ -z "$WIKIUSER" ]; then
    export WIKIUSER="`oc get secret wikiuser --template='{{.data.username}}' | base64 -d`"
fi
if [ -z "$WIKIPASS" ]; then
    export WIKIPASS="`oc get secret wikiuser --template='{{.data.password}}' | base64 -d`"
fi
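For reference, a secret like the one the script reads could be created with oc as follows (in this project it is actually created by Terraform, described below; the literal values here are placeholders):

```shell
# create the secret holding the wiki credentials (values are placeholders)
oc create secret generic wikiuser \
  --from-literal=username=CHANGEME \
  --from-literal=password=CHANGEME
```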

curl -XPOST "http://localhost:9200/.fess_config.web_authentication/web_authentication" -H 'Content-Type: application/json' -d "{
           \"webConfigId\" : \"$CONFIGID\",
           \"updatedTime\" : 1509224726193,
           \"hostname\" : \"\",
           \"password\" : \"$WIKIPASS\",
           \"updatedBy\" : \"admin\",
           \"createdBy\" : \"admin\",
           \"createdTime\" : 1509224726193,
           \"protocolScheme\" : \"FORM\",
           \"username\" : \"$WIKIUSER\",
           \"parameters\" : \"encoding=UTF-8\\nlogin_method=POST\\nlogin_url=\\nlogin_parameters=username=\${username}&password=\${password}&auth_id=1&deki_buttons%5Baction%5D%5Blogin%5D=login\"
}"


We use Terraform to bootstrap the infrastructure required to deploy this application; it is responsible for the following:

  • OpenShift Project (Namespace)
  • Secrets (wiki username and password)
  • Granting the container’s default service account permission to access the secret (so that the container can fetch that info)
  • Granting the GitLab runner service account edit rights on this namespace’s objects (so that the deployment pipeline can deploy to this namespace)
  • Adding the anyuid SCC to the deployer service account. The Fess container runs several services (actually an anti-pattern in the container world) and requires running as root inside the container (later on it switches to another uid)

Unfortunately, the Terraform kubernetes provider is somewhat lacking in features compared to others (like the aws or azure providers). Because of that, I use a mix of native resources like kubernetes_namespace and null_resource as a wrapper for the oc command:

# Create namespace
resource "kubernetes_namespace" "search" {
  metadata {
    annotations {
      name = "search-engine"
    }

    labels {
      owner = "npf"
    }

    name = "${var.namespace}"
  }

  lifecycle {
    # because we are using openshift, we have to ignore the annotations as openshift does add some annotations
    ignore_changes = ["metadata.0.annotations"]
  }
}

# This container requires root, so we need to allow anyuid
resource "null_resource" "add-scc-anyuid" {
  provisioner "local-exec" {
    command = "oc -n ${var.namespace} adm policy add-scc-to-user anyuid -z deployer"
  }

  provisioner "local-exec" {
    command = "oc -n ${var.namespace} adm policy remove-scc-from-user anyuid -z deployer"
    when    = "destroy"
  }
}
As you can see, I use local-exec to spawn the oc command where there isn’t support for those features in the kubernetes Terraform provider. Here is the result of a terraform apply:


At Eurotux we use an internal GitLab server to house all our projects. As such, we make extensive use of its CI/CD capabilities. To implement the CI/CD I’ve created a .gitlab-ci.yml file describing the pipeline:

image: $CI_REGISTRY/docker/base-builder

stages:
  - review
  - staging
  - production
  - cleanup

variables:
  OPENSHIFT_SERVER: https://oshift.install.etux:8443
  OPENSHIFT_DOMAIN: oshift.install.etux

.deploy: &deploy
  tags:
    - kubernetes
  script:
    - ci-bootstrap
    - "oc -n $CI_PROJECT_NAME get services $APP 2> /dev/null || oc -n $CI_PROJECT_NAME new-app fess --name=$APP --strategy=docker"
    - "oc -n $CI_PROJECT_NAME start-build $APP --from-dir=fess --follow || sleep 3s && oc -n $CI_PROJECT_NAME start-build $APP --from-dir=fess --follow"
    - "oc -n $CI_PROJECT_NAME get routes $APP 2> /dev/null || oc -n $CI_PROJECT_NAME create route edge --hostname=$APP_HOST --insecure-policy=Redirect --service=$APP"

staging:
  <<: *deploy
  stage: staging
  tags:
    - kubernetes
  variables:
    APP: staging
  environment:
    name: staging
    url: http://$CI_PROJECT_NAME-staging.$OPENSHIFT_DOMAIN
  only:
    - master

production:
  <<: *deploy
  stage: production
  tags:
    - kubernetes
  variables:
    APP: production
  when: manual
  environment:
    name: production
  only:
    - master

The pipeline creates a review application when working on a git branch other than master, so that I can review and fix things. When a merge (or a commit, for that matter) happens on master, it deploys automatically to staging, and then I can press play to deploy to production. Here is an example of the pipeline:

Here is a snippet of the pipeline running:

After that, I can browse to https://search.oshift.install.etux/ and I’m presented with the search engine webpage:


As you’ve figured out by now, all of this runs in our testing OpenShift cluster. We use OpenShift 3.11, which features monitoring with Prometheus and Grafana (later on, I will detail some other interesting features, such as the integration with Keycloak). OpenShift automatically provides some Grafana dashboards so that you can see the usage patterns:

One of the interesting things that these dashboards present is the lifecycle of the application (starting new containers and stopping the older ones).