Automate Kubernetes Cluster using Ansible

In this blog, we are going to see what Kubernetes is, what a Kubernetes cluster is, and how to automate the cluster setup with Ansible.

What is Kubernetes?

Kubernetes is an open-source container orchestration engine for automating the deployment, scaling, and management of containerized applications. The open-source project is hosted by the Cloud Native Computing Foundation (CNCF). In other words, Kubernetes is a tool that manages and monitors containers: if any container goes down, Kubernetes automatically relaunches it within seconds, without any human intervention. It uses the concept of a Replication Controller to relaunch pods automatically. Kubernetes has many resources, each with its own role; some of them are the ReplicationController (RC), ReplicaSet (RS), Pod, Service, Secret, PersistentVolumeClaim (PVC), and Deployment.
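For example, here is a minimal, illustrative Deployment manifest (not from the original post) in which the underlying ReplicaSet keeps three pods running and relaunches any pod that dies:

# web-deploy.yml - a minimal, illustrative Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                  # the ReplicaSet keeps 3 pods running at all times
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: httpd        # any container image works here

If you delete one of these pods with kubectl delete pod, the controller immediately creates a replacement.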

Kubernetes Cluster

A Kubernetes cluster is a set of node machines for running containerized applications. If you’re running Kubernetes, you’re running a cluster. At a minimum, a cluster contains a control plane and one or more computing machines, or nodes. Since Kubernetes runs as a cluster, there are two types of setups: a single-node cluster and a multi-node cluster.

For reference, you can use the link below to launch an AWS EC2 instance using Ansible:

https://www.linkedin.com/posts/nehal-ingole_arth-vimaldaga-righteducation-activity-6787632452111237120-fCGO

Let's see how we actually create a Kubernetes cluster using Ansible.

Prerequisites for Multi-Node Cluster

Hardware Requirements

One or more machines running one of the following:

  • Ubuntu 16.04+

  • Debian 9

  • CentOS 7

  • RHEL 7

  • Fedora 25/26 (best effort)

  • HypriotOS v1.0.1+

  • A cloud computing instance running one of the above

Minimal required memory & CPU (cores)

  • The master node needs a minimum of 2 GB of memory, and a worker node needs at least 1 GB.

  • The master node needs at least 1.5 CPU cores, and a worker node needs at least 0.7 cores.

Cluster setup:

Before installing anything, we have to get the environment ready for the cluster.

In my case, I use AWS, so first we have to launch instances with 1 GB RAM. (Note: the cluster setup officially requires a minimum of 2 GB RAM; in this part, we are going to see how to bypass this requirement.)

Master node configuration

I am going to use the Amazon Linux 2 AMI. Amazon Linux does not ship a repository for downloading Kubernetes and the other resources, so we need to create a yum repo for Kubernetes ourselves. We will build up the playbook for the master node step by step.

Adding the kubeadm repository on the master node
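As a sketch, the repo can be created with Ansible's yum_repository module. The contents mirror the kubernetes.repo file shown later in the worker-node section; the full URLs are taken from the standard kubeadm installation docs, since the links in this post are truncated:

- name: Add the Kubernetes yum repository
  yum_repository:
    name: kubernetes
    description: Kubernetes
    baseurl: https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
    enabled: yes
    gpgcheck: yes
    repo_gpgcheck: yes
    gpgkey:
      - https://packages.cloud.google.com/yum/doc/yum-key.gpg
      - https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
    exclude:
      - kubelet
      - kubeadm
      - kubectl

- name: Install kubeadm, kubelet and kubectl
  yum:
    name: [kubelet, kubeadm, kubectl]
    state: present
    disable_excludes: kubernetes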

The cluster nodes also need a container runtime; we will use Docker. Docker is an open platform for developing, shipping, and running applications. Docker enables you to separate your applications from your infrastructure so you can deliver software quickly. With Docker, you can manage your infrastructure in the same way you manage your applications.

Now, for running the control plane, Kubernetes needs several separate containers. We can either download the images one by one manually or directly use the command below to pull all the necessary images:

kubeadm config images pull
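In the playbook, this can be wrapped in a simple command task, for example along these lines:

- name: Pull the Kubernetes control-plane images
  command: kubeadm config images pull
  changed_when: false          # pulling already-cached images is a no-op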

One of the requirements is to set the Docker cgroup driver to systemd (by default the driver is cgroupfs; you can check it using the docker info command). Use the steps below to do the same.

Open the file /etc/docker/daemon.json and paste the content below:

vim /etc/docker/daemon.json

{
  "exec-opts": ["native.cgroupdriver=systemd"]
}

After making the changes, restart the Docker service:

systemctl restart docker
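In Ansible, the same change can be sketched with a copy task plus a conditional restart (the task names are illustrative):

- name: Set the Docker cgroup driver to systemd
  copy:
    content: |
      {
        "exec-opts": ["native.cgroupdriver=systemd"]
      }
    dest: /etc/docker/daemon.json
  register: daemon_json

- name: Restart Docker if the config changed
  service:
    name: docker
    state: restarted
  when: daemon_json.changed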

One more requirement for networking is the iproute-tc package, which we can install using the yum command:

yum install iproute-tc -y

Now we also need to specify the range of IP addresses we want to assign to the pods or containers. Also, because we are doing the setup on top of AWS, we might not have enough RAM or CPU in the system, so we can suppress these warnings/errors using the --ignore-preflight-errors flag:

kubeadm init --pod-network-cidr=10.240.0.0/16 --ignore-preflight-errors=NumCPU --ignore-preflight-errors=Mem
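A playbook equivalent, sketched with a creates: guard so the cluster is not re-initialized on every run:

- name: Initialize the control plane, skipping the RAM/CPU preflight checks
  command: >
    kubeadm init --pod-network-cidr=10.240.0.0/16
    --ignore-preflight-errors=NumCPU
    --ignore-preflight-errors=Mem
  args:
    creates: /etc/kubernetes/admin.conf   # skip if already initialized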

Now the system is configured as a master.

Normally a separate client machine runs the kubectl commands against the master, but just for testing, we can make the master itself the client/user. Right now, running the kubectl command will fail even though the kubectl software is already installed, because the client needs to know where the master is running: the master's IP, the API server's port number, and the credentials of the cluster. To use this cluster as a normal user, copy the files below into the HOME location; they contain all the credentials of the master node.

mkdir -p $HOME/.kube

sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

sudo chown $(id -u):$(id -g) $HOME/.kube/config
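As an Ansible sketch (assuming the play runs as the same user who will later call kubectl):

- name: Copy the admin kubeconfig so kubectl can reach the cluster
  shell: |
    mkdir -p $HOME/.kube
    cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    chown $(id -u):$(id -g) $HOME/.kube/config
  args:
    creates: "{{ ansible_env.HOME }}/.kube/config"   # skip if already configured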

Now the system is also configured as a client.

To configure the pod network, just run the command below to apply the flannel configuration:

kubectl apply -f raw.githubusercontent.com/coreos/flannel/ma..

To generate a token so that worker nodes can join:

kubeadm token create --print-join-command
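In a playbook, the join command can be captured with register and printed (or passed on to the worker hosts); a sketch:

- name: Generate the join command for the worker nodes
  command: kubeadm token create --print-join-command
  register: join_cmd

- name: Show the join command
  debug:
    var: join_cmd.stdout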

Let's see how all of the above commands look inside an Ansible playbook.

Playbook for configuring the master node
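The original post shows this playbook as a screenshot; below is a condensed, illustrative sketch of how the tasks described above fit together into one play (the host group, task file names, and the flannel_url variable are assumptions, not the post's exact code):

# kubernetes_master.yml - illustrative assembly of the master-node steps
- hosts: master
  become: yes
  tasks:
    - import_tasks: tasks/kube_repo.yml        # yum repo + docker/kubeadm/kubelet/kubectl

    - name: Start and enable Docker and kubelet
      service:
        name: "{{ item }}"
        state: started
        enabled: yes
      loop:
        - docker
        - kubelet

    - import_tasks: tasks/docker_cgroup.yml    # systemd cgroup driver + restart

    - name: Install iproute-tc for pod networking
      yum:
        name: iproute-tc
        state: present

    - import_tasks: tasks/kubeadm_init.yml     # image pull, kubeadm init, kubeconfig

    - name: Apply the flannel network add-on
      command: kubectl apply -f {{ flannel_url }}   # set flannel_url to the kube-flannel manifest

    - import_tasks: tasks/join_token.yml       # kubeadm token create --print-join-command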

Worker node configuration

There is not much difference between configuring a worker node and the master node; we use mostly the same steps.

On the worker nodes, we need to do the same setup as on the master, except for a few commands, such as pulling the control-plane images, which is not required on a worker node.

sudo su - root

yum install docker -y

systemctl enable docker --now

vi /etc/yum.repos.d/kubernetes.repo

[kubernetes]

name=Kubernetes

baseurl=packages.cloud.google.com/yum/repos/kuberne..

enabled=1

gpgcheck=1

repo_gpgcheck=1

gpgkey=packages.cloud.google.com/yum/doc/yum-key.gpg packages.cloud.google.com/yum/doc/rpm-packa..

exclude=kubelet kubeadm kubectl

yum install kubeadm --disableexcludes=kubernetes

systemctl enable kubelet --now

vim /etc/docker/daemon.json

{
  "exec-opts": ["native.cgroupdriver=systemd"]
}

systemctl restart docker

yum install iproute-tc -y

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf

net.bridge.bridge-nf-call-ip6tables = 1

net.bridge.bridge-nf-call-iptables = 1

EOF

sudo sysctl --system

Let's see the playbook for the slave (worker) node.
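This playbook is also shown as a screenshot in the original post; here is an illustrative sketch of the same steps in Ansible form. The inventory name master_node and the join_cmd variable registered in the master play are assumptions for this sketch:

# kubernetes_slave.yml - illustrative worker-node play
- hosts: workers
  become: yes
  tasks:
    - name: Install Docker
      yum:
        name: docker
        state: present

    - name: Start and enable Docker
      service:
        name: docker
        state: started
        enabled: yes

    - import_tasks: tasks/kube_repo.yml   # same Kubernetes repo task as on the master

    - name: Install kubeadm (pulls in kubelet as a dependency)
      yum:
        name: kubeadm
        state: present
        disable_excludes: kubernetes

    - name: Start and enable kubelet
      service:
        name: kubelet
        state: started
        enabled: yes

    - name: Use the systemd cgroup driver for Docker
      copy:
        content: '{ "exec-opts": ["native.cgroupdriver=systemd"] }'
        dest: /etc/docker/daemon.json
      register: daemon_json

    - name: Restart Docker if the config changed
      service:
        name: docker
        state: restarted
      when: daemon_json.changed

    - name: Install iproute-tc
      yum:
        name: iproute-tc
        state: present

    - name: Let bridged traffic pass through iptables
      copy:
        content: |
          net.bridge.bridge-nf-call-ip6tables = 1
          net.bridge.bridge-nf-call-iptables = 1
        dest: /etc/sysctl.d/k8s.conf

    - name: Reload sysctl settings
      command: sysctl --system

    - name: Join the cluster using the command captured on the master
      command: "{{ hostvars['master_node'].join_cmd.stdout }}"
      args:
        creates: /etc/kubernetes/kubelet.conf   # skip if already joined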

Run the Ansible playbook to set up the Kubernetes cluster

After the playbook runs successfully, you can see that the instances launched successfully.
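For instance, with an inventory like the sketch below (the group names, host names, and IPs are placeholders, not the post's exact files), a single command brings up the whole cluster:

# inventory.yml - illustrative YAML inventory
all:
  children:
    master:
      hosts:
        master_node:
          ansible_host: <master-public-ip>
    workers:
      hosts:
        worker1:
          ansible_host: <worker-public-ip>

ansible-playbook -i inventory.yml kubernetes.yml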

Playbook for running WordPress and exposing it (I run this playbook at the time of running the kubernetes.yml file).
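The WordPress playbook also appears as a screenshot in the original; below is a minimal sketch of the idea, driving kubectl from the master. The deployment name mywp is made up for this sketch, and the real playbook may also set up a MySQL database:

# wordpress.yml - illustrative sketch
- hosts: master
  tasks:
    - name: Launch WordPress
      command: kubectl create deployment mywp --image=wordpress
      ignore_errors: yes            # tolerate "already exists" on re-runs

    - name: Expose WordPress on a NodePort
      command: kubectl expose deployment mywp --type=NodePort --port=80
      ignore_errors: yes

    - name: Get the service and its assigned port
      command: kubectl get svc mywp
      register: svc

    - name: Print the service details
      debug:
        var: svc.stdout_lines

Browsing to any node's public IP on the assigned NodePort should then show the WordPress setup page.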

We can see the WordPress page, which confirms that the Kubernetes cluster launched successfully and WordPress is running.


Ansible-Galaxy Roles:-

GitHub Link: https://github.com/nehal689/Kubernetes_Cluster_Using_ansible.git

That's all for this blog. Stay tuned for more Kubernetes and Ansible tutorials and more such tech, and make sure to subscribe to our newsletter.

Thank you for reading :)

#Happy Reading!!

Any queries and suggestions are always welcome. - Nehal Ingole