Prerequisites
Launching a Kubernetes cluster with kubeadm
requires several prerequisites to ensure a smooth setup. Here's a comprehensive list:
1. Hardware Requirements:
Control Plane (Master) Node(s):
2 CPU cores
2 GB RAM (more recommended for production)
10 GB free disk space
Preferably t2.medium or higher
Worker Node(s):
1 CPU core
1 GB RAM (more recommended for production)
10 GB free disk space
Preferably t2.micro or higher
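You can quickly verify that a node meets these minimums with standard Linux tools (a sketch; the thresholds in the comments match the control plane requirements above):

```shell
# Number of CPU cores (control plane needs at least 2)
nproc

# Total RAM (control plane needs at least 2 GB)
free -h

# Free disk space on the root filesystem (at least 10 GB recommended)
df -h /
```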
2. Software Requirements:
Operating System:
A Linux distribution with a stable release (e.g., Ubuntu, CentOS, Debian)
We are using Ubuntu for this demo
Network Configuration:
Unique hostname, MAC address, and product_uuid for every node.
Full network connectivity between all machines in the cluster (public or private network).
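To check that each node really has a unique hostname, MAC address, and product_uuid, run the following on every node and compare the outputs (a sketch; reading product_uuid typically requires root):

```shell
# Hostname of this node
hostname

# MAC addresses of the network interfaces
ip link show

# Hardware product UUID (must differ on every node)
sudo cat /sys/class/dmi/id/product_uuid
```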
Firewall Configuration:
- Allow the required ports (e.g., 6443, 2379-2380, 10250, 10257, and 10259 on the control plane; 10250 and 30000-32767 on worker nodes).
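On Ubuntu with ufw, opening the control plane ports might look like the following (a sketch; current Kubernetes releases use 10257 for the controller manager and 10259 for the scheduler, so adapt the list to your version and topology):

```shell
# Kubernetes API server
sudo ufw allow 6443/tcp
# etcd server client API
sudo ufw allow 2379:2380/tcp
# Kubelet API
sudo ufw allow 10250/tcp
# kube-controller-manager
sudo ufw allow 10257/tcp
# kube-scheduler
sudo ufw allow 10259/tcp
```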
Swap:
- Disable swap on all nodes (
sudo swapoff -a
). To keep swap disabled across reboots, also comment out any swap entries in /etc/fstab.
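A common way to make the swap change survive reboots is to comment out the swap line in /etc/fstab (a sketch; review your fstab before editing it with sed):

```shell
# Turn swap off for the current boot
sudo swapoff -a

# Comment out any swap entries so swap stays off after reboot
sudo sed -i '/ swap / s/^\(.*\)$/#\1/' /etc/fstab
```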
Installing a container runtime
In this hands-on, we will install containerd as the container runtime. Before installing it, we need to:
Enable IPv4 packet forwarding
To manually enable IPv4 packet forwarding:
# sysctl params required by setup, params persist across reboots
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
EOF
# Apply sysctl params without reboot
sudo sysctl --system
Verify that net.ipv4.ip_forward is set to 1 with:
sysctl net.ipv4.ip_forward
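Note that many containerd setup guides also load the overlay and br_netfilter kernel modules before setting the sysctl parameters; a hedged sketch of that optional step:

```shell
# Load the modules now and on every boot
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

sudo modprobe overlay
sudo modprobe br_netfilter
```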
To install containerd:
# Add Docker's official GPG key:
sudo apt-get update
sudo apt-get install ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc
# Add the repository to Apt sources:
echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
$(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
sudo apt-get install containerd.io
Now, we need to configure the cgroup driver for containerd. I suggest auto-generating the default configuration file, setting SystemdCgroup to true, and finally restarting the containerd service:
containerd config default | sudo tee /etc/containerd/config.toml
vim /etc/containerd/config.toml
[edit]
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
...
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
SystemdCgroup = true
sudo systemctl restart containerd
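After the restart, it is worth confirming that containerd is healthy and that the cgroup setting actually took effect (a sketch):

```shell
# containerd should be active (running)
sudo systemctl status containerd --no-pager

# Confirm the systemd cgroup driver is enabled in the config
grep SystemdCgroup /etc/containerd/config.toml
```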
Installing kubeadm, kubelet, and kubectl
Now, we need to install kubeadm, kubelet, and kubectl. Follow the steps to do so:
sudo apt-get update
# apt-transport-https may be a dummy package; if so, you can skip that package
sudo apt-get install -y apt-transport-https ca-certificates curl gpg
# If the directory `/etc/apt/keyrings` does not exist, it should be created before the curl command, read the note below.
# sudo mkdir -p -m 755 /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
# This overwrites any existing configuration in /etc/apt/sources.list.d/kubernetes.list
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
sudo systemctl enable --now kubelet
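You can confirm the three tools were installed and that the packages are pinned (a sketch; the exact version output will depend on the repository you configured):

```shell
kubeadm version
kubelet --version
kubectl version --client

# kubelet, kubeadm, and kubectl should show as held back
apt-mark showhold
```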
NOTE: kubeadm does not install or manage kubelet (or kubectl) on the nodes, so they need to be installed individually; kubeadm does, however, install all the other control plane components for you.
Creating the cluster
Next, we must create and initialize the cluster and start using it.
Perform these commands only on the control plane node.
sudo kubeadm init --apiserver-advertise-address <private-ip of the control plane instance> --pod-network-cidr 10.244.0.0/16
sudo kubeadm init
: This initializes the Kubernetes control plane (master) node.
--apiserver-advertise-address <private-ip of the control plane instance>
: This specifies the IP address that the API server will advertise to other nodes. Replace <private-ip of the control plane instance> with the actual private IP address of your control plane node.
--pod-network-cidr 10.244.0.0/16
: This specifies the CIDR for the pod network. It is required for certain network plugins (e.g., Flannel) to allocate IP addresses for the pods.
Once the K8s control plane is successfully initialized, we are provided with the details of further steps. Firstly, we must copy the admin configuration file from the Kubernetes directory to the .kube directory and provide it with the necessary permissions.
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
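With the kubeconfig in place, kubectl should now be able to reach the API server (a sketch; the node will report NotReady until a pod network is installed):

```shell
kubectl get nodes
kubectl cluster-info
```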
At this point, you can run "kubectl get pods -A" to get the status of the pods. You should notice that every control plane component is running except CoreDNS, whose pods stay pending because we have not yet configured the pod network.
We can also make the worker nodes join the cluster before configuring the network, by running the "kubeadm join" command printed by the control plane during initialization.
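The join command printed by kubeadm init has the following shape (the token and hash below are placeholders, not real values); if you lose it, the control plane can print a fresh one:

```shell
# Run on each worker node (values come from the kubeadm init output)
sudo kubeadm join <private-ip-of-control-plane>:6443 \
  --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash>

# Run on the control plane to regenerate the full join command
kubeadm token create --print-join-command
```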
We should now deploy a Pod network to the cluster. I am using Weave Net, but you can also choose another CNI from the list of addons provided by the link.
kubectl apply -f https://reweave.azurewebsites.net/k8s/v1.30/net.yaml
We see that 2 new Weave Net pods are being initialized, which handle the pod networking. Weave Net is deployed as a DaemonSet, so every node that joins the cluster gets a Weave Net pod for communication.
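You can watch the Weave Net rollout with the commands below (a sketch; the name=weave-net label is the one used by the standard Weave Net manifest):

```shell
kubectl get daemonset -n kube-system
kubectl get pods -n kube-system -l name=weave-net -o wide
```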
Finally, once the pods are initialized, we can schedule workloads on the cluster.
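As a quick smoke test (a sketch using the public nginx image), schedule a pod and check that it is assigned to a node:

```shell
kubectl run nginx --image=nginx
kubectl get pods -o wide
```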