A simple, opinionated, highly available Kubernetes cluster deployed on libvirt.
API server connections for kube-controller-manager and the scheduler can also be made highly available; this is toggled with the `kcm_scheduler_with_ha_apiserver_connection` variable in `vars.yaml` (WARNING: setting it to `true` can cause problems with upgrades).
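For reference, the toggle is a plain boolean in `vars.yaml`; a minimal sketch (the default value shown here is an assumption):

```yaml
# vars.yaml
# When true, kube-controller-manager and kube-scheduler reach the API
# server through the HA endpoint instead of their local instance.
# WARNING: true can cause problems with upgrades.
kcm_scheduler_with_ha_apiserver_connection: false
```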
- Install Ansible requirements: `ansible-galaxy collection install -r requirements.yml`
- Copy the `hosts.example` file into the `hosts` file.
- Create the `kubeha` network with `ansible-playbook base/network-init.yml`
- Prepare the base VM:
  - Install Fedora or Fedora Rawhide and name it `fedora-base`. Select the `kubeha` network as the source of the VM's NIC.
  - Input the base VM name and IP address into the `hosts` file.
  - Start the base VM and create ssh keys: `ansible-playbook base/base-vm-start.yml`
  - Enable sshd and permit root ssh access to the VM.
  - Copy the public key into the VM: `ssh-copy-id -o "StrictHostKeyChecking=no" -o "UserKnownHostsFile=/dev/null" -i auth/id_rsa root@${BASE_VM_IP}`
  - Prepare the base VM and turn it off: `ansible-playbook base/base-vm-prepare.yml`
- Clone `fedora-base` into as many masters and workers as desired via virt-manager.
- Start all the VMs to obtain generated IP addresses.
- Insert the VM names and IP addresses into the `hosts` file.
- Optionally regenerate the ssh host keys in all the VMs: `rm /etc/ssh/ssh_host_* && ssh-keygen -A && systemctl restart sshd`
- Inspect `vars.yaml` for any customization.
- Install the DNS servers: `ansible-playbook install-dns.yml`
- Install the cluster: `ansible-playbook install-cluster.yml`
- Either add the `dns` group IPs from the `./hosts` file as your DNS servers, or add the following entry to your hosts file: `echo '192.168.150.2 api-kube.kubeha.knet' >> /etc/hosts`
- Use the `./lifecycle` and `./cluster` scripts to manage the cluster.
Run `ansible-playbook cluster/upgrade-to-latest.yml` to upgrade the cluster, the system, and its packages to the latest versions.
This option only upgrades or downgrades the Kubernetes packages, not the whole system. There is no guarantee that this will work.
- Set the `k8s_version` variable in `vars.yaml` to the desired Kubernetes version.
- Run `ansible-playbook cluster/upgrade-downgrade-to-version.yml` to upgrade or downgrade the cluster.
- The `force_upgrade_downgrade` variable can be set to `true` in `vars.yaml` if you encounter errors (e.g. when downgrading).
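Taken together, a forced downgrade to a pinned version might look like this in `vars.yaml` (the version string below is an example, not a tested value):

```yaml
# vars.yaml
k8s_version: "1.28.4"          # example target version (an assumption)
force_upgrade_downgrade: true  # only set when a plain run fails, e.g. on downgrade
```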
Available upgrade versions and preflight validations can be checked by SSHing into a master node and running `kubeadm upgrade plan`.