k0s maintains two Certificate Authorities (CAs) and one public/private key pair:

- the Kubernetes CA,
- the etcd CA (used when k0s manages etcd), and
- the Service Account (SA) key pair.
These CAs are automatically created during cluster initialization and have a default expiration period of 10 years. They are distributed once to all k0s controllers as part of k0s's join process. Replacing them is a manual process, as k0s currently lacks automation for CA renewal.
The following steps describe how to manually replace the Kubernetes CA and the SA key pair: take the cluster down, regenerate the CA and key pair, redistribute them to all nodes, and bring the cluster back online.
Take a backup! Things might go wrong at any level.
Stop k0s on all worker and controller nodes. All the instructions below assume that all k0s nodes are using the default data directory, `/var/lib/k0s`. Please adjust accordingly if you're using a different data directory path.
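How you stop k0s depends on how it was set up. As a minimal sketch, assuming the k0s service was installed with `k0s install` and the nodes are reachable over SSH (the host names below are placeholders, not part of the k0s documentation):

```sh
# Stop the k0s service on every controller and worker node (run as root).
# controller-1 ... worker-2 are example host names; substitute your own.
for host in controller-1 controller-2 controller-3 worker-1 worker-2; do
    ssh root@"$host" 'k0s stop'
done
```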
Delete the Kubernetes CA and SA key pair files from all the controller data directories (a removal sketch follows the list):

- `/var/lib/k0s/pki/ca.crt`
- `/var/lib/k0s/pki/ca.key`
- `/var/lib/k0s/pki/sa.pub`
- `/var/lib/k0s/pki/sa.key`
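For example, on each controller, assuming the default data directory:

```sh
# Run on every controller node: remove the old Kubernetes CA and SA key pair
# so that k0s regenerates them on the next start.
rm -f /var/lib/k0s/pki/ca.crt \
      /var/lib/k0s/pki/ca.key \
      /var/lib/k0s/pki/sa.pub \
      /var/lib/k0s/pki/sa.key
```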
Delete the kubelet's kubeconfig file and the kubelet's PKI directory from all worker data directories. Note that this includes controllers that have been started with the `--enable-worker` flag (a removal sketch follows the list):

- `/var/lib/k0s/kubelet.conf`
- `/var/lib/k0s/kubelet/pki`
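For example, on each worker (and on each controller running with `--enable-worker`):

```sh
# Remove the kubelet's kubeconfig and certificate directory so the kubelet
# re-bootstraps its client certificates against the new CA.
rm -f /var/lib/k0s/kubelet.conf
rm -rf /var/lib/k0s/kubelet/pki
```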
Choose one controller as the "first" one. Restart k0s on the first controller. If this controller is running with the `--enable-worker` flag, you should reboot the machine instead. This will ensure that all processes and pods will be cleanly restarted. After the restart, k0s will have regenerated a new Kubernetes CA and SA key pair.
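Since k0s was stopped earlier, restarting here amounts to starting the service again. A sketch, assuming a controller without `--enable-worker` whose service was installed with `k0s install`:

```sh
# On the first controller (stopped earlier): start the k0s service again,
# then confirm that a fresh CA and SA key pair have been generated.
k0s start
ls -l /var/lib/k0s/pki/ca.crt /var/lib/k0s/pki/ca.key \
      /var/lib/k0s/pki/sa.pub /var/lib/k0s/pki/sa.key
```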
Distribute the new CA and SA key pair to the other controllers: copy over the following files from the first controller to each of the remaining controllers (see the sketch after the list):

- `/var/lib/k0s/pki/ca.crt`
- `/var/lib/k0s/pki/ca.key`
- `/var/lib/k0s/pki/sa.pub`
- `/var/lib/k0s/pki/sa.key`
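For example, using `scp` from the first controller (controller-2 is a placeholder; repeat for every remaining controller):

```sh
# Run on the first controller. Copy the new CA and SA key pair into the PKI
# directory of each remaining controller.
scp /var/lib/k0s/pki/ca.crt /var/lib/k0s/pki/ca.key \
    /var/lib/k0s/pki/sa.pub /var/lib/k0s/pki/sa.key \
    root@controller-2:/var/lib/k0s/pki/
```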
After copying the files, the new CA and SA key pair are in place. Restart k0s on the other controllers. For controllers running with the `--enable-worker` flag, reboot the machines instead.
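An optional sanity check is to verify that all controllers now share the same CA, for instance by comparing checksums (host names are placeholders):

```sh
# The CA certificate should be byte-for-byte identical on every controller.
for host in controller-1 controller-2 controller-3; do
    ssh root@"$host" 'sha256sum /var/lib/k0s/pki/ca.crt'
done
```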
Rejoin all workers. The easiest way to do this is to use a `kubelet-bootstrap.conf` file. You can generate such a file on a controller like this (see the section on join tokens for details):

```sh
# A k0s join token is a base64-encoded, gzip-compressed kubeconfig;
# decode it into a bootstrap kubeconfig file with restricted permissions.
touch /tmp/rejoin-token &&
  chmod 0600 /tmp/rejoin-token &&
  k0s token create --expiry 1h |
  base64 -d |
  gunzip >/tmp/rejoin-token
```
Copy that token to each worker node and place it at `/var/lib/k0s/kubelet-bootstrap.conf`. Then reboot the machine.
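For example (worker-1 is a placeholder; repeat for every worker node):

```sh
# Run on the controller that generated /tmp/rejoin-token.
scp /tmp/rejoin-token root@worker-1:/var/lib/k0s/kubelet-bootstrap.conf
ssh root@worker-1 'chmod 0600 /var/lib/k0s/kubelet-bootstrap.conf && reboot'
```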
When all workers are back online, the `kubelet-bootstrap.conf` files can be safely removed from the workers. You can also invalidate the token so you don't have to wait for it to expire: use `k0s token list --role worker` to list all tokens and `k0s token invalidate <token-id>` to invalidate them immediately.
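As a sketch of the cleanup:

```sh
# On each worker, once it has rejoined the cluster:
rm -f /var/lib/k0s/kubelet-bootstrap.conf

# On a controller: list worker tokens and invalidate the rejoin token.
# Replace <token-id> with the ID shown by `k0s token list`.
k0s token list --role worker
k0s token invalidate <token-id>
```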