k8s/log.log
PLAY [Install Kubernetes packages] *********************************************
TASK [Gathering Facts] *********************************************************
ok: [k8s-control-plane]
ok: [k8s-worker1]
ok: [k8s-worker2]
TASK [Ensure apt cache is updated] *********************************************
ok: [k8s-worker1]
ok: [k8s-worker2]
ok: [k8s-control-plane]
TASK [Install required packages] ***********************************************
changed: [k8s-worker1]
changed: [k8s-control-plane]
changed: [k8s-worker2]
TASK [Ensure containerd config directory exists] *******************************
changed: [k8s-worker1]
changed: [k8s-control-plane]
changed: [k8s-worker2]
TASK [Generate default containerd config if missing] ***************************
ok: [k8s-worker2]
ok: [k8s-control-plane]
ok: [k8s-worker1]
TASK [Write default containerd config] *****************************************
changed: [k8s-worker1]
changed: [k8s-worker2]
changed: [k8s-control-plane]
TASK [Set SystemdCgroup = true in containerd config] ***************************
changed: [k8s-control-plane]
changed: [k8s-worker2]
changed: [k8s-worker1]
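
The two containerd tasks above amount to dumping the stock config and flipping one flag. A minimal sketch of how they are commonly written (the shell command, lineinfile regexp, and handler name are assumptions, not taken from this playbook):

- name: Write default containerd config
  ansible.builtin.shell: containerd config default > /etc/containerd/config.toml
  # 'containerd config default' prints the stock TOML config to stdout

- name: Set SystemdCgroup = true in containerd config
  ansible.builtin.lineinfile:
    path: /etc/containerd/config.toml
    regexp: '^(\s*)SystemdCgroup\s*=.*$'
    line: '\1SystemdCgroup = true'
    backrefs: true
  notify: Restart containerd   # assumption: the restart below is wired up as a handler

SystemdCgroup = true makes containerd's runc use the systemd cgroup driver, matching the kubelet's default.
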
TASK [Restart containerd] ******************************************************
changed: [k8s-worker2]
changed: [k8s-control-plane]
changed: [k8s-worker1]
TASK [Enable IPv4 forwarding at runtime] ***************************************
changed: [k8s-control-plane]
changed: [k8s-worker1]
changed: [k8s-worker2]
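
kubeadm's preflight checks require net.ipv4.ip_forward=1 on every node; the runtime task above most plausibly maps to the sysctl module (a sketch, the module choice is an assumption):

- name: Enable IPv4 forwarding at runtime
  ansible.posix.sysctl:
    name: net.ipv4.ip_forward
    value: '1'
    sysctl_set: true    # also set the value in the running kernel
    state: present      # persist it for subsequent boots
    reload: true
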
TASK [Create /etc/apt/keyrings directory] **************************************
ok: [k8s-control-plane]
ok: [k8s-worker2]
ok: [k8s-worker1]
TASK [Download and save the Kubernetes APT key] ********************************
changed: [k8s-worker1]
changed: [k8s-control-plane]
changed: [k8s-worker2]
TASK [Add Kubernetes APT repository] *******************************************
changed: [k8s-worker1]
changed: [k8s-worker2]
changed: [k8s-control-plane]
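
The key and repository tasks follow the upstream pkgs.k8s.io layout; v1.33 matches the kubeadm version initialized further down. A sketch (the exact task wording is an assumption):

- name: Download and save the Kubernetes APT key
  ansible.builtin.shell: |
    curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.33/deb/Release.key \
      | gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
  args:
    creates: /etc/apt/keyrings/kubernetes-apt-keyring.gpg

- name: Add Kubernetes APT repository
  ansible.builtin.apt_repository:
    repo: "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.33/deb/ /"
    state: present
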
TASK [Update apt cache after adding Kubernetes repo] ***************************
changed: [k8s-worker2]
changed: [k8s-control-plane]
changed: [k8s-worker1]
TASK [Install Kubernetes components] *******************************************
changed: [k8s-worker1]
changed: [k8s-control-plane]
changed: [k8s-worker2]
TASK [Hold Kubernetes packages] ************************************************
changed: [k8s-worker1]
changed: [k8s-control-plane]
changed: [k8s-worker2]
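
Holding the packages keeps apt (and unattended-upgrades) from moving kubeadm, kubelet, and kubectl to a new minor version behind the cluster's back. With Ansible this is usually dpkg_selections (a sketch; the package list is assumed):

- name: Hold Kubernetes packages
  ansible.builtin.dpkg_selections:
    name: "{{ item }}"
    selection: hold
  loop:
    - kubeadm
    - kubelet
    - kubectl
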
TASK [Enable and start kubelet] ************************************************
changed: [k8s-worker2]
changed: [k8s-control-plane]
changed: [k8s-worker1]
PLAY [Setup Longhorn Requirements] *********************************************
TASK [Gathering Facts] *********************************************************
ok: [k8s-worker2]
ok: [k8s-control-plane]
ok: [k8s-worker1]
TASK [Install required packages] ***********************************************
changed: [k8s-worker1]
changed: [k8s-worker2]
changed: [k8s-control-plane]
TASK [Ensure iscsi_tcp kernel module is loaded] ********************************
changed: [k8s-worker2]
changed: [k8s-control-plane]
changed: [k8s-worker1]
TASK [Ensure dm_crypt kernel module is loaded] *********************************
changed: [k8s-worker2]
changed: [k8s-worker1]
changed: [k8s-control-plane]
TASK [Ensure iscsi_tcp module loads on boot] ***********************************
changed: [k8s-worker1]
changed: [k8s-control-plane]
changed: [k8s-worker2]
TASK [Ensure dm_crypt module loads on boot] ************************************
changed: [k8s-worker2]
changed: [k8s-worker1]
changed: [k8s-control-plane]
TASK [Disable and stop multipathd service] *************************************
changed: [k8s-control-plane]
changed: [k8s-worker1]
changed: [k8s-worker2]
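
Longhorn needs the iscsi_tcp module (its volumes attach over open-iscsi) and dm_crypt for encrypted volumes, and the Longhorn docs recommend disabling multipathd because it can claim Longhorn's block devices. A sketch of the three patterns used above (module and file names are assumptions):

- name: Ensure iscsi_tcp kernel module is loaded
  community.general.modprobe:
    name: iscsi_tcp
    state: present

- name: Ensure iscsi_tcp module loads on boot
  ansible.builtin.copy:
    content: "iscsi_tcp\n"
    dest: /etc/modules-load.d/iscsi_tcp.conf

- name: Disable and stop multipathd service
  ansible.builtin.systemd:
    name: multipathd
    enabled: false
    state: stopped
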
PLAY RECAP *********************************************************************
k8s-control-plane : ok=23 changed=18 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
k8s-worker1 : ok=23 changed=18 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
k8s-worker2 : ok=23 changed=18 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
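
The init log that follows was presumably produced by a command along these lines; the advertise address comes from the certificate SANs below, while the pod CIDR is an assumption (192.168.0.0/16 is the default in Calico's stock manifest):

- name: Initialize the control plane
  ansible.builtin.command: >
    kubeadm init
    --apiserver-advertise-address=10.50.3.21
    --pod-network-cidr=192.168.0.0/16
  args:
    creates: /etc/kubernetes/admin.conf   # makes the task idempotent on re-runs
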
[init] Using Kubernetes version: v1.33.2
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-control-plane kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.50.3.21]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-control-plane localhost] and IPs [10.50.3.21 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-control-plane localhost] and IPs [10.50.3.21 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 501.576719ms
[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
[control-plane-check] Checking kube-apiserver at https://10.50.3.21:6443/livez
[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
[control-plane-check] kube-controller-manager is healthy after 2.567736516s
[control-plane-check] kube-scheduler is healthy after 3.83121548s
[control-plane-check] kube-apiserver is healthy after 6.002458047s
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-control-plane as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node k8s-control-plane as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: o95x90.u0mpm7siuyqgi3jz
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 10.50.3.21:6443 --token o95x90.u0mpm7siuyqgi3jz \
--discovery-token-ca-cert-hash sha256:f9095c75e5f29b548dec3d40304d02a7d4789189a5065f64df1661fb75ff8cdd
poddisruptionbudget.policy/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
serviceaccount/calico-node created
serviceaccount/calico-cni-plugin created
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgpfilters.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/stagedglobalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/stagedkubernetesnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/stagednetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/tiers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/adminnetworkpolicies.policy.networking.k8s.io created
customresourcedefinition.apiextensions.k8s.io/baselineadminnetworkpolicies.policy.networking.k8s.io created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrole.rbac.authorization.k8s.io/calico-cni-plugin created
clusterrole.rbac.authorization.k8s.io/calico-tier-getter created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-cni-plugin created
clusterrolebinding.rbac.authorization.k8s.io/calico-tier-getter created
daemonset.apps/calico-node created
deployment.apps/calico-kube-controllers created
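
The Calico objects above are what the stock manifest creates; the apply step presumably looked something like this (the version variable is a placeholder, not taken from the log):

- name: Install Calico CNI
  ansible.builtin.command: >
    kubectl apply -f
    https://raw.githubusercontent.com/projectcalico/calico/{{ calico_version }}/manifests/calico.yaml
  environment:
    KUBECONFIG: /etc/kubernetes/admin.conf
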
Joining k8s-worker1
[preflight] Running pre-flight checks
[preflight] Reading configuration from the "kubeadm-config" ConfigMap in namespace "kube-system"...
[preflight] Use 'kubeadm init phase upload-config --config your-config-file' to re-upload it.
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 1.001473987s
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
Joining k8s-worker2
[preflight] Running pre-flight checks
[preflight] Reading configuration from the "kubeadm-config" ConfigMap in namespace "kube-system"...
[preflight] Use 'kubeadm init phase upload-config --config your-config-file' to re-upload it.
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 500.681999ms
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
"metallb" has been added to your repositories
"traefik" has been added to your repositories
"jetstack" has been added to your repositories
"longhorn" has been added to your repositories
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "metallb" chart repository
...Successfully got an update from the "longhorn" chart repository
...Successfully got an update from the "traefik" chart repository
...Successfully got an update from the "jetstack" chart repository
Update Complete. ⎈Happy Helming!⎈
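
The four repositories map to these upstream chart URLs; whether the setup drives plain helm or an Ansible module is an assumption, though the "has been added to your repositories" lines above are helm CLI output:

- name: Add Helm chart repositories
  ansible.builtin.command: helm repo add {{ item.name }} {{ item.url }}
  loop:
    - { name: metallb,  url: https://metallb.github.io/metallb }
    - { name: traefik,  url: https://traefik.github.io/charts }
    - { name: jetstack, url: https://charts.jetstack.io }
    - { name: longhorn, url: https://charts.longhorn.io }
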
NAME: metallb
LAST DEPLOYED: Sun Jun 29 23:53:49 2025
NAMESPACE: metallb-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
MetalLB is now running in the cluster.
Now you can configure it via its CRs. Please refer to the metallb official docs
on how to use the CRs.
Waiting for all the pods to start...
pod/metallb-controller-5754956df6-6cjbg condition met
ipaddresspool.metallb.io/ip-pool created
l2advertisement.metallb.io/l2 created
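
The two MetalLB CRs just created ("ip-pool" and "l2") have this shape; the address range is an assumption and must be a free range on the node subnet:

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: ip-pool
  namespace: metallb-system
spec:
  addresses:
    - 10.50.3.100-10.50.3.120   # assumption: unused addresses on the 10.50.3.0/24 LAN
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - ip-pool
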
NAME: traefik
LAST DEPLOYED: Sun Jun 29 23:55:52 2025
NAMESPACE: traefik
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
traefik with docker.io/traefik:v3.4.1 has been deployed successfully on traefik namespace !
NAME: cert-manager
LAST DEPLOYED: Sun Jun 29 23:55:53 2025
NAMESPACE: cert-manager
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
⚠️ WARNING: New default private key rotation policy for Certificate resources.
The default private key rotation policy for Certificate resources was
changed to `Always` in cert-manager >= v1.18.0.
Learn more in the [1.18 release notes](https://cert-manager.io/docs/releases/release-notes/release-notes-1.18).
cert-manager v1.18.1 has been deployed successfully!
In order to begin issuing certificates, you will need to set up a ClusterIssuer
or Issuer resource (for example, by creating a 'letsencrypt-staging' issuer).
More information on the different types of issuers and how to configure them
can be found in our documentation:
https://cert-manager.io/docs/configuration/
For information on how to configure cert-manager to automatically provision
Certificates for Ingress resources, take a look at the `ingress-shim`
documentation:
https://cert-manager.io/docs/usage/ingress/
secret/cloudflare-api-token-secret created
clusterissuer.cert-manager.io/cloudflare-clusterissuer created
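
The secret and ClusterIssuer created above pair cert-manager with a Cloudflare DNS-01 solver. A sketch using the names from the log (the token, email, and ACME server are placeholders or assumptions):

apiVersion: v1
kind: Secret
metadata:
  name: cloudflare-api-token-secret
  namespace: cert-manager
type: Opaque
stringData:
  api-token: "<cloudflare-api-token>"   # placeholder: a token with Zone:DNS:Edit
---
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: cloudflare-clusterissuer
spec:
  acme:
    email: admin@example.com            # assumption: your ACME account email
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: cloudflare-clusterissuer-account-key
    solvers:
      - dns01:
          cloudflare:
            apiTokenSecretRef:
              name: cloudflare-api-token-secret
              key: api-token
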
NAME: longhorn
LAST DEPLOYED: Sun Jun 29 23:56:13 2025
NAMESPACE: longhorn-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Longhorn is now installed on the cluster!
Please wait a few minutes for other Longhorn components such as CSI deployments, Engine Images, and Instance Managers to be initialized.
Visit our documentation at https://longhorn.io/docs/
Waiting for all the pods to start...
pod/longhorn-driver-deployer-54599c447c-n2p4k condition met
pod/longhorn-manager-f9jlp condition met
pod/longhorn-manager-xxctk condition met
pod/longhorn-ui-6b44759848-jnkpw condition met
pod/longhorn-ui-6b44759848-sltk5 condition met
certificate.cert-manager.io/longhorn-ssl-certificate created
ingress.networking.k8s.io/longhorn-ingress created
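
The last two objects expose the Longhorn UI through traefik with a certificate from the issuer above. A sketch using the names from the log (the hostname is a placeholder; longhorn-frontend is the UI Service the chart installs):

apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: longhorn-ssl-certificate
  namespace: longhorn-system
spec:
  secretName: longhorn-ssl-certificate
  issuerRef:
    name: cloudflare-clusterissuer
    kind: ClusterIssuer
  dnsNames:
    - longhorn.example.com              # placeholder hostname
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: longhorn-ingress
  namespace: longhorn-system
spec:
  ingressClassName: traefik
  tls:
    - hosts:
        - longhorn.example.com
      secretName: longhorn-ssl-certificate
  rules:
    - host: longhorn.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: longhorn-frontend
                port:
                  number: 80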