simplified the variables and removed redundant information

2025-06-30 00:05:33 +02:00
parent d5032d4bb4
commit ab0dd018be
13 changed files with 428 additions and 110 deletions

.gitignore (new file)

@@ -0,0 +1 @@
.env

README.md

@@ -2,7 +2,7 @@
This guide describes how to deploy a Kubernetes cluster tailored to my environment using the provided `redbutton.sh` script.
-Just be sure to edit the `inventory.ini` file for the ansible playbook and the variables in the `init-deploy.sh` script. I think you can scale this out by adding more hosts and adjusting the for loop but I haven't tried it yet. I needed three machines and that is what I worked with.
+Just be sure to edit the variables in the `init-deploy.sh` script. I think you can scale this out by adding more hosts and adjusting the for loop but I haven't tried it yet. I needed three machines and that is what I worked with.
DO NOT FORGET TO SET YOUR CLOUDFLARE API KEY IN THE CERT-MANAGER MANIFEST FILE!!!
@@ -31,9 +31,10 @@ This setup will create a Kubernetes cluster with the following topology and comp
To initiate the deployment, run the provided script:
```bash
-chmod +x init-deploy.sh
+echo "CF_API=YourSuperSecretAPIKey" > .env
+chmod +x helm-deploy.sh
chmod +x redbutton.sh
./redbutton.sh
```
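
The variables the README now points at live at the top of `init-deploy.sh` (its diff is further down in this commit). As a rough, untested illustration of the "add more hosts and adjust the for loop" remark, scaling to a third worker might look like the sketch below; the extra worker is hypothetical:

```bash
# Variables at the top of init-deploy.sh (values as in this commit),
# plus a hypothetical third worker:
KUBE_USER="ubuntu"
PRIVATE_KEY_PATH="$HOME/.ssh/id_ed25519"
KUBE_CP="k8s-control-plane"
KUBE_W1="k8s-worker1"
KUBE_W2="k8s-worker2"
KUBE_W3="k8s-worker3"   # hypothetical extra node

# The join loop at the end of the script would then cover the extra worker too:
for NODE in "$KUBE_W1" "$KUBE_W2" "$KUBE_W3"; do
  echo "Joining $NODE"
  # ...join the node as in the original loop body (not shown in full in this diff)
done
```

The ad-hoc inventory passed to `ansible-playbook` (`-i "$KUBE_CP,$KUBE_W1,$KUBE_W2,"`) would need the extra hostname as well.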

install_cni.yaml (deleted)

@@ -1,14 +0,0 @@
- name: Install Kubernetes CNI
  hosts: k8s_control_plane
  become: true
  tasks:
    - name: Download Calico manifest
      get_url:
        url: https://raw.githubusercontent.com/projectcalico/calico/v3.27.3/manifests/calico.yaml
        dest: /tmp/calico.yaml
        mode: '0644'
    - name: Apply Calico CNI plugin
      shell: kubectl apply -f /tmp/calico.yaml
      environment:
        KUBECONFIG: /etc/kubernetes/admin.conf

inventory.ini (deleted)

@@ -1,10 +0,0 @@
[k8s_control_plane]
k8s-control-plane
[k8s_nodes]
k8s-worker1
k8s-worker2
[all:vars]
ansible_user=kevin
ansible_ssh_private_key_file=~/.ssh/id_rsa

k8s-init.yaml (deleted)

@@ -1,43 +0,0 @@
- name: Initialize Kubernetes Control Plane
  hosts: k8s_control_plane
  become: true
  vars:
    ## This is the default network CIDR of Calico
    pod_network_cidr: "192.168.0.0/16"
    apiserver_advertise_address: "10.50.3.21"
    cri_socket: "unix:///run/containerd/containerd.sock"
    kubeconfig_src: "/etc/kubernetes/admin.conf"
    kubeconfig_dest: "/home/{{ ansible_user }}/.kube/config"
  ## Create the .kube directory in the user's home
  tasks:
    - name: Ensure .kube directory exists
      file:
        path: "/home/{{ ansible_user }}/.kube"
        state: directory
        owner: "{{ ansible_user }}"
        group: "{{ ansible_user }}"
        mode: '0755'
    ## Create the cluster with the parameters defined above
    - name: Run kubeadm init
      command: >
        kubeadm init
        --pod-network-cidr={{ pod_network_cidr }}
        --apiserver-advertise-address={{ apiserver_advertise_address }}
        --cri-socket {{ cri_socket }}
      args:
        creates: /etc/kubernetes/pki/ca.crt
    ## Copy the cluster config so kubectl commands can be run as the regular user
    - name: Copy admin.conf to user's kube config
      copy:
        src: "{{ kubeconfig_src }}"
        dest: "{{ kubeconfig_dest }}"
        remote_src: true
        owner: "{{ ansible_user }}"
        group: "{{ ansible_user }}"
        mode: '0644'

k8s-install-deps.yaml

@@ -98,12 +98,6 @@
    - name: Hold Kubernetes packages
      ansible.builtin.shell: apt-mark hold kubelet kubeadm kubectl
-    - name: Enable and start containerd
-      systemd:
-        name: containerd
-        enabled: true
-        state: started
    ## Enable the kubelet service
    - name: Enable and start kubelet

longhorn-deps.yaml

@@ -49,4 +49,4 @@
        state: stopped
        masked: true
        daemon_reload: yes
-      ignore_errors: true # in case it's not installed
+      ignore_errors: true

master-k8s.yaml

@@ -1,5 +1,3 @@
# master-k8s.yaml
- import_playbook: k8s-install-deps.yaml
-- import_playbook: k8s-init.yaml
-- import_playbook: install_cni.yaml
-- import_playbook: longhorn-reqs.yaml
+- import_playbook: longhorn-deps.yaml

helm-deploy.sh

@@ -1,46 +1,60 @@
#!/bin/bash
-## Setup MetalLb repo
+source .env
+## Setup all the needed repos
helm repo add metallb https://metallb.github.io/metallb
+helm repo add traefik https://traefik.github.io/charts
+helm repo add jetstack https://charts.jetstack.io
+helm repo add longhorn https://charts.longhorn.io
helm repo update
-helm install metallb metallb/metallb -n metallb-system --create-namespace
+## Let's install metallb!
+helm install \
+  metallb metallb/metallb \
+  -n metallb-system \
+  --create-namespace
echo "Waiting for all the pods to start..."
kubectl wait --namespace metallb-system \
  --for=condition=Ready pod \
  --all \
-  --timeout=120s
+  --timeout=200s
-## Apply the load-balancer IPs range
+## Apply the load-balancer IPs
kubectl apply -f manifests/metallb/metallb-config.yaml
-## Setup traefik repo
-helm repo add traefik https://traefik.github.io/charts
-helm repo update
+## Let's install traefik!
helm install \
  traefik traefik/traefik \
  -n traefik \
  -f ./manifests/traefik/values.yaml \
  --create-namespace
-## Setup certmanager repo
-helm repo add jetstack https://charts.jetstack.io
-helm repo update
+## Let's install Cert-Manager!
helm install \
  cert-manager jetstack/cert-manager \
  -n cert-manager \
  -f ./manifests/certmanager/values.yaml \
  --create-namespace
-## Set up the cloudflare API token.
-kubectl apply -f ./manifests/certmanager/issuer-secret.yaml
+## Set up the cloudflare API token secret
+kubectl create secret generic cloudflare-api-token-secret \
+  --from-literal=api-token="$CF_API" \
+  -n=cert-manager
+## Apply the cloudflare Issuer
kubectl apply -f ./manifests/certmanager/issuer.yaml
-## Setup longhorn repo
-helm repo add longhorn https://charts.longhorn.io
-helm repo update
+## Let's install longhorn!
helm install longhorn longhorn/longhorn -n longhorn-system --create-namespace
echo "Waiting for all the pods to start..."
@@ -51,5 +65,6 @@ kubectl wait --namespace longhorn-system \
  --timeout=120s
## Apply ingress rule and certificate
kubectl apply -f manifests/longhorn/certificate.yaml
kubectl apply -f manifests/longhorn/ingress.yaml
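
`helm-deploy.sh` applies two manifests that are not part of this diff: `manifests/metallb/metallb-config.yaml` and `manifests/certmanager/issuer.yaml`. Going by the resource names that show up in `log.log` (`ipaddresspool.metallb.io/ip-pool`, `l2advertisement.metallb.io/l2`, `clusterissuer.cert-manager.io/cloudflare-clusterissuer`), they presumably look roughly like the sketch below; the address range, ACME server, and e-mail are assumptions, not values from this repo:

```bash
# Hypothetical sketch of the two manifests referenced above, applied inline.
# Resource names come from log.log; addresses, e-mail and ACME server are assumed.
kubectl apply -f - <<'EOF'
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: ip-pool
  namespace: metallb-system
spec:
  addresses:
    - 10.50.3.200-10.50.3.220   # assumed LAN range for LoadBalancer services
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - ip-pool
---
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: cloudflare-clusterissuer
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory   # assumed: Let's Encrypt production
    email: you@example.com                                    # assumed ACME account e-mail
    privateKeySecretRef:
      name: cloudflare-clusterissuer-account-key
    solvers:
      - dns01:
          cloudflare:
            apiTokenSecretRef:
              name: cloudflare-api-token-secret   # created by helm-deploy.sh from CF_API
              key: api-token
EOF
```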

init-deploy.sh

@@ -1,23 +1,47 @@
#!/bin/bash
-KUBE_USER="kevin"
+KUBE_USER="ubuntu"
+PRIVATE_KEY_PATH="$HOME/.ssh/id_ed25519"
KUBE_CP="k8s-control-plane"
KUBE_W1="k8s-worker1"
KUBE_W2="k8s-worker2"
ANSIBLE_DIR="ansible"
+POD_NET_CIDR="192.168.0.0/16"
+CNI_VER="v3.30.2"
+API_ADDR=$(getent hosts "$KUBE_CP" | awk '{ print $1 }')
## Prepare all of the nodes with k8s using the ansible playbooks I prepared
-ansible-playbook -i ./"$ANSIBLE_DIR"/inventory.ini ./"$ANSIBLE_DIR"/master-k8s.yaml
+ansible-playbook ./"$ANSIBLE_DIR"/master-k8s.yaml \
+  -i "$KUBE_CP,$KUBE_W1,$KUBE_W2," \
+  -e "k8s-control-plane=["$KUBE_CP"] k8s_nodes=["$KUBE_W1","$KUBE_W2"]" \
+  -u "$KUBE_USER" \
+  --private-key "$PRIVATE_KEY_PATH"
+## Bootstrap the cluster
+ssh "$KUBE_USER@$KUBE_CP" \
+  "sudo kubeadm init \
+  --pod-network-cidr=$POD_NET_CIDR \
+  --apiserver-advertise-address=$API_ADDR \
+  --cri-socket unix:///run/containerd/containerd.sock"
+ssh "$KUBE_USER@$KUBE_CP" \
+  "sudo cp /etc/kubernetes/admin.conf /tmp/config && sudo chown $KUBE_USER:$KUBE_USER /tmp/config"
## Fetch the configuration from the freshly installed cluster. BEWARE THAT ANY EXISTING CONFIG WILL BE OVERWRITTEN
-scp "$KUBE_USER@$KUBE_CP":"$HOME"/.kube/config "$HOME"/.kube/config
+scp "$KUBE_USER@$KUBE_CP":/tmp/config "$HOME"/.kube/config
+## Install the CNI
+kubectl apply -f \
+  "https://raw.githubusercontent.com/projectcalico/calico/$CNI_VER/manifests/calico.yaml"
## Now join the workers to the cluster
-JOIN_TOKEN=$(ssh "$KUBE_USER@$KUBE_CP" "kubeadm token create --print-join-command")
+JOIN_TOKEN=$(ssh "$KUBE_USER@$KUBE_CP" \
+  "sudo kubeadm token create --print-join-command")
for NODE in "$KUBE_W1" "$KUBE_W2"; do
  echo "Joining $NODE"

log.log (new file)

@@ -0,0 +1,360 @@
PLAY [Install Kubernetes packages] *********************************************
TASK [Gathering Facts] *********************************************************
ok: [k8s-control-plane]
ok: [k8s-worker1]
ok: [k8s-worker2]
TASK [Ensure apt cache is updated] *********************************************
ok: [k8s-worker1]
ok: [k8s-worker2]
ok: [k8s-control-plane]
TASK [Install required packages] ***********************************************
changed: [k8s-worker1]
changed: [k8s-control-plane]
changed: [k8s-worker2]
TASK [Ensure containerd config directory exists] *******************************
changed: [k8s-worker1]
changed: [k8s-control-plane]
changed: [k8s-worker2]
TASK [Generate default containerd config if missing] ***************************
ok: [k8s-worker2]
ok: [k8s-control-plane]
ok: [k8s-worker1]
TASK [Write default containerd config] *****************************************
changed: [k8s-worker1]
changed: [k8s-worker2]
changed: [k8s-control-plane]
TASK [Set SystemdCgroup = true in containerd config] ***************************
changed: [k8s-control-plane]
changed: [k8s-worker2]
changed: [k8s-worker1]
TASK [Restart containerd] ******************************************************
changed: [k8s-worker2]
changed: [k8s-control-plane]
changed: [k8s-worker1]
TASK [Enable IPv4 forwarding at runtime] ***************************************
changed: [k8s-control-plane]
changed: [k8s-worker1]
changed: [k8s-worker2]
TASK [Create /etc/apt/keyrings directory] **************************************
ok: [k8s-control-plane]
ok: [k8s-worker2]
ok: [k8s-worker1]
TASK [Download and save the Kubernetes APT key] ********************************
changed: [k8s-worker1]
changed: [k8s-control-plane]
changed: [k8s-worker2]
TASK [Add Kubernetes APT repository] *******************************************
changed: [k8s-worker1]
changed: [k8s-worker2]
changed: [k8s-control-plane]
TASK [Update apt cache after adding Kubernetes repo] ***************************
changed: [k8s-worker2]
changed: [k8s-control-plane]
changed: [k8s-worker1]
TASK [Install Kubernetes components] *******************************************
changed: [k8s-worker1]
changed: [k8s-control-plane]
changed: [k8s-worker2]
TASK [Hold Kubernetes packages] ************************************************
changed: [k8s-worker1]
changed: [k8s-control-plane]
changed: [k8s-worker2]
TASK [Enable and start kubelet] ************************************************
changed: [k8s-worker2]
changed: [k8s-control-plane]
changed: [k8s-worker1]
PLAY [Setup Longhorn Requirements] *********************************************
TASK [Gathering Facts] *********************************************************
ok: [k8s-worker2]
ok: [k8s-control-plane]
ok: [k8s-worker1]
TASK [Install required packages] ***********************************************
changed: [k8s-worker1]
changed: [k8s-worker2]
changed: [k8s-control-plane]
TASK [Ensure iscsi_tcp kernel module is loaded] ********************************
changed: [k8s-worker2]
changed: [k8s-control-plane]
changed: [k8s-worker1]
TASK [Ensure dm_crypt kernel module is loaded] *********************************
changed: [k8s-worker2]
changed: [k8s-worker1]
changed: [k8s-control-plane]
TASK [Ensure iscsi_tcp module loads on boot] ***********************************
changed: [k8s-worker1]
changed: [k8s-control-plane]
changed: [k8s-worker2]
TASK [Ensure dm_crypt module loads on boot] ************************************
changed: [k8s-worker2]
changed: [k8s-worker1]
changed: [k8s-control-plane]
TASK [Disable and stop multipathd service] *************************************
changed: [k8s-control-plane]
changed: [k8s-worker1]
changed: [k8s-worker2]
PLAY RECAP *********************************************************************
k8s-control-plane : ok=23 changed=18 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
k8s-worker1 : ok=23 changed=18 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
k8s-worker2 : ok=23 changed=18 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
[init] Using Kubernetes version: v1.33.2
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-control-plane kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.50.3.21]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-control-plane localhost] and IPs [10.50.3.21 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-control-plane localhost] and IPs [10.50.3.21 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 501.576719ms
[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
[control-plane-check] Checking kube-apiserver at https://10.50.3.21:6443/livez
[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
[control-plane-check] kube-controller-manager is healthy after 2.567736516s
[control-plane-check] kube-scheduler is healthy after 3.83121548s
[control-plane-check] kube-apiserver is healthy after 6.002458047s
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-control-plane as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node k8s-control-plane as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: o95x90.u0mpm7siuyqgi3jz
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 10.50.3.21:6443 --token o95x90.u0mpm7siuyqgi3jz \
--discovery-token-ca-cert-hash sha256:f9095c75e5f29b548dec3d40304d02a7d4789189a5065f64df1661fb75ff8cdd
poddisruptionbudget.policy/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
serviceaccount/calico-node created
serviceaccount/calico-cni-plugin created
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgpfilters.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/stagedglobalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/stagedkubernetesnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/stagednetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/tiers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/adminnetworkpolicies.policy.networking.k8s.io created
customresourcedefinition.apiextensions.k8s.io/baselineadminnetworkpolicies.policy.networking.k8s.io created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrole.rbac.authorization.k8s.io/calico-cni-plugin created
clusterrole.rbac.authorization.k8s.io/calico-tier-getter created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-cni-plugin created
clusterrolebinding.rbac.authorization.k8s.io/calico-tier-getter created
daemonset.apps/calico-node created
deployment.apps/calico-kube-controllers created
Joining k8s-worker1
[preflight] Running pre-flight checks
[preflight] Reading configuration from the "kubeadm-config" ConfigMap in namespace "kube-system"...
[preflight] Use 'kubeadm init phase upload-config --config your-config-file' to re-upload it.
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 1.001473987s
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
Joining k8s-worker2
[preflight] Running pre-flight checks
[preflight] Reading configuration from the "kubeadm-config" ConfigMap in namespace "kube-system"...
[preflight] Use 'kubeadm init phase upload-config --config your-config-file' to re-upload it.
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 500.681999ms
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
"metallb" has been added to your repositories
"traefik" has been added to your repositories
"jetstack" has been added to your repositories
"longhorn" has been added to your repositories
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "metallb" chart repository
...Successfully got an update from the "longhorn" chart repository
...Successfully got an update from the "traefik" chart repository
...Successfully got an update from the "jetstack" chart repository
Update Complete. ⎈Happy Helming!⎈
NAME: metallb
LAST DEPLOYED: Sun Jun 29 23:53:49 2025
NAMESPACE: metallb-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
MetalLB is now running in the cluster.
Now you can configure it via its CRs. Please refer to the metallb official docs
on how to use the CRs.
Waiting for all the pods to start...
pod/metallb-controller-5754956df6-6cjbg condition met
ipaddresspool.metallb.io/ip-pool created
l2advertisement.metallb.io/l2 created
NAME: traefik
LAST DEPLOYED: Sun Jun 29 23:55:52 2025
NAMESPACE: traefik
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
traefik with docker.io/traefik:v3.4.1 has been deployed successfully on traefik namespace !
NAME: cert-manager
LAST DEPLOYED: Sun Jun 29 23:55:53 2025
NAMESPACE: cert-manager
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
⚠️ WARNING: New default private key rotation policy for Certificate resources.
The default private key rotation policy for Certificate resources was
changed to `Always` in cert-manager >= v1.18.0.
Learn more in the [1.18 release notes](https://cert-manager.io/docs/releases/release-notes/release-notes-1.18).
cert-manager v1.18.1 has been deployed successfully!
In order to begin issuing certificates, you will need to set up a ClusterIssuer
or Issuer resource (for example, by creating a 'letsencrypt-staging' issuer).
More information on the different types of issuers and how to configure them
can be found in our documentation:
https://cert-manager.io/docs/configuration/
For information on how to configure cert-manager to automatically provision
Certificates for Ingress resources, take a look at the `ingress-shim`
documentation:
https://cert-manager.io/docs/usage/ingress/
secret/cloudflare-api-token-secret created
clusterissuer.cert-manager.io/cloudflare-clusterissuer created
NAME: longhorn
LAST DEPLOYED: Sun Jun 29 23:56:13 2025
NAMESPACE: longhorn-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Longhorn is now installed on the cluster!
Please wait a few minutes for other Longhorn components such as CSI deployments, Engine Images, and Instance Managers to be initialized.
Visit our documentation at https://longhorn.io/docs/
Waiting for all the pods to start...
pod/longhorn-driver-deployer-54599c447c-n2p4k condition met
pod/longhorn-manager-f9jlp condition met
pod/longhorn-manager-xxctk condition met
pod/longhorn-ui-6b44759848-jnkpw condition met
pod/longhorn-ui-6b44759848-sltk5 condition met
certificate.cert-manager.io/longhorn-ssl-certificate created
ingress.networking.k8s.io/longhorn-ingress created

manifests/certmanager/issuer-secret.yaml (deleted)

@@ -1,8 +0,0 @@
apiVersion: v1
kind: Secret
metadata:
  name: cloudflare-api-token-secret
  namespace: cert-manager
type: Opaque
stringData:
  api-token: <API TOKEN>

redbutton.sh

@@ -1,4 +1,4 @@
#!/bin/bash
-./init-deploy.sh
-./helm-deploy.sh
+bash ./init-deploy.sh
+bash ./helm-deploy.sh