consolidated more variables and deleted all hard-coded values

README.md

@@ -2,7 +2,7 @@

 This guide describes how to deploy a Kubernetes cluster tailored to my environment using the provided `redbutton.sh` script.

-Just be sure to edit the variables in the `init-deploy.sh` script. I think you can scale this out by adding more hosts and adjusting the for loop but I haven't tried it yet. I needed three machines and that is what I worked with.
+Just be sure to edit the variables in the `redbutton.sh` script. I think you can scale this out by adding more hosts and adjusting the for loop but I haven't tried it yet. I needed three machines and that is what I worked with.

 ## ⚙️ Pre-requisites

@@ -19,10 +19,10 @@ This setup will create a Kubernetes cluster with the following topology and components:
 - **1 Control Plane Node**
 - **2 Worker Nodes**
 - **Calico** as the CNI (Container Network Interface)
-- **MetalLB** as the cloud load balancer for bare metal services (change the configmap to adapt it to your network)
+- **MetalLB** as the cloud load balancer for bare metal services on your desired range
-- **Traefik** as the ingress controller on IP 10.50.3.30 (change it in the values.yaml to adapt it to your network)
+- **Traefik** as the ingress controller on your desired IP
-- **Cert-Manager** to manage the internal certificates. Remember to set the API Key!
+- **Cert-Manager** to manage the internal certificates. Remember to set the API Key!
-- **Longhorn** as the persistent storage provider (change the host to adapt it to your network)
+- **Longhorn** as the persistent storage provider on your desired hostname

 ## 🚀 How to Deploy

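Both deployment scripts begin with `source .env`, and the cert-manager step later reads `$CF_API` from the environment; the `.env` file itself is not part of this commit. A minimal sketch of what it would need to contain (the file path and token value here are placeholders, not the repo's actual contents):

```shell
# Hypothetical .env for this repo — only CF_API is actually referenced by the
# script (it is passed to `kubectl create secret ... --from-literal=api-token="$CF_API"`).
cat > /tmp/example.env <<'EOF'
CF_API="replace-with-your-cloudflare-api-token"
EOF

# The scripts load it with `source .env`; the same mechanism shown here:
source /tmp/example.env
echo "CF_API is ${#CF_API} characters long"
```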
@@ -1,6 +1,8 @@
 #!/bin/bash

 source .env
+LB_IP="10.50.3.35"
+IP_POOL_RANGE="10.50.3.30-10.50.3.40"

 ## Setup all the needed repos
 helm repo add metallb https://metallb.github.io/metallb
@@ -25,7 +27,7 @@ kubectl wait --namespace metallb-system \

 ## Apply the load-balancer IPs range

-kubectl apply -f manifests/metallb/metallb-config.yaml
+envsubst < ./manifests/metallb/metallb-config.yaml | kubectl apply -f -

 ## Let's install traefik!

@@ -33,6 +35,7 @@ helm install \
 traefik traefik/traefik \
   -n traefik \
   -f ./manifests/traefik/values.yaml \
+  --set service.spec.loadBalancerIP="$LB_IP" \
   --create-namespace

 ## Let's install Cert-Manager!

log.log (deleted)

@@ -1,360 +0,0 @@
-PLAY [Install Kubernetes packages] *********************************************
-
-TASK [Gathering Facts] *********************************************************
-ok: [k8s-control-plane]
-ok: [k8s-worker1]
-ok: [k8s-worker2]
-
-TASK [Ensure apt cache is updated] *********************************************
-ok: [k8s-worker1]
-ok: [k8s-worker2]
-ok: [k8s-control-plane]
-
-TASK [Install required packages] ***********************************************
-changed: [k8s-worker1]
-changed: [k8s-control-plane]
-changed: [k8s-worker2]
-
-TASK [Ensure containerd config directory exists] *******************************
-changed: [k8s-worker1]
-changed: [k8s-control-plane]
-changed: [k8s-worker2]
-
-TASK [Generate default containerd config if missing] ***************************
-ok: [k8s-worker2]
-ok: [k8s-control-plane]
-ok: [k8s-worker1]
-
-TASK [Write default containerd config] *****************************************
-changed: [k8s-worker1]
-changed: [k8s-worker2]
-changed: [k8s-control-plane]
-
-TASK [Set SystemdCgroup = true in containerd config] ***************************
-changed: [k8s-control-plane]
-changed: [k8s-worker2]
-changed: [k8s-worker1]
-
-TASK [Restart containerd] ******************************************************
-changed: [k8s-worker2]
-changed: [k8s-control-plane]
-changed: [k8s-worker1]
-
-TASK [Enable IPv4 forwarding at runtime] ***************************************
-changed: [k8s-control-plane]
-changed: [k8s-worker1]
-changed: [k8s-worker2]
-
-TASK [Create /etc/apt/keyrings directory] **************************************
-ok: [k8s-control-plane]
-ok: [k8s-worker2]
-ok: [k8s-worker1]
-
-TASK [Download and save the Kubernetes APT key] ********************************
-changed: [k8s-worker1]
-changed: [k8s-control-plane]
-changed: [k8s-worker2]
-
-TASK [Add Kubernetes APT repository] *******************************************
-changed: [k8s-worker1]
-changed: [k8s-worker2]
-changed: [k8s-control-plane]
-
-TASK [Update apt cache after adding Kubernetes repo] ***************************
-changed: [k8s-worker2]
-changed: [k8s-control-plane]
-changed: [k8s-worker1]
-
-TASK [Install Kubernetes components] *******************************************
-changed: [k8s-worker1]
-changed: [k8s-control-plane]
-changed: [k8s-worker2]
-
-TASK [Hold Kubernetes packages] ************************************************
-changed: [k8s-worker1]
-changed: [k8s-control-plane]
-changed: [k8s-worker2]
-
-TASK [Enable and start kubelet] ************************************************
-changed: [k8s-worker2]
-changed: [k8s-control-plane]
-changed: [k8s-worker1]
-
-PLAY [Setup Longhorn Requirements] *********************************************
-
-TASK [Gathering Facts] *********************************************************
-ok: [k8s-worker2]
-ok: [k8s-control-plane]
-ok: [k8s-worker1]
-
-TASK [Install required packages] ***********************************************
-changed: [k8s-worker1]
-changed: [k8s-worker2]
-changed: [k8s-control-plane]
-
-TASK [Ensure iscsi_tcp kernel module is loaded] ********************************
-changed: [k8s-worker2]
-changed: [k8s-control-plane]
-changed: [k8s-worker1]
-
-TASK [Ensure dm_crypt kernel module is loaded] *********************************
-changed: [k8s-worker2]
-changed: [k8s-worker1]
-changed: [k8s-control-plane]
-
-TASK [Ensure iscsi_tcp module loads on boot] ***********************************
-changed: [k8s-worker1]
-changed: [k8s-control-plane]
-changed: [k8s-worker2]
-
-TASK [Ensure dm_crypt module loads on boot] ************************************
-changed: [k8s-worker2]
-changed: [k8s-worker1]
-changed: [k8s-control-plane]
-
-TASK [Disable and stop multipathd service] *************************************
-changed: [k8s-control-plane]
-changed: [k8s-worker1]
-changed: [k8s-worker2]
-
-PLAY RECAP *********************************************************************
-k8s-control-plane : ok=23 changed=18 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
-k8s-worker1 : ok=23 changed=18 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
-k8s-worker2 : ok=23 changed=18 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
-
-[init] Using Kubernetes version: v1.33.2
-[preflight] Running pre-flight checks
-[preflight] Pulling images required for setting up a Kubernetes cluster
-[preflight] This might take a minute or two, depending on the speed of your internet connection
-[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
-[certs] Using certificateDir folder "/etc/kubernetes/pki"
-[certs] Generating "ca" certificate and key
-[certs] Generating "apiserver" certificate and key
-[certs] apiserver serving cert is signed for DNS names [k8s-control-plane kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.50.3.21]
-[certs] Generating "apiserver-kubelet-client" certificate and key
-[certs] Generating "front-proxy-ca" certificate and key
-[certs] Generating "front-proxy-client" certificate and key
-[certs] Generating "etcd/ca" certificate and key
-[certs] Generating "etcd/server" certificate and key
-[certs] etcd/server serving cert is signed for DNS names [k8s-control-plane localhost] and IPs [10.50.3.21 127.0.0.1 ::1]
-[certs] Generating "etcd/peer" certificate and key
-[certs] etcd/peer serving cert is signed for DNS names [k8s-control-plane localhost] and IPs [10.50.3.21 127.0.0.1 ::1]
-[certs] Generating "etcd/healthcheck-client" certificate and key
-[certs] Generating "apiserver-etcd-client" certificate and key
-[certs] Generating "sa" key and public key
-[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
-[kubeconfig] Writing "admin.conf" kubeconfig file
-[kubeconfig] Writing "super-admin.conf" kubeconfig file
-[kubeconfig] Writing "kubelet.conf" kubeconfig file
-[kubeconfig] Writing "controller-manager.conf" kubeconfig file
-[kubeconfig] Writing "scheduler.conf" kubeconfig file
-[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
-[control-plane] Using manifest folder "/etc/kubernetes/manifests"
-[control-plane] Creating static Pod manifest for "kube-apiserver"
-[control-plane] Creating static Pod manifest for "kube-controller-manager"
-[control-plane] Creating static Pod manifest for "kube-scheduler"
-[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
-[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
-[kubelet-start] Starting the kubelet
-[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
-[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
-[kubelet-check] The kubelet is healthy after 501.576719ms
-[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
-[control-plane-check] Checking kube-apiserver at https://10.50.3.21:6443/livez
-[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
-[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
-[control-plane-check] kube-controller-manager is healthy after 2.567736516s
-[control-plane-check] kube-scheduler is healthy after 3.83121548s
-[control-plane-check] kube-apiserver is healthy after 6.002458047s
-[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
-[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
-[upload-certs] Skipping phase. Please see --upload-certs
-[mark-control-plane] Marking the node k8s-control-plane as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
-[mark-control-plane] Marking the node k8s-control-plane as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
-[bootstrap-token] Using token: o95x90.u0mpm7siuyqgi3jz
-[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
-[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
-[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
-[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
-[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
-[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
-[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
-[addons] Applied essential addon: CoreDNS
-[addons] Applied essential addon: kube-proxy
-
-Your Kubernetes control-plane has initialized successfully!
-
-To start using your cluster, you need to run the following as a regular user:
-
-mkdir -p $HOME/.kube
-sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
-sudo chown $(id -u):$(id -g) $HOME/.kube/config
-
-Alternatively, if you are the root user, you can run:
-
-export KUBECONFIG=/etc/kubernetes/admin.conf
-
-You should now deploy a pod network to the cluster.
-Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
-https://kubernetes.io/docs/concepts/cluster-administration/addons/
-
-Then you can join any number of worker nodes by running the following on each as root:
-
-kubeadm join 10.50.3.21:6443 --token o95x90.u0mpm7siuyqgi3jz \
---discovery-token-ca-cert-hash sha256:f9095c75e5f29b548dec3d40304d02a7d4789189a5065f64df1661fb75ff8cdd
-poddisruptionbudget.policy/calico-kube-controllers created
-serviceaccount/calico-kube-controllers created
-serviceaccount/calico-node created
-serviceaccount/calico-cni-plugin created
-configmap/calico-config created
-customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
-customresourcedefinition.apiextensions.k8s.io/bgpfilters.crd.projectcalico.org created
-customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
-customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
-customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
-customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
-customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
-customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
-customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
-customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
-customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
-customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
-customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
-customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
-customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
-customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
-customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
-customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
-customresourcedefinition.apiextensions.k8s.io/stagedglobalnetworkpolicies.crd.projectcalico.org created
-customresourcedefinition.apiextensions.k8s.io/stagedkubernetesnetworkpolicies.crd.projectcalico.org created
-customresourcedefinition.apiextensions.k8s.io/stagednetworkpolicies.crd.projectcalico.org created
-customresourcedefinition.apiextensions.k8s.io/tiers.crd.projectcalico.org created
-customresourcedefinition.apiextensions.k8s.io/adminnetworkpolicies.policy.networking.k8s.io created
-customresourcedefinition.apiextensions.k8s.io/baselineadminnetworkpolicies.policy.networking.k8s.io created
-clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
-clusterrole.rbac.authorization.k8s.io/calico-node created
-clusterrole.rbac.authorization.k8s.io/calico-cni-plugin created
-clusterrole.rbac.authorization.k8s.io/calico-tier-getter created
-clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
-clusterrolebinding.rbac.authorization.k8s.io/calico-node created
-clusterrolebinding.rbac.authorization.k8s.io/calico-cni-plugin created
-clusterrolebinding.rbac.authorization.k8s.io/calico-tier-getter created
-daemonset.apps/calico-node created
-deployment.apps/calico-kube-controllers created
-Joining k8s-worker1
-[preflight] Running pre-flight checks
-[preflight] Reading configuration from the "kubeadm-config" ConfigMap in namespace "kube-system"...
-[preflight] Use 'kubeadm init phase upload-config --config your-config-file' to re-upload it.
-[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
-[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
-[kubelet-start] Starting the kubelet
-[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
-[kubelet-check] The kubelet is healthy after 1.001473987s
-[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap
-
-This node has joined the cluster:
-* Certificate signing request was sent to apiserver and a response was received.
-* The Kubelet was informed of the new secure connection details.
-
-Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
-
-Joining k8s-worker2
-[preflight] Running pre-flight checks
-[preflight] Reading configuration from the "kubeadm-config" ConfigMap in namespace "kube-system"...
-[preflight] Use 'kubeadm init phase upload-config --config your-config-file' to re-upload it.
-[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
-[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
-[kubelet-start] Starting the kubelet
-[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
-[kubelet-check] The kubelet is healthy after 500.681999ms
-[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap
-
-This node has joined the cluster:
-* Certificate signing request was sent to apiserver and a response was received.
-* The Kubelet was informed of the new secure connection details.
-
-Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
-
-"metallb" has been added to your repositories
-"traefik" has been added to your repositories
-"jetstack" has been added to your repositories
-"longhorn" has been added to your repositories
-Hang tight while we grab the latest from your chart repositories...
-...Successfully got an update from the "metallb" chart repository
-...Successfully got an update from the "longhorn" chart repository
-...Successfully got an update from the "traefik" chart repository
-...Successfully got an update from the "jetstack" chart repository
-Update Complete. ⎈Happy Helming!⎈
-NAME: metallb
-LAST DEPLOYED: Sun Jun 29 23:53:49 2025
-NAMESPACE: metallb-system
-STATUS: deployed
-REVISION: 1
-TEST SUITE: None
-NOTES:
-MetalLB is now running in the cluster.
-
-Now you can configure it via its CRs. Please refer to the metallb official docs
-on how to use the CRs.
-Waiting for all the pods to start...
-pod/metallb-controller-5754956df6-6cjbg condition met
-ipaddresspool.metallb.io/ip-pool created
-l2advertisement.metallb.io/l2 created
-NAME: traefik
-LAST DEPLOYED: Sun Jun 29 23:55:52 2025
-NAMESPACE: traefik
-STATUS: deployed
-REVISION: 1
-TEST SUITE: None
-NOTES:
-traefik with docker.io/traefik:v3.4.1 has been deployed successfully on traefik namespace !
-NAME: cert-manager
-LAST DEPLOYED: Sun Jun 29 23:55:53 2025
-NAMESPACE: cert-manager
-STATUS: deployed
-REVISION: 1
-TEST SUITE: None
-NOTES:
-⚠️ WARNING: New default private key rotation policy for Certificate resources.
-The default private key rotation policy for Certificate resources was
-changed to `Always` in cert-manager >= v1.18.0.
-Learn more in the [1.18 release notes](https://cert-manager.io/docs/releases/release-notes/release-notes-1.18).
-
-cert-manager v1.18.1 has been deployed successfully!
-
-In order to begin issuing certificates, you will need to set up a ClusterIssuer
-or Issuer resource (for example, by creating a 'letsencrypt-staging' issuer).
-
-More information on the different types of issuers and how to configure them
-can be found in our documentation:
-
-https://cert-manager.io/docs/configuration/
-
-For information on how to configure cert-manager to automatically provision
-Certificates for Ingress resources, take a look at the `ingress-shim`
-documentation:
-
-https://cert-manager.io/docs/usage/ingress/
-secret/cloudflare-api-token-secret created
-clusterissuer.cert-manager.io/cloudflare-clusterissuer created
-NAME: longhorn
-LAST DEPLOYED: Sun Jun 29 23:56:13 2025
-NAMESPACE: longhorn-system
-STATUS: deployed
-REVISION: 1
-TEST SUITE: None
-NOTES:
-Longhorn is now installed on the cluster!
-
-Please wait a few minutes for other Longhorn components such as CSI deployments, Engine Images, and Instance Managers to be initialized.
-
-Visit our documentation at https://longhorn.io/docs/
-Waiting for all the pods to start...
-pod/longhorn-driver-deployer-54599c447c-n2p4k condition met
-pod/longhorn-manager-f9jlp condition met
-pod/longhorn-manager-xxctk condition met
-pod/longhorn-ui-6b44759848-jnkpw condition met
-pod/longhorn-ui-6b44759848-sltk5 condition met
-certificate.cert-manager.io/longhorn-ssl-certificate created
-ingress.networking.k8s.io/longhorn-ingress created

@@ -4,7 +4,7 @@ metadata:
   name: cloudflare-clusterissuer
 spec:
   acme:
-    email: kevin@nixit.it
+    email: ${ACME_MAIL}
     server: https://acme-v02.api.letsencrypt.org/directory
     privateKeySecretRef:
       name: cloudflare-clusterissuer-key

@@ -10,4 +10,4 @@ spec:
     name: cloudflare-clusterissuer
     kind: ClusterIssuer
   dnsNames:
-    - longhorn.nixit.it
+    - ${LONGHORN_DNS_NAME}

@@ -8,10 +8,10 @@ metadata:
 spec:
   tls:
     - hosts:
-        - longhorn.nixit.it
+        - ${LONGHORN_DNS_NAME}
       secretName: longhorn-ssl-certificate
   rules:
-    - host: longhorn.nixit.it
+    - host: ${LONGHORN_DNS_NAME}
       http:
         paths:
           - path: /

@@ -5,7 +5,7 @@ metadata:
   namespace: metallb-system
 spec:
   addresses:
-    - 10.50.3.30-10.50.3.40
+    - ${IP_POOL_RANGE}

 ---
 apiVersion: metallb.io/v1beta1

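The hunk above shows only the templated address list. For context, a reconstructed sketch of the whole `metallb-config.yaml`: the resource names `ip-pool` and `l2` come from the apply log (`ipaddresspool.metallb.io/ip-pool created`, `l2advertisement.metallb.io/l2 created`), while the remaining fields are assumptions. In the real template `${IP_POOL_RANGE}` stays literal and `envsubst` expands it at apply time; here an unquoted heredoc lets the shell expand it directly so the sketch is self-contained:

```shell
# Reconstructed sketch — ip-pool/l2 names taken from the apply log, other
# fields assumed. Shell expands ${IP_POOL_RANGE} inside the unquoted heredoc.
IP_POOL_RANGE="10.50.3.30-10.50.3.40"

cat > /tmp/metallb-config.rendered.yaml <<EOF
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: ip-pool
  namespace: metallb-system
spec:
  addresses:
    - ${IP_POOL_RANGE}
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: l2
  namespace: metallb-system
EOF

cat /tmp/metallb-config.rendered.yaml
```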
@@ -1,8 +1,3 @@
-service:
-  spec:
-    externalTrafficPolicy: Local
-    loadBalancerIP: 10.50.3.30
-
 ports:
   web:
     redirections:

redbutton.sh

@@ -1,4 +1,127 @@
 #!/bin/bash

-bash ./init-deploy.sh
-bash ./helm-deploy.sh
+source .env
+KUBE_USER="ubuntu"
+PRIVATE_KEY_PATH="$HOME/.ssh/id_ed25519"
+KUBE_CP="k8s-control-plane"
+KUBE_W1="k8s-worker1"
+KUBE_W2="k8s-worker2"
+ANSIBLE_DIR="ansible"
+POD_NET_CIDR="192.168.0.0/16"
+CNI_VER="v3.30.2"
+API_ADDR=$(getent hosts "$KUBE_CP" | awk '{ print $1 }')
+LOAD_BALANCER_IP="10.50.3.30"
+
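`API_ADDR` is resolved from the control-plane hostname rather than hard-coded: `getent hosts` prints `ADDRESS NAME [ALIASES...]`, and `awk '{ print $1 }'` keeps only the address field. The extraction in isolation, simulated with `echo` so it runs without the real host entry (the sample address mirrors the 10.50.3.21 seen in the kubeadm log):

```shell
# Simulated output of `getent hosts k8s-control-plane`:
hosts_line="10.50.3.21      k8s-control-plane"

# Same first-field extraction the script performs on the real getent output.
API_ADDR=$(echo "$hosts_line" | awk '{ print $1 }')
echo "$API_ADDR"   # prints 10.50.3.21
```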
+## Variables for envsubst
+
+export ACME_MAIL="kevin@nixit.it"
+export IP_POOL_RANGE="10.50.3.30-10.50.3.40"
+export LONGHORN_DNS_NAME="longhorn.nixit.it"
+
+## Prepare all of the nodes with k8s using the ansible playbooks I prepared
+
+ansible-playbook ./"$ANSIBLE_DIR"/master-k8s.yaml \
+  -i "$KUBE_CP,$KUBE_W1,$KUBE_W2," \
+  -e "k8s-control-plane=["$KUBE_CP"] k8s_nodes=["$KUBE_W1","$KUBE_W2"]" \
+  -u "$KUBE_USER" \
+  --private-key "$PRIVATE_KEY_PATH"
+
+## Bootstrap the cluster
+
+ssh "$KUBE_USER@$KUBE_CP" \
+  "sudo kubeadm init \
+  --pod-network-cidr=$POD_NET_CIDR \
+  --apiserver-advertise-address=$API_ADDR \
+  --cri-socket unix:///run/containerd/containerd.sock"
+
+ssh "$KUBE_USER@$KUBE_CP" \
+  "sudo cp /etc/kubernetes/admin.conf /tmp/config && sudo chown $KUBE_USER:$KUBE_USER /tmp/config"
+
+## Fetch the configuration from the freshly installed cluster. BEWARE THAT ANY EXISTING CONFIG WILL BE OVERWRITTEN
+
+scp "$KUBE_USER@$KUBE_CP":/tmp/config "$HOME"/.kube/config
+
+## Install the CNI
+
+kubectl apply -f \
+  "https://raw.githubusercontent.com/projectcalico/calico/$CNI_VER/manifests/calico.yaml"
+
+## Now join the workers to the cluster
+
+JOIN_TOKEN=$(ssh "$KUBE_USER@$KUBE_CP" \
+  "sudo kubeadm token create --print-join-command")
+
+for NODE in "$KUBE_W1" "$KUBE_W2"; do
+  echo "Joining $NODE"
+  ssh "$KUBE_USER@$NODE" "sudo $JOIN_TOKEN"
+done
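The join step captures one `kubeadm token create --print-join-command` from the control plane and replays it over ssh on each worker. A dry-run sketch of the same loop with `ssh` stubbed out as a shell function, so the logic can be exercised without any hosts (the join command here is a placeholder, not a real token):

```shell
#!/bin/bash
KUBE_USER="ubuntu"
KUBE_W1="k8s-worker1"
KUBE_W2="k8s-worker2"

# Stub: echo what would run instead of opening a real connection.
ssh() { echo "[dry-run on $1]: $2"; }

# Placeholder for the captured `--print-join-command` output.
JOIN_TOKEN="kubeadm join 10.50.3.21:6443 --token <redacted>"

for NODE in "$KUBE_W1" "$KUBE_W2"; do
  echo "Joining $NODE"
  ssh "$KUBE_USER@$NODE" "sudo $JOIN_TOKEN"
done
```

Because `JOIN_TOKEN` holds a full command line, prefixing it with `sudo` inside the remote quote reconstructs `sudo kubeadm join ...` on each worker.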
+
+### NOW APPLYING HELMS ###
+
+## Setup all the needed repos
+helm repo add metallb https://metallb.github.io/metallb
+helm repo add traefik https://traefik.github.io/charts
+helm repo add jetstack https://charts.jetstack.io
+helm repo add longhorn https://charts.longhorn.io
+helm repo update
+
+## Let's install metallb!
+
+helm install \
+  metallb metallb/metallb \
+  -n metallb-system \
+  --create-namespace
+
+echo "Waiting for all the pods to start..."
+
+kubectl wait --namespace metallb-system \
+  --for=condition=Ready pod \
+  --all \
+  --timeout=200s
+
+## Apply the load-balancer IPs range
+
+envsubst < ./manifests/metallb/metallb-config.yaml | kubectl apply -f -
+
+## Let's install traefik!
+
+helm install \
+  traefik traefik/traefik \
+  -n traefik \
+  -f ./manifests/traefik/values.yaml \
+  --set service.spec.loadBalancerIP="$LOAD_BALANCER_IP" \
+  --create-namespace
+
+## Let's install Cert-Manager!
+
+helm install \
+  cert-manager jetstack/cert-manager \
+  -n cert-manager \
+  -f ./manifests/certmanager/values.yaml \
+  --create-namespace
+
+## Set up the cloudflare API token secret
+
+kubectl create secret generic cloudflare-api-token-secret \
+  --from-literal=api-token="$CF_API" \
+  -n=cert-manager
+
+## Apply the cloudflare Issuer
+
+envsubst < ./manifests/certmanager/issuer.yaml | kubectl apply -f -
+
+## Let's install longhorn!
+
+helm install longhorn longhorn/longhorn -n longhorn-system --create-namespace
+
+echo "Waiting for all the pods to start..."
+
+kubectl wait --namespace longhorn-system \
+  --for=condition=Ready pod \
+  --all \
+  --timeout=120s
+
+## Apply ingress rule and certificate
+
+envsubst < ./manifests/longhorn/certificate.yaml | kubectl apply -f -
+envsubst < ./manifests/longhorn/ingress.yaml | kubectl apply -f -