Switch Debian 11 to the Aliyun APT mirror
https://developer.aliyun.com/mirror/debian?spm=a2c6h.13651102.0.0.3e221b11WzD4VR
cat > /etc/apt/sources.list <<EOF
deb http://mirrors.aliyun.com/debian/ bullseye main non-free contrib
deb-src http://mirrors.aliyun.com/debian/ bullseye main non-free contrib
deb http://mirrors.aliyun.com/debian-security/ bullseye-security main
deb-src http://mirrors.aliyun.com/debian-security/ bullseye-security main
deb http://mirrors.aliyun.com/debian/ bullseye-updates main non-free contrib
deb-src http://mirrors.aliyun.com/debian/ bullseye-updates main non-free contrib
deb http://mirrors.aliyun.com/debian/ bullseye-backports main non-free contrib
deb-src http://mirrors.aliyun.com/debian/ bullseye-backports main non-free contrib
EOF
Install the Kubernetes components: kubeadm, kubelet, kubectl
https://developer.aliyun.com/mirror/kubernetes?spm=a2c6h.13651102.0.0.58931b11Kc1cDR
apt-get update
apt-get install -y gnupg2
apt-get install -y apt-transport-https ca-certificates curl
curl -s https://gitee.com/thepoy/k8s/raw/master/apt-key.gpg | apt-key add -
cat <<EOF | tee /etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.tuna.tsinghua.edu.cn/kubernetes/apt kubernetes-xenial main
EOF
Install the latest version, or pin a specific version
apt-get update
apt-cache search kubeadm
apt-cache madison kubeadm
apt-cache madison kubeadm | grep 1.22
apt-get install kubeadm=1.22.3-00
apt-get install kubelet=1.22.3-00
apt-get install kubectl=1.22.3-00
Verify the installed versions
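A quick way to confirm what actually got installed, and optionally hold the packages so a later apt-get upgrade does not move them off the pinned version:

kubeadm version
kubelet --version
kubectl version --client
# Optional: keep apt from upgrading the pinned versions
apt-mark hold kubeadm kubelet kubectl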
二、Install containerd (latest, or pin a specific version)
curl -fsSL https://download.docker.com/linux/debian/gpg | apt-key add -
echo "deb [arch=amd64] https://download.docker.com/linux/debian bullseye stable" | tee /etc/apt/sources.list.d/docker.list
apt-get update
apt-cache search containerd
apt-cache madison containerd
apt-cache madison containerd | grep 1.4.5
apt-get install containerd=1.4.5~ds1-2
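A small sanity check (not in the original notes): confirm the runtime version and make sure the service is enabled on boot.

containerd --version
systemctl enable --now containerd
systemctl status containerd --no-pager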
三、Other cluster initialization settings
apt-get install -y ipset ipvsadm
https://www.jianshu.com/p/5cfa19eecd5d
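Because kube-proxy is switched to IPVS mode in the kubeadm config further down, the IPVS kernel modules should also be loaded. A sketch of the usual setup; the file name ipvs.conf is my choice, and on older kernels nf_conntrack_ipv4 is used instead of nf_conntrack:

cat > /etc/modules-load.d/ipvs.conf <<EOF
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
EOF
modprobe ip_vs
modprobe ip_vs_rr
modprobe ip_vs_wrr
modprobe ip_vs_sh
modprobe nf_conntrack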
cat > /etc/modules-load.d/containerd.conf <<EOF
overlay
br_netfilter
EOF
modprobe overlay
modprobe br_netfilter
https://www.cnblogs.com/wayne91/p/15438275.html
# Setup required sysctl params, these persist across reboots.
cat > /etc/sysctl.d/99-kubernetes-cri.conf <<EOF
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sysctl --system
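A quick check that the modules are loaded and the sysctl values took effect (values should all read 1):

lsmod | grep -e overlay -e br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward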
# Configure containerd
mkdir -p /etc/containerd
containerd config default > /etc/containerd/config.toml
sed -i "s#k8s.gcr.io#registry.cn-hangzhou.aliyuncs.com/google_containers#g" /etc/containerd/config.toml sed -i '/containerd.runtimes.runc.options/a\ \ \ \ \ \ \ \ \ \ \ \ SystemdCgroup = true' /etc/containerd/config.toml sed -i "s#https://registry-1.docker.io#https://bncakk5o.mirror.aliyuncs.com#g" /etc/containerd/config.toml
# Restart containerd
systemctl restart containerd
# config kubelet cgroup
cat > /etc/default/kubelet <<EOF
KUBELET_EXTRA_ARGS=--cgroup-driver=systemd
EOF
# config CRI
cat > /etc/crictl.yaml <<EOF
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
EOF
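With the CRI endpoint configured, crictl (pulled in as a dependency of kubeadm via cri-tools; install cri-tools separately if it is missing) can confirm that the runtime is reachable over the socket:

crictl version
crictl info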
四、Initialize the cluster
cat > /root/kubeadm-init.yaml <<EOF
# kubeadm config print init-defaults
# https://www.guojingyi.cn/912.html
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.1.221
  bindPort: 6443
nodeRegistration:
  # criSocket: /var/run/dockershim.sock
  criSocket: /run/containerd/containerd.sock
  name: master221
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  # extraArgs:
  #   authorization-mode: Node,RBAC
  certSANs:
  - "192.168.1.221"
  - "127.0.0.1"
  - localhost
  - master221
  - vpn.idx.ee
  - kubernetes
  - kubernetes.default
  - kubernetes.default.svc
  - kubernetes.default.svc.cluster
  - kubernetes.default.svc.cluster.local
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers
controlPlaneEndpoint: "192.168.1.221:6443"
kind: ClusterConfiguration
kubernetesVersion: v1.22.3
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  podSubnet: 10.244.0.0/16
scheduler: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
ipvs:
  strictARP: true
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
EOF
cat >> /etc/hosts <<EOF
192.168.1.221 master221
192.168.1.222 worker222
192.168.1.223 worker223
192.168.1.224 worker224
192.168.1.225 worker225
EOF
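Optionally, the control-plane images can be pre-pulled with the same config file before running kubeadm init, so registry problems show up early and the init itself is faster:

kubeadm config images list --config /root/kubeadm-init.yaml
kubeadm config images pull --config /root/kubeadm-init.yaml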
kubeadm init --config=kubeadm-init.yaml --upload-certs
[init] Using Kubernetes version: v1.22.3
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster kubernetes.default.svc.cluster.local localhost master221 vpn.idx.ee] and IPs [10.96.0.1 192.168.1.221 127.0.0.1]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master221] and IPs [192.168.1.221 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master221] and IPs [192.168.1.221 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 28.505725 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.22" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
befa2dfb8d0e897438fb75c5fb3cf8176f0404b64dd797655c223b2489c124f4
[mark-control-plane] Marking the node master221 as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node master221 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join 192.168.1.221:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:a117ae16bc34c34c413f2fa12aa1fd9875ca0e33d17260c423601537c9350c08 \
    --control-plane --certificate-key befa2dfb8d0e897438fb75c5fb3cf8176f0404b64dd797655c223b2489c124f4

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.1.221:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:a117ae16bc34c34c413f2fa12aa1fd9875ca0e33d17260c423601537c9350c08
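After init, the kubeconfig is set up as printed above, and each worker runs the printed join command; the node list below suggests that is what was done here. Repeated from the init output for convenience:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# On each worker node (worker222 .. worker225):
kubeadm join 192.168.1.221:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:a117ae16bc34c34c413f2fa12aa1fd9875ca0e33d17260c423601537c9350c08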
root@master221:~# kubectl get nodes
NAME        STATUS     ROLES                  AGE   VERSION
master221   NotReady   control-plane,master   2m    v1.22.3
worker222   NotReady   <none>                 13s   v1.22.3
worker223   NotReady   <none>                 10s   v1.22.3
worker224   NotReady   <none>                 7s    v1.22.3
worker225   NotReady   <none>                 7s    v1.22.3
五、Install the network plugin
helm repo add cilium https://helm.cilium.io/
helm install cilium cilium/cilium --version 1.10.5 --namespace kube-system
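Nodes stay NotReady until the CNI is up. To watch the Cilium pods start and the nodes flip to Ready (the k8s-app=cilium label is what the chart applies to the agent pods, as far as I recall):

kubectl -n kube-system get pods -l k8s-app=cilium
kubectl get nodes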
六、Install metrics-server
https://github.com/kubernetes-sigs/metrics-server/tree/master/charts/metrics-server
helm repo add metrics-server https://kubernetes-sigs.github.io/metrics-server/
helm upgrade --install metrics-server metrics-server/metrics-server
Mirrored metrics-server image on Aliyun, for environments that cannot pull the default upstream image:
registry.cn-shenzhen.aliyuncs.com/stonek8s/metrics-server:v0.5.1
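A sketch of swapping that mirror in via chart values, assuming the chart exposes the usual image.repository/image.tag values (the kube-system namespace is my choice; by default helm installs into the current context's namespace):

helm upgrade --install metrics-server metrics-server/metrics-server \
  --namespace kube-system \
  --set image.repository=registry.cn-shenzhen.aliyuncs.com/stonek8s/metrics-server \
  --set image.tag=v0.5.1

# Once the pod is running, metrics should start flowing:
kubectl top nodes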