Kubernetes: initializing the cluster, master + node (Part 4)


# Getting Ready to Start the Installation


# Set the hostname (repeat on each host in turn)

hostnamectl --static set-hostname master01
hostnamectl --static set-hostname master02
hostnamectl --static set-hostname master03
hostnamectl --static set-hostname worker01
hostnamectl --static set-hostname worker02
hostnamectl --static set-hostname worker03
hostnamectl --static set-hostname worker04
hostnamectl --static set-hostname worker05

# Add /etc/hosts entries

cat >> /etc/hosts <<EOF
192.168.1.171 master01
192.168.1.172 master02
192.168.1.173 master03
192.168.1.181 worker01
192.168.1.182 worker02
192.168.1.183 worker03
192.168.1.184 worker04
192.168.1.185 worker05
EOF
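A quick sanity check after setting the hostnames and hosts entries; this is a minimal sketch assuming the hostnames and IPs above, run from any one of the hosts:

```bash
# Confirm the static hostname took effect on this host
hostnamectl status | grep "Static hostname"

# Confirm every node resolves via /etc/hosts and answers ping
for h in master01 master02 master03 worker01 worker02 worker03 worker04 worker05; do
    getent hosts "$h"
    ping -c 1 -W 1 "$h" >/dev/null && echo "$h reachable" || echo "$h NOT reachable"
done
```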


# Prepare to initialize the cluster (this only needs to be run on one master; here it is run on 192.168.1.171)

Generate the default cluster initialization configuration file:

kubeadm config print init-defaults > kubeadm-init-config.yaml

Modify it to the following content:

cat kubeadm-init-config.yaml
# kubeadm config print init-defaults
# https://www.guojingyi.cn/912.html
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.1.171
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: master01
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  # extraArgs:
  #   authorization-mode: Node,RBAC
  certSANs:
  - "192.168.1.170"
  - "192.168.1.171"
  - "192.168.1.172"
  - "192.168.1.173"
  - "127.0.0.1"
  - localhost
  - master01
  - master02
  - master03
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers
controlPlaneEndpoint: "192.168.1.171:6443"
kind: ClusterConfiguration
kubernetesVersion: v1.16.15
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  podSubnet: 10.244.0.0/16
scheduler: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
featureGates:
  SupportIPVSProxyMode: true
mode: ipvs
ipvs:
  strictARP: true

Note: 192.168.1.170 is the VIP (a load balancer IP or an address serving a similar role). Strictly speaking, the 192.168.1.171 used in this file should be replaced with 192.168.1.170, with a load balancer in front of the masters for high availability.
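Optionally, before running kubeadm init, you can pre-pull the control-plane images referenced by this configuration and preview the init; these are standard kubeadm subcommands and flags, shown here against the kubeadm-init-config.yaml generated above:

```bash
# List the images this configuration will use (pulled from registry.aliyuncs.com/google_containers)
kubeadm config images list --config kubeadm-init-config.yaml

# Pre-pull them so the init step does not stall on downloads
kubeadm config images pull --config kubeadm-init-config.yaml

# Preview what kubeadm would do without changing the host
kubeadm init --config=kubeadm-init-config.yaml --dry-run
```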


Initialize the cluster:

kubeadm init --config=kubeadm-init-config.yaml --upload-certs  


# Alternative without a config file:
# kubeadm init --pod-network-cidr=10.244.0.0/16



Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
  
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of the control-plane node running the following command on each as root:
  
  kubeadm join 192.168.1.171:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:2a1cd7e5b01f1cc95ddf9823d3f288f3fe4e6ff2ff9891a6ae302a1e863ac90d \
    --control-plane --certificate-key 8e032e22ce1c539b7c6fd838f21fe59bf677d3089f43f88525f07cf9adbfae44
    
Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use 
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.
Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.1.171:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:2a1cd7e5b01f1cc95ddf9823d3f288f3fe4e6ff2ff9891a6ae302a1e863ac90d



# If other nodes report certificate errors when joining the cluster, copy the certificates over with the commands below (master-117 here is an example target host; substitute the actual node name), then run the join command again

scp -rp /etc/kubernetes/pki/ca.* master-117:/etc/kubernetes/pki
scp -rp /etc/kubernetes/pki/sa.* master-117:/etc/kubernetes/pki
scp -rp /etc/kubernetes/pki/front-proxy-ca.* master-117:/etc/kubernetes/pki
scp -rp /etc/kubernetes/pki/etcd/ca.* master-117:/etc/kubernetes/pki/etcd
scp -rp /etc/kubernetes/admin.conf master-117:/etc/kubernetes
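The same copy can be scripted for the remaining control-plane nodes; a minimal sketch, assuming passwordless root SSH and that the targets are reachable as master02 and master03 (substitute your actual hostnames):

```bash
# Sketch: distribute the shared CA/SA certificates to the other masters
for host in master02 master03; do
    ssh "$host" "mkdir -p /etc/kubernetes/pki/etcd"
    scp -rp /etc/kubernetes/pki/ca.*             "$host":/etc/kubernetes/pki/
    scp -rp /etc/kubernetes/pki/sa.*             "$host":/etc/kubernetes/pki/
    scp -rp /etc/kubernetes/pki/front-proxy-ca.* "$host":/etc/kubernetes/pki/
    scp -rp /etc/kubernetes/pki/etcd/ca.*        "$host":/etc/kubernetes/pki/etcd/
    scp -rp /etc/kubernetes/admin.conf           "$host":/etc/kubernetes/
done
```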


Join the other two master nodes to the cluster (192.168.1.172, 192.168.1.173)

The command for a master to join the cluster is as follows:

kubeadm join 192.168.1.171:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:2a1cd7e5b01f1cc95ddf9823d3f288f3fe4e6ff2ff9891a6ae302a1e863ac90d \
    --control-plane --certificate-key 8e032e22ce1c539b7c6fd838f21fe59bf677d3089f43f88525f07cf9adbfae44


Join all worker nodes to the cluster (192.168.1.181, 192.168.1.182, 192.168.1.183, 192.168.1.184, 192.168.1.185)

The command for a worker node to join the cluster is as follows:

kubeadm join 192.168.1.171:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:2a1cd7e5b01f1cc95ddf9823d3f288f3fe4e6ff2ff9891a6ae302a1e863ac90d


Note: the token above is valid for 24 hours by default. The current token and hash can be retrieved with:

kubeadm token list

If the token has expired, run the following commands to generate a new token and recompute the CA certificate hash:

kubeadm token create
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
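A convenient shortcut is to have kubeadm print a complete worker join command with a freshly created token:

```bash
# Prints a ready-to-run "kubeadm join <endpoint> --token ... --discovery-token-ca-cert-hash ..." line
kubeadm token create --print-join-command
```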



Initialization is complete; copy the admin config file to the default location:

[root@master01 ~]# mkdir -p $HOME/.kube
[root@master01 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master01 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config

# Set the KUBECONFIG environment variable
cat << EOF >> ~/.bashrc
export KUBECONFIG=$HOME/.kube/config
EOF

echo "source <(kubectl completion bash)" >> ~/.bashrc
source ~/.bashrc
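At this point kubectl should be able to talk to the cluster; a quick check from master01 (nodes will report NotReady until the network add-on below is installed):

```bash
kubectl get nodes -o wide
kubectl get pods -n kube-system
```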

Appendix: the initialization process roughly goes through the following steps:
    1. [kubelet-start] generates the kubelet configuration file "/var/lib/kubelet/config.yaml"
    2. [certificates] generates the various certificates
    3. [kubeconfig] generates the related kubeconfig files
    4. [bootstraptoken] generates the bootstrap token; record it, as it is needed later when adding nodes to the cluster with kubeadm join

### Installing flannel

kubectl apply -f https://github.com/coreos/flannel/raw/master/Documentation/kube-flannel.yml

# If the pod subnet in kubeadm-init-config.yaml was customized to something other than 10.244.0.0/16, the corresponding subnet in kube-flannel.yml must be edited to match

# kube-flannel.yml does not bundle RBAC permissions. If you need those, run:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/k8s-manifests/kube-flannel-rbac.yml
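To confirm the network add-on came up, a minimal check; the manifest above deploys flannel as a DaemonSet in kube-system, so one flannel pod per node should reach Running:

```bash
# One flannel pod per node should be Running
kubectl get pods -n kube-system -o wide | grep flannel

# Nodes should move from NotReady to Ready once the CNI is in place
kubectl get nodes
```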


Remove the master taint

[root@master01 ~]# kubectl taint nodes --all node-role.kubernetes.io/master-   # allow workloads to be scheduled on the masters

Tip: after the internal applications have been deployed, you can run kubectl taint node master01 node-role.kubernetes.io/master="":NoSchedule to set the master back to a master-only state.
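To verify the current taint state of a master, inspect the node description:

```bash
# Shows "Taints: <none>" after the taint has been removed
kubectl describe node master01 | grep -i taints
```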



Enable the kubelet read-only port for monitoring


```bash
echo 'KUBELET_EXTRA_ARGS="--read-only-port=10255"' >> /etc/sysconfig/kubelet

systemctl restart kubelet.service
```
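A quick check that the read-only port is serving; the kubelet read-only port exposes unauthenticated endpoints such as /pods and /metrics over plain HTTP, so it should only be reachable from the monitoring network:

```bash
# Should return JSON describing the pods on this node
curl -s http://127.0.0.1:10255/pods | head -c 200; echo

# Should return Prometheus-format kubelet metrics
curl -s http://127.0.0.1:10255/metrics | head -n 5
```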

