k8s Cluster Setup Log
Built with VMware-15 and CentOS-7.
Cluster: master: 2 GB RAM, 2 cores
node1, node2: 1 GB RAM, 1 core each
Docker installation: https://blog.csdn.net/li1325169021/article/details/90780627
Kubernetes installation log follows:
# Set a permanent hostname (plain `hostname` only lasts for the session; hostnamectl persists it)
[test@test ~]$ sudo hostname master
[test@test ~]$ hostnamectl set-hostname master
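kubeadm init later warns that hostname "master" could not be reached, because nothing resolves the node names. A sketch of the /etc/hosts entries that avoid this, written to a scratch copy; 192.168.52.10 is the master IP used in this log, while the node1/node2 addresses are made-up examples:

```shell
# Sketch: make master/node1/node2 resolvable on every machine.
# HOSTS is a scratch copy for illustration; the real file is /etc/hosts.
# 192.168.52.10 is the master IP from this log; the node IPs are
# assumed examples, not taken from the log.
HOSTS=$(mktemp)
cat >> "$HOSTS" <<'EOF'
192.168.52.10 master
192.168.52.11 node1
192.168.52.12 node2
EOF
cat "$HOSTS"
```

The same three lines would be appended to /etc/hosts on all three VMs.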
# Related configuration (firewall, etc.)
[test@test ~]$ systemctl stop firewalld
[test@test ~]$ systemctl disable firewalld
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[test@test ~]$ sudo setenforce 0
[test@test ~]$ sudo swapoff -a
[test@test ~]$ sudo vim /etc/fstab        # comment out the swap line so swap stays off after reboot
[test@test ~]$ sudo vim /etc/sysctl.d/k8s.conf
# Add the following:
# net.bridge.bridge-nf-call-ip6tables = 1
# net.bridge.bridge-nf-call-iptables = 1
# net.ipv4.ip_forward = 1
[test@test ~]$ sudo modprobe br_netfilter
[test@test ~]$ sudo sysctl -p /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
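Two of the steps above do not survive a reboot on their own: `swapoff -a` needs the swap line commented out in /etc/fstab, and `modprobe br_netfilter` needs a modules-load.d entry. A sketch on scratch copies (the fstab lines are illustrative device names, not taken from this log; real paths are /etc/fstab and /etc/modules-load.d/k8s.conf):

```shell
# Sketch: persist the swap-off and br_netfilter settings across reboots.
# FSTAB and MODULES_DIR are scratch stand-ins for the real files.
FSTAB=$(mktemp)
printf '%s\n' '/dev/mapper/centos-root / xfs defaults 0 0' \
              '/dev/mapper/centos-swap swap swap defaults 0 0' > "$FSTAB"
# comment out any uncommented line whose mount type is swap
sed -i 's|^\([^#].*[[:space:]]swap[[:space:]]\)|#\1|' "$FSTAB"

# load br_netfilter automatically at boot
MODULES_DIR=$(mktemp -d)
echo br_netfilter > "$MODULES_DIR/k8s.conf"

grep '^#' "$FSTAB"
cat "$MODULES_DIR/k8s.conf"
```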
# Start Docker and enable it at boot
[test@test ~]$ sudo service docker start
Redirecting to /bin/systemctl start docker.service
[test@test ~]$ sudo systemctl enable docker
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
# Configure the Aliyun registry mirror:
[test@test ~]$ sudo mkdir -p /etc/docker
[test@test ~]$ sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://obww7jh1.mirror.aliyuncs.com"]
}
EOF
{ "registry-mirrors": ["https://obww7jh1.mirror.aliyuncs.com"] }
[test@test ~]$ systemctl daemon-reload
[test@test ~]$ systemctl restart docker
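kubeadm init later warns that Docker is using the cgroupfs driver instead of the recommended systemd one; the same daemon.json can fix that at the same time as it sets the mirror. A sketch written to a scratch directory (the real path is /etc/docker/daemon.json, followed by a daemon restart):

```shell
# Sketch: daemon.json with the Aliyun mirror from this log plus the
# systemd cgroup driver that kubeadm's preflight check recommends.
# DOCKER_ETC stands in for /etc/docker.
DOCKER_ETC=$(mktemp -d)
cat > "$DOCKER_ETC/daemon.json" <<'EOF'
{
  "registry-mirrors": ["https://obww7jh1.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
# a quick validity check catches hand-editing mistakes
python3 -m json.tool "$DOCKER_ETC/daemon.json" >/dev/null && echo valid
```

After writing the real file, `systemctl daemon-reload && systemctl restart docker` applies it.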
# Install kubernetes
[test@test ~]$ sudo vim /etc/yum.repos.d/kubernetes.repo
# contents:
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
[test@test ~]$ sudo yum list -y kubeadm --showduplicates
BDB2053 Freeing read locks for locker 0x15bf: 9922/139952141547328
BDB2053 Freeing read locks for locker 0x15c1: 9922/139952141547328
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
 * base: mirrors.ustc.edu.cn
 * extras: mirrors.ustc.edu.cn
 * updates: mirrors.ustc.edu.cn
Importing GPG key 0xA7317B0F "Google Cloud Packages Automatic Signing Key" from https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
Available Packages
kubeadm.x86_64  1.6.0-0   kubernetes
kubeadm.x86_64  1.20.2-0  kubernetes
[test@test ~]$ sudo yum install -y kubeadm-1.18.0-0 kubectl-1.18.0-0 kubelet-1.18.0-0
# (dependency resolution, download, and GPG key import output trimmed)
Installing:
 kubeadm   x86_64  1.18.0-0  kubernetes  8.8 M
 kubectl   x86_64  1.18.0-0  kubernetes  9.5 M
 kubelet   x86_64  1.18.0-0  kubernetes  21 M
Installing for dependencies:
 conntrack-tools         x86_64  1.4.4-7.el7    base        187 k
 cri-tools               x86_64  1.13.0-0       kubernetes  5.1 M
 kubernetes-cni          x86_64  0.8.7-0        kubernetes  19 M
 libnetfilter_cthelper   x86_64  1.0.0-11.el7   base        18 k
 libnetfilter_cttimeout  x86_64  1.0.0-7.el7    base        18 k
 libnetfilter_queue      x86_64  1.0.2-2.el7_2  base        23 k
 socat                   x86_64  1.7.3.2-2.el7  base        290 k

Transaction Summary
Install  3 Packages (+7 Dependent packages)
Total download size: 63 M
Installed size: 266 M

Installed:
  kubeadm.x86_64 0:1.18.0-0   kubectl.x86_64 0:1.18.0-0   kubelet.x86_64 0:1.18.0-0

Dependency Installed:
  conntrack-tools.x86_64 0:1.4.4-7.el7          cri-tools.x86_64 0:1.13.0-0
  kubernetes-cni.x86_64 0:0.8.7-0               libnetfilter_cthelper.x86_64 0:1.0.0-11.el7
  libnetfilter_cttimeout.x86_64 0:1.0.0-7.el7   libnetfilter_queue.x86_64 0:1.0.2-2.el7_2
  socat.x86_64 0:1.7.3.2-2.el7

Complete!
# Initialize the master node (192.168.52.10 is the master's IP)
[test@test ~]$ sudo kubeadm init --kubernetes-version=1.18.0 --apiserver-advertise-address=192.168.52.10 --image-repository=registry.aliyuncs.com/google_containers --service-cidr=10.10.0.0/16 --pod-network-cidr=10.122.0.0/16
W0203 17:04:10.607365 10925 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.0
[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
        [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.2. Latest validated version: 19.03
        [WARNING Hostname]: hostname "master" could not be reached
        [WARNING Hostname]: hostname "master": lookup master on 8.8.8.8:53: no such host
        [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
# (certificate, kubeconfig, and static Pod manifest generation output trimmed)
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 18.503668 seconds
[bootstrap-token] Using token: bkneey.oweomuuhjt6bsnd9
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.52.10:6443 --token bkneey.oweomuuhjt6bsnd9 \
    --discovery-token-ca-cert-hash sha256:22cfff6aa48bcd7edb5d5a4037517f849086f44928a97f2641b7aecdfcaedb91
[test@test ~]$ mkdir -p $HOME/.kube
[test@test ~]$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[test@test ~]$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
[test@test ~]$ sudo wget https://raw.githubusercontent.com/coreos/flannel/2140ac876ef134e0ed5af15c65e414cf26827915/Documentation/kube-flannel.yml
--2021-02-03 19:11:27--  https://raw.githubusercontent.com/coreos/flannel/2140ac876ef134e0ed5af15c65e414cf26827915/Documentation/kube-flannel.yml
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 151.101.0.133, 151.101.64.133, 151.101.128.133, ...
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|151.101.0.133|:443... connected.
Unable to establish SSL connection.
# "Unable to establish SSL connection": the site is blocked from here, so point the hostname at a reachable IP
[test@test ~]$ sudo vim /etc/hosts
# add one line: 199.232.68.133 raw.githubusercontent.com
[test@test ~]$ sudo wget https://raw.githubusercontent.com/coreos/flannel/2140ac876ef134e0ed5af15c65e414cf26827915/Documentation/kube-flannel.yml
--2021-02-03 19:27:30--  https://raw.githubusercontent.com/coreos/flannel/2140ac876ef134e0ed5af15c65e414cf26827915/Documentation/kube-flannel.yml
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 199.232.68.133
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|199.232.68.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 14416 (14K) [text/plain]
Saving to: "kube-flannel.yml"
100%[======================================>] 14,416  11.5KB/s  in 1.2s
2021-02-03 19:27:34 (11.5 KB/s) - "kube-flannel.yml" saved [14416/14416]
[test@test ~]$ sudo vim kube-flannel.yml
# open the file and replace every quay.io with quay-mirror.qiniu.com (https://blog.csdn.net/zsd498537806/article/details/85157560)
[test@test ~]$ kubectl apply -f kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
daemonset.apps/kube-flannel-ds-arm created
daemonset.apps/kube-flannel-ds-ppc64le created
daemonset.apps/kube-flannel-ds-s390x created
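The manual vim edit above rewrites every quay.io reference to the Qiniu mirror by hand; a sed one-liner does the same in one shot. Shown on a scratch file with an illustrative image line (the exact flannel tag in the real YAML may differ):

```shell
# Sketch: swap quay.io for quay-mirror.qiniu.com across the whole file.
# YML is a scratch stand-in; run the sed on the real kube-flannel.yml.
YML=$(mktemp)
echo '        image: quay.io/coreos/flannel:v0.13.0-amd64' > "$YML"
sed -i 's#quay\.io#quay-mirror.qiniu.com#g' "$YML"
cat "$YML"
```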
# Check cluster info
[test@test ~]$ kubectl cluster-info
Kubernetes master is running at https://192.168.52.10:6443
KubeDNS is running at https://192.168.52.10:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
# Check node status (the master is NotReady at this point)
[test@test ~]$ kubectl get nodes
NAME     STATUS     ROLES    AGE   VERSION
master   NotReady   master   29m   v1.18.0
# Check Pods
[test@test ~]$ kubectl get pods --all-namespaces
NAMESPACE     NAME                             READY   STATUS              RESTARTS   AGE
kube-system   coredns-7ff77c879f-5vsp7         0/1     Pending             0          29m
kube-system   coredns-7ff77c879f-q8dsj         0/1     Pending             0          29m
kube-system   etcd-master                      1/1     Running             0          30m
kube-system   kube-apiserver-master            1/1     Running             0          30m
kube-system   kube-controller-manager-master   1/1     Running             0          30m
kube-system   kube-flannel-ds-amd64-sc4fj      0/1     Init:ErrImagePull   0          67s
kube-system   kube-proxy-4tj2h                 1/1     Running             0          29m
kube-system   kube-scheduler-master            1/1     Running             0          30m
# Install CNI configuration
[test@test ~]$ sudo mkdir -p /etc/cni/net.d
[test@test ~]$ sudo bash -c 'cat >/etc/cni/net.d/10-mynet.conf <<EOF
{
    "cniVersion": "0.2.0",
    "name": "mynet",
    "type": "bridge",
    "bridge": "cni0",
    "isGateway": true,
    "ipMasq": true,
    "ipam": {
        "type": "host-local",
        "subnet": "10.22.0.0/16",
        "routes": [
            { "dst": "0.0.0.0/0" }
        ]
    }
}
EOF'
[test@test ~]$ sudo bash -c 'cat >/etc/cni/net.d/99-loopback.conf <<EOF
{
    "cniVersion": "0.2.0",
    "name": "lo",
    "type": "loopback"
}
EOF'
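A typo in a hand-typed CNI conf (a stray `>` continuation marker, a missing comma) keeps the node NotReady, so it is worth validating the JSON right after writing it. A minimal sketch, writing to a scratch directory instead of /etc/cni/net.d:

```shell
# Sketch: write a CNI conf and verify it parses as JSON before relying
# on it. CNI_DIR is a scratch stand-in for /etc/cni/net.d.
CNI_DIR=$(mktemp -d)
cat > "$CNI_DIR/99-loopback.conf" <<'EOF'
{
    "cniVersion": "0.2.0",
    "name": "lo",
    "type": "loopback"
}
EOF
for f in "$CNI_DIR"/*.conf; do
    python3 -m json.tool "$f" >/dev/null && echo "$f: ok"
done
```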
# NotReady -> Ready
[test@test net.d]$ kubectl get nodes
NAME     STATUS   ROLES    AGE   VERSION
master   Ready    master   15h   v1.18.0
# node1 joins
[test@test net.d]$ kubectl get nodes
NAME     STATUS   ROLES    AGE     VERSION
master   Ready    master   16h     v1.18.0
node1    Ready    <none>   7m25s   v1.18.0
# Regenerate a token
[test@test net.d]$ kubeadm token create --print-join-command
W0204 12:46:38.272906 100303 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
kubeadm join 192.168.52.10:6443 --token otfgqs.ltwuqu50iowaw1gn --discovery-token-ca-cert-hash sha256:c7ce25107ba16bacd860f984d88971aef7eb57823859de50e7b39e10cca2f592
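If the join command is lost, the `--discovery-token-ca-cert-hash` part does not need kubeadm at all: it is the SHA-256 digest of the cluster CA's public key, which the standard openssl recipe recomputes from /etc/kubernetes/pki/ca.crt on the master. A sketch using a throwaway self-signed certificate in place of the real CA:

```shell
# Sketch: recompute the discovery hash from a CA certificate.
# CA here is a freshly generated self-signed cert standing in for
# /etc/kubernetes/pki/ca.crt.
CA=$(mktemp)
openssl req -x509 -newkey rsa:2048 -nodes -keyout /dev/null \
  -subj "/CN=kubernetes" -days 1 -out "$CA" 2>/dev/null
HASH=$(openssl x509 -pubkey -in "$CA" \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //')
echo "sha256:$HASH"
```

Run against the real ca.crt, the printed value matches the sha256:... argument in the join commands above.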
# node2 joins
[test@test net.d]$ kubectl get nodes
NAME     STATUS     ROLES    AGE     VERSION
master   Ready      master   17h     v1.18.0
node1    NotReady   <none>   105m    v1.18.0
node2    Ready      <none>   4m54s   v1.18.0

# On node1:
# The first join attempt failed because the token was invalid:
[node1@node1 ~]$ sudo kubeadm join 192.168.52.10:6443 --token bkneey.oweomuuhjt6bsnd9 --discovery-token-ca-cert-hash sha256:22cfff6aa48bcd7edb5d5a4037517f849086f44928a97f2641b7aecdfcaedb91
W0204 10:39:21.279390 10538 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
        [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.2. Latest validated version: 19.03
error execution phase preflight: couldn't validate the identity of the API Server: could not find a JWS signature in the cluster-info ConfigMap for token ID "bkneey"
To see the stack trace of this error execute with --v=5 or higher
# Afterwards the VM showed "out of memory"; the cause is unclear, possibly related to the host machine's memory
# Simply rebooted the node
# Join with a regenerated token
[node1@node1 ~]$ sudo kubeadm join 192.168.52.10:6443 --token 18tr9y.mzky9makbta1re7s --discovery-token-ca-cert-hash sha256:c7ce25107ba16bacd860f984d88971aef7eb57823859de50e7b39e10cca2f592
W0204 11:17:22.828906 9274 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
        [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.2. Latest validated version: 19.03
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
[kubelet-check] Initial timeout of 40s passed.

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
Notes:
1. "Another app is currently holding the yum lock": kill the holding process with kill -s 9 <PID> (the number is a process ID, not a port number)
2. Permission errors when redirecting cat output into a root-owned file: run the whole command via sudo bash -c 'action'
3. Expired tokens can be regenerated with kubeadm token create --print-join-command
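For note 1: the yum lock is just a file holding the PID of the owning process, so killing that PID releases it. A sketch of the same idea using a dummy process and a scratch lock file (on CentOS 7 the real file is /var/run/yum.pid):

```shell
# Sketch: read the PID from the lock file, kill the holder, and remove
# the lock. LOCK and the sleep process stand in for the real yum lock.
LOCK=$(mktemp)
sleep 300 &
HOLDER=$!
echo "$HOLDER" > "$LOCK"
kill -9 "$(cat "$LOCK")" && rm -f "$LOCK"
```

On the real machine this would be `sudo kill -9 "$(cat /var/run/yum.pid)"` when yum reports a held lock.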