KubeSphere Installation and Basic Usage (Part 1)


Reference: https://kubesphere.com.cn/docs/quick-start/minimal-kubesphere-on-k8s/

The version installed here is KubeSphere 3.2.1.

1. Installation

1. Prerequisites

1. Kubernetes version v1.19.x or later

My versions are as follows:

[root@k8smaster01 storageclass]# kubectl version
Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.3", GitCommit:"ca643a4d1f7bfe34773c74f79527be4afd95bf39", GitTreeState:"clean", BuildDate:"2021-07-15T21:04:39Z", GoVersion:"go1.16.6", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.3", GitCommit:"ca643a4d1f7bfe34773c74f79527be4afd95bf39", GitTreeState:"clean", BuildDate:"2021-07-15T20:59:07Z", GoVersion:"go1.16.6", Compiler:"gc", Platform:"linux/amd64"}
[root@k8smaster01 storageclass]# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.3", GitCommit:"ca643a4d1f7bfe34773c74f79527be4afd95bf39", GitTreeState:"clean", BuildDate:"2021-07-15T21:03:28Z", GoVersion:"go1.16.6", Compiler:"gc", Platform:"linux/amd64"}

2. CPU > 1 core, memory > 2 GB

3. Before installing, a default StorageClass must be configured in the cluster (see reference)
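A default StorageClass is declared via the `storageclass.kubernetes.io/is-default-class` annotation. The sketch below reuses the NFS StorageClass name and provisioner that appear later in this article, so treat the concrete values as illustrative:

```yaml
# Mark a StorageClass as the cluster default so KubeSphere's PVCs
# bind automatically (name/provisioner here are illustrative).
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: course-nfs-storage
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: fuseim.pri/ifs   # must match your external NFS provisioner
reclaimPolicy: Delete
volumeBindingMode: Immediate
```

You can confirm with `kubectl get sc`: the default class is shown with a `(default)` suffix after its name.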

2. Installation

Run the following commands:

kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.2.1/kubesphere-installer.yaml
kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.2.1/cluster-configuration.yaml
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f

During installation I hit a few errors that were hard to deal with; the solutions follow:

1. Error one:

failed: [localhost] (item={'ns': 'kubesphere-system', 'kind': 'users.iam.kubesphere.io', 'resource': 'admin', 'release': 'ks-core'}) => {"ansible_loop_var": "item", "changed": true, "cmd": "/usr/local/bin/kubectl -n kubesphere-system annotate --overwrite users.iam.kubesphere.io admin meta.helm.sh/release-name=ks-core && /usr/local/bin/kubectl -n kubesphere-system annotate --overwrite users.iam.kubesphere.io admin meta.helm.sh/release-namespace=kubesphere-system && /usr/local/bin/kubectl -n kubesphere-system label --overwrite users.iam.kubesphere.io admin app.kubernetes.io/managed-by=Helm\n", "delta": "0:00:00.675675", "end": "2022-02-10 04:53:09.022419", "failed_when_result": true, "item": {"kind": "users.iam.kubesphere.io", "ns": "kubesphere-system", "release": "ks-core", "resource": "admin"}, "msg": "non-zero return code", "rc": 1, "start": "2022-02-10 04:53:08.346744", "stderr": "Error from server (InternalError): Internal error occurred: failed calling webhook \"users.iam.kubesphere.io\": Post \"https://ks-controller-manager.kubesphere-system.svc:443/validate-email-iam-kubesphere-io-v1alpha2?timeout=30s\": service \"ks-controller-manager\" not found", "stderr_lines": ["Error from server (InternalError): Internal error occurred: failed calling webhook \"users.iam.kubesphere.io\": Post \"https://ks-controller-manager.kubesphere-system.svc:443/validate-email-iam-kubesphere-io-v1alpha2?timeout=30s\": service \"ks-controller-manager\" not found"], "stdout": "", "stdout_lines": []}

Solution:

See https://github.com/kubesphere/ks-installer/blob/master/scripts/kubesphere-delete.sh — download the script to the master node, run it to delete KubeSphere, then reinstall.
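A sketch of that procedure (the raw-file URL is assumed from the repository path linked above; review the script before running it):

```shell
# Download the KubeSphere uninstall script to the master node
# (raw URL assumed from the ks-installer repository layout)
curl -sSLO https://raw.githubusercontent.com/kubesphere/ks-installer/master/scripts/kubesphere-delete.sh

# Remove all KubeSphere resources, then reinstall
sh kubesphere-delete.sh
kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.2.1/kubesphere-installer.yaml
kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.2.1/cluster-configuration.yaml
```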

2. Error two:

Error message:

Failed to ansible-playbook result-info.yaml

Symptom: port 30880 is reachable and the Service was created successfully, but logging in fails with the following error:

 Solution:

(1) Check the pods:

[root@k8smaster02 ~]# kubectl get pods -n kubesphere-system
NAME                                     READY   STATUS             RESTARTS   AGE
ks-apiserver-5866f585fc-6plkr            0/1     CrashLoopBackOff   7          14m
ks-apiserver-5866f585fc-jcwpq            0/1     CrashLoopBackOff   8          21m
ks-console-65f4d44d88-9qwwz              1/1     Running            0          29m
ks-console-65f4d44d88-hq5pd              1/1     Running            0          29m
ks-controller-manager-754947b99b-mvdmz   1/1     Running            0          21m
ks-controller-manager-754947b99b-zrmj7   1/1     Running            0          22m
ks-installer-85dcfff87d-4qp8v            1/1     Running            0          34m
redis-ha-haproxy-868fdbddd4-j2ttx        1/1     Running            0          32m
redis-ha-haproxy-868fdbddd4-qpvj7        1/1     Running            0          32m
redis-ha-haproxy-868fdbddd4-zffzq        1/1     Running            0          32m
redis-ha-server-0                        0/2     Pending            0          32m

Check why the pod failed to start:

[root@k8smaster02 ~]# kubectl logs -n kubesphere-system ks-apiserver-5866f585fc-6plkr
W0210 21:25:00.736862       1 client_config.go:615] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
W0210 21:25:00.741192       1 client_config.go:615] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
W0210 21:25:00.760218       1 metricsserver.go:238] Metrics API not available.
Error: failed to connect to redis service, please check redis status, error: EOF
2022/02/10 21:25:00 failed to connect to redis service, please check redis status, error: EOF

  We can see Redis is not up; next, check why Redis failed to start:

[root@k8smaster02 ~]# kubectl describe pods -n kubesphere-system redis-ha-server-0
...
Events:
  Type     Reason            Age                   From               Message
  ----     ------            ----                  ----               -------
  Warning  FailedScheduling  3m33s (x41 over 34m)  default-scheduler  0/3 nodes are available: 3 pod has unbound immediate PersistentVolumeClaims.

  We can see the cause is the PVC

(2) Check the StorageClass and the PVC

[root@k8smaster02 ~]# kubectl get sc
NAME                           PROVISIONER      RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
course-nfs-storage (default)   fuseim.pri/ifs   Delete          Immediate           false                  40h
[root@k8smaster02 ~]# kubectl get pvc -n kubesphere-system
NAME                     STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS         AGE
data-redis-ha-server-0   Pending                                      course-nfs-storage   38m

  The StorageClass looks fine, but the PVC is stuck in Pending. Describing the PVC shows the following:

persistentvolume-controller waiting for a volume to be created, either by external provisioner XXX

(3) I chose to delete the related storage resources and redeploy the StorageClass. After rebuilding it, I created a PVC myself and saw a PV created automatically, which means the StorageClass is healthy; once the StorageClass works, everything else proceeds normally.
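Such a verification PVC can be as small as the following sketch (the name and size here mirror the 1Mi `default/test-pvc` entry visible in the PV list; adjust as needed):

```yaml
# Tiny throwaway PVC: if the default StorageClass is healthy,
# the external provisioner creates and binds a PV automatically.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
  namespace: default
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
```

After applying it, `kubectl get pvc test-pvc` should show the claim in `Bound` status within a few seconds.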

(4) With the StorageClass healthy, check the pods, PVs, PVCs, and so on again:

[root@k8smaster01 storageclass]# kubectl get pods -n kubesphere-system
NAME                                     READY   STATUS    RESTARTS   AGE
ks-apiserver-5866f585fc-6plkr            1/1     Running   30         144m
ks-apiserver-5866f585fc-jcwpq            1/1     Running   31         152m
ks-console-65f4d44d88-9qwwz              1/1     Running   0          160m
ks-console-65f4d44d88-hq5pd              1/1     Running   0          160m
ks-controller-manager-754947b99b-mvdmz   1/1     Running   0          152m
ks-controller-manager-754947b99b-zrmj7   1/1     Running   1          153m
ks-installer-85dcfff87d-4qp8v            1/1     Running   0          165m
redis-ha-haproxy-868fdbddd4-j2ttx        1/1     Running   0          163m
redis-ha-haproxy-868fdbddd4-qpvj7        1/1     Running   0          163m
redis-ha-haproxy-868fdbddd4-zffzq        1/1     Running   0          163m
redis-ha-server-0                        2/2     Running   0          163m
redis-ha-server-1                        2/2     Running   0          15m
redis-ha-server-2                        2/2     Running   0          14m
[root@k8smaster01 storageclass]# kubectl get sc -n kubesphere-system
NAME                           PROVISIONER      RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
course-nfs-storage (default)   fuseim.pri/ifs   Delete          Immediate           false                  20m
[root@k8smaster01 storageclass]# kubectl get pvc -n kubesphere-system
NAME                     STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS         AGE
data-redis-ha-server-0   Bound    pvc-362fc90c-fc6d-4968-9179-4aba8e77e43a   2Gi        RWX            course-nfs-storage   92m
data-redis-ha-server-1   Bound    pvc-08948bd5-d8d8-4eee-aca8-0cb812b8ecbc   2Gi        RWO            course-nfs-storage   15m
data-redis-ha-server-2   Bound    pvc-b5adfa39-ba80-417a-ada6-96cfa6e2f360   2Gi        RWO            course-nfs-storage   15m
[root@k8smaster01 storageclass]# kubectl get pv -n kubesphere-system
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                                             STORAGECLASS         REASON   AGE
pvc-08948bd5-d8d8-4eee-aca8-0cb812b8ecbc   2Gi        RWO            Delete           Bound    kubesphere-system/data-redis-ha-server-1                          course-nfs-storage            15m
pvc-1b4967ab-6fe7-4cb0-a68d-88570ea994b5   20Gi       RWO            Delete           Bound    kubesphere-monitoring-system/prometheus-k8s-db-prometheus-k8s-1   course-nfs-storage            15m
pvc-26820c0c-5499-40bb-b00a-af2a788c17fc   20Gi       RWO            Delete           Bound    kubesphere-monitoring-system/prometheus-k8s-db-prometheus-k8s-0   course-nfs-storage            15m
pvc-362fc90c-fc6d-4968-9179-4aba8e77e43a   2Gi        RWX            Delete           Bound    kubesphere-system/data-redis-ha-server-0                          course-nfs-storage            15m
pvc-574ccbcc-e391-4d6f-b938-139813041e76   1Mi        RWX            Delete           Bound    default/test-pvc                                                  course-nfs-storage            15m
pvc-b5adfa39-ba80-417a-ada6-96cfa6e2f360   2Gi        RWO            Delete           Bound    kubesphere-system/data-redis-ha-server-2                          course-nfs-storage            15m
[root@k8smaster01 storageclass]# 

3. Logging in

1. Check the Services

[root@k8smaster01 storageclass]# kubectl get svc -n kubesphere-system
NAME                    TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)              AGE
ks-apiserver            ClusterIP   10.1.132.230   <none>        80/TCP               162m
ks-console              NodePort    10.1.10.30     <none>        80:30880/TCP         162m
ks-controller-manager   ClusterIP   10.1.35.90     <none>        443/TCP              162m
redis                   ClusterIP   10.1.133.217   <none>        6379/TCP             164m
redis-ha                ClusterIP   None           <none>        6379/TCP,26379/TCP   164m
redis-ha-announce-0     ClusterIP   10.1.83.53     <none>        6379/TCP,26379/TCP   164m
redis-ha-announce-1     ClusterIP   10.1.31.40     <none>        6379/TCP,26379/TCP   164m
redis-ha-announce-2     ClusterIP   10.1.204.175   <none>        6379/TCP,26379/TCP   164m

2. Visit port 30880 on any cluster node and log in with the default account and password (admin/P@88w0rd). The password must be changed after the first login.

3. After a successful login, the page looks like this:

4. Select a cluster to view its information

(1) Cluster Nodes shows node information

  Double-click a node to view monitoring metrics for that node.

(2) You can open the built-in web console directly and run kubectl commands:

(3) The left menu lists the system components, which mainly include the following:

5. Workloads: viewing the related controllers

(1) View Deployments

   You can see the existing components; on the right you can edit them or create new ones.

(2) Double-click one to view its details; you can also change the replica count (the number of pods for that workload) and perform other operations.

6. Select a pod to view its container logs or open a terminal into the container

(1) In the left menu, select Pods

 (2) Double-click to open one

 (3) Click to view the logs

 The logs look like this:

 (4) Open a terminal into the container

 The terminal window looks like this:

(5) View the container's monitoring metrics

7. View Services and their external port mappings. You can also create new Services or edit existing ones; click Services in the menu:

 Double-click a Service to view its details:

  Jobs, configurations, and other resources can also be viewed; they are not shown here.

8. Create an nginx Deployment through the UI and expose it with a Service

(1) Go to Workloads and create a Deployment with the following settings

1》 Basic information

 2》 Pod settings: we select just a single nginx container

 3》 For volume settings and advanced settings, simply click Next

4》 Review the generated YAML

apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: default
  labels:
    app: mynginx
  name: mynginx
  annotations:
    kubesphere.io/alias-name: mynginx
    kubesphere.io/description: 测试nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mynginx
  template:
    metadata:
      labels:
        app: mynginx
    spec:
      containers:
        - name: container-1qils3
          imagePullPolicy: IfNotPresent
          image: nginx
      serviceAccount: default
      initContainers: []
      volumes: []
      imagePullSecrets: null
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%
      maxSurge: 25%

5》 Click Create; after creation the information looks like this:

(2) Create a Service

1》 Basic information:

2》 Service settings:

3》 In advanced settings, set the access type to NodePort

4》 The YAML content is as follows:

apiVersion: v1
kind: Service
metadata:
  namespace: default
  labels:
    app: mynginx-svc
  name: mynginx-svc
  annotations:
    kubesphere.io/alias-name: mynginx-svc
    kubesphere.io/description: mynginx-svc
spec:
  sessionAffinity: None
  selector:
    app: mynginx
  ports:
    - name: port-http
      protocol: TCP
      targetPort: 80
      port: 80
  type: NodePort

5》 Click Create, then view the list:

6》 After successful creation, the Service's YAML looks like this:

kind: Service
apiVersion: v1
metadata:
  name: mynginx-svc
  namespace: default
  labels:
    app: mynginx-svc
  annotations:
    kubesphere.io/alias-name: mynginx-svc
    kubesphere.io/creator: admin
    kubesphere.io/description: mynginx-svc
spec:
  ports:
    - name: port-http
      protocol: TCP
      port: 80
      targetPort: 80
      nodePort: 31689
  selector:
    app: mynginx
  clusterIP: 10.1.222.4
  clusterIPs:
    - 10.1.222.4
  type: NodePort
  sessionAffinity: None
  externalTrafficPolicy: Cluster
  ipFamilies:
    - IPv4
  ipFamilyPolicy: SingleStack

Click to view its port rules:

(3) Test: if nginx can be reached on port 31689 of any node, the deployment succeeded.
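A quick way to check from the command line (the node address below is a placeholder; substitute any of your own nodes, since a NodePort listens on every node):

```shell
# Placeholder address: replace with the IP of any cluster node
NODE_IP=192.168.1.10

# Fetch the page through the NodePort and look for nginx's default greeting
curl -s "http://${NODE_IP}:31689" | grep -i "welcome to nginx"
```

If the default nginx welcome page comes back, both the Deployment and the NodePort Service are working.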