
Phytium/Kunpeng + Kylin V4 Defense Edition: Offline Deployment of KubeSphere 3.4.1 on arm64

Administrator
2024-12-11

Driven by China's Xinchuang (IT application innovation) policy and by changes in China-US relations, Chinese government and enterprise organizations are accelerating the construction of an indigenous information technology ecosystem. In key areas touching national security in particular, using domestic chips and domestic operating systems has become a hard requirement for critical information infrastructure. Because of the particular domestic environment, and because many enterprise projects are deployed on intranets and private networks, offline deployment has become a common deployment approach.

This post demonstrates an offline deployment of K8s and KubeSphere on the arm64 Kylin V4 Defense Edition operating system.

Software versions involved in this environment

  • Server CPU 1: Kunpeng-920

  • Server CPU 2: Phytium D2000

  • Operating system 1: Kylin V4 Defense Edition (server)

  • Operating system 2: Kylin V4 Defense Edition (desktop)

  • Docker: 24.0.9

  • KubeSphere: v3.4.1

  • Kubernetes: v1.25.16

  • KubeKey: v3.1.5

1. Notes

This post only demonstrates the offline deployment itself. The offline artifact and the other installation packages can be built by following the earlier posts, or obtained by adding the author on WeChat [sd_zdhr]. If you need other operating systems, such as KylinSec, UOS, openEuler, Kylin V10, NeoKylin, or Anolis OS, you can also contact the author.

2. Remove the bundled podman

podman is the container engine bundled with the Kylin system. To avoid conflicts with Docker later on, uninstall it outright; otherwise coredns/nodelocaldns will fail to start and various Docker permission problems will follow. Run on all nodes:

yum remove podman
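
A quick check after the removal, in case anything podman-related is still installed (a minimal sketch; run it on every node):

rpm -q podman    # expect: package podman is not installed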

3. Copy the installation packages to the offline environment

Copy the downloaded KubeKey, the artifact, the scripts, and the exported images to the installation node in the offline environment via a USB drive or other media.

Note:

The files need to be uploaded to the /root directory. Kylin V4 Defense Edition manages disk directories strictly; non-root users should work under /home/<user> as far as possible.

To be on the safe side, this post uses the root user throughout and places the files under /root.
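
For reference, a minimal sketch of the copy step when the media is reachable over the network rather than a USB drive (file names follow the ones used later in this post, and 192.168.200.7 is the node1 address from the sample configuration; adjust both to your environment):

scp kk ks3.4-artifact.tar.gz ks3.4.1-images.tar.gz create_project_harbor.sh load-push.sh root@192.168.200.7:/root/
ssh root@192.168.200.7 "ls -lh /root/ && chmod +x /root/kk"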

4. Install the Harbor private registry

Since Harbor does not officially provide an arm installation package, kk cannot install it automatically, so we have to install it manually.

  • Install docker and docker-compose

Package (cloud drive): [docker](https://pan.quark.cn/s/7f22648bb59e "docker")

After extracting, run the bundled install.sh

  • Install harbor

Package: [harbor](https://pan.quark.cn/s/05e2f4ad31c4 "harbor")

After extracting, run the bundled install.sh

Enter the server IP when prompted and wait for the installation to finish

  • Create the projects in Harbor

vim create_project_harbor.sh
#!/usr/bin/env bash
   
# Copyright 2018 The KubeSphere Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
   
url="https://dockerhub.kubekey.local"  #修改url的值为https://dockerhub.kubekey.local
user="admin"
passwd="Harbor12345"
   
harbor_projects=(
    kubesphereio
    kubesphere
)
   
for project in "${harbor_projects[@]}"; do
    echo "creating $project"
    curl -u "${user}:${passwd}" -X POST -H "Content-Type: application/json" "${url}/api/v2.0/projects" -d "{ \"project_name\": \"${project}\", \"public\": true}" -k #curl命令末尾加上 -k
done

After granting execute permission to the script, run it to create the projects: ./create_project_harbor.sh
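
A minimal sequence for that step, plus an optional spot check that the project exists (the address and credentials come from the script above; the check assumes Harbor's v2 REST API is reachable at the same address and uses -k because of the self-signed certificate):

chmod +x create_project_harbor.sh
./create_project_harbor.sh
curl -k -u admin:Harbor12345 "https://dockerhub.kubekey.local/api/v2.0/projects?name=kubesphereio"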

5. Edit the config-ks.yaml configuration file

Update the node and Harbor details accordingly

Note:

When filling in the node IPs, use the root account, and confirm each IP and password beforehand with ssh root@IP
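
For example, a quick connectivity check against the node1 address used in the configuration below:

ssh root@192.168.200.7 "hostname; uname -m"    # expect aarch64 on these machines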

apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: node1, address: 192.168.200.7, internalAddress: "192.168.200.7", user: root, password: "123456", arch: arm64}
  roleGroups:
    etcd:
    - node1 # All the nodes in your cluster that serve as the etcd nodes.
    master:
    - node1
      #  - node[2:10] # From node2 to node10. All the nodes in your cluster that serve as the master nodes.
    worker:
    - node1
    registry:
    - node1
  controlPlaneEndpoint:
    # Internal loadbalancer for apiservers. Support: haproxy, kube-vip [Default: ""]
    internalLoadbalancer: haproxy
    domain: lb.kubesphere.local
    address: ""

    port: 6443
  kubernetes:
    version: v1.25.16
    containerManager: docker
    clusterName: cluster.local
    # Whether to install a script which can automatically renew the Kubernetes control plane certificates. [Default: false]
    autoRenewCerts: true
    # maxPods is the number of Pods that can run on this Kubelet. [Default: 110]
    maxPods: 210
  etcd:
    type: kubekey

    ## caFile, certFile and keyFile need not be set, if TLS authentication is not enabled for the existing etcd.
    # external:
    #   endpoints:
    #     - https://192.168.6.6:2379
    #   caFile: /pki/etcd/ca.crt
    #   certFile: /pki/etcd/etcd.crt
    #   keyFile: /pki/etcd/etcd.key
    dataDir: "/var/lib/etcd"
    heartbeatInterval: 250
    electionTimeout: 5000
    snapshotCount: 10000
    autoCompactionRetention: 8
    metrics: basic
    quotaBackendBytes: 2147483648
    maxRequestBytes: 1572864
    maxSnapshots: 5
    maxWals: 5
    logLevel: info
  network:
    plugin: calico
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18
    multusCNI:
      enabled: false
  storage:
    openebs:
      basePath: /var/openebs/local # base path of the local PV provisioner
  registry:
    type: harbor
    registryMirrors: []
    insecureRegistries: []
    privateRegistry: "dockerhub.kubekey.local"
    namespaceOverride: "kubesphereio"
    auths: # if docker add by `docker login`, if containerd append to `/etc/containerd/config.toml`
      "dockerhub.kubekey.local":
        username: "admin"
        password: Harbor12345
        skipTLSVerify: true # Allow contacting registries over HTTPS with failed TLS verification.
        plainHTTP: false # Allow contacting registries over HTTP.
        certsPath: "/etc/docker/certs.d/dockerhub.kubekey.local" # Use certificates at path (*.crt, *.cert, *.key) to connect to the registry.
  addons: [] # You can install cloud-native addons (Chart or YAML) by using this field.
---
apiVersion: installer.kubesphere.io/v1alpha1
kind: ClusterConfiguration
metadata:
  name: ks-installer
  namespace: kubesphere-system
  labels:
    version: v3.4.1
spec:
  persistence:
    storageClass: ""
  authentication:
    jwtSecret: ""
  zone: ""
  local_registry: ""
  namespace_override: ""
  # dev_tag: ""
  etcd:
    monitoring: true
    endpointIps: localhost
    port: 2379
    tlsEnable: true
  common:
    core:
      console:
        enableMultiLogin: true
        port: 31688
        type: NodePort
    # apiserver:
    #  resources: {}
    # controllerManager:
    #  resources: {}
    redis:
      enabled: false
      volumeSize: 2Gi
    openldap:
      enabled: false
      volumeSize: 2Gi
    minio:
      volumeSize: 20Gi
    monitoring:
      # type: external
      endpoint: http://prometheus-operated.kubesphere-monitoring-system.svc:9090
      GPUMonitoring:
        enabled: false
    gpu:
      kinds:
      - resourceName: "nvidia.com/gpu"
        resourceType: "GPU"
        default: true
    es:
      # master:
      #   volumeSize: 4Gi
      #   replicas: 1
      #   resources: {}
      # data:
      #   volumeSize: 20Gi
      #   replicas: 1
      #   resources: {}
      logMaxAge: 7
      elkPrefix: logstash
      basicAuth:
        enabled: false
        username: ""
        password: ""
      externalElasticsearchHost: ""
      externalElasticsearchPort: ""
    opensearch:
      # master:
      #   volumeSize: 4Gi
      #   replicas: 1
      #   resources: {}
      # data:
      #   volumeSize: 20Gi
      #   replicas: 1
      #   resources: {}
      enabled: true
      logMaxAge: 7
      opensearchPrefix: whizard
      basicAuth:
        enabled: true
        username: "admin"
        password: "admin"
      externalOpensearchHost: ""
      externalOpensearchPort: ""
      dashboard:
        enabled: false
  alerting:
    enabled: true
    # thanosruler:
    #   replicas: 1
    #   resources: {}
  auditing:
    enabled: false
    # operator:
    #   resources: {}
    # webhook:
    #   resources: {}
  devops:
    enabled: false
    # resources: {}
    jenkinsMemoryLim: 8Gi
    jenkinsMemoryReq: 4Gi
    jenkinsVolumeSize: 8Gi
  events:
    enabled: false
    # operator:
    #   resources: {}
    # exporter:
    #   resources: {}
    # ruler:
    #   enabled: true
    #   replicas: 2
    #   resources: {}
  logging:
    enabled: false
    logsidecar:
      enabled: true
      replicas: 2
      # resources: {}
  metrics_server:
    enabled: false
  monitoring:
    storageClass: ""
    node_exporter:
      port: 9100
      # resources: {}
    # kube_rbac_proxy:
    #   resources: {}
    # kube_state_metrics:
    #   resources: {}
    # prometheus:
    #   replicas: 1
    #   volumeSize: 20Gi
    #   resources: {}
    #   operator:
    #     resources: {}
    # alertmanager:
    #   replicas: 1
    #   resources: {}
    # notification_manager:
    #   resources: {}
    #   operator:
    #     resources: {}
    #   proxy:
    #     resources: {}
    gpu:
      nvidia_dcgm_exporter:
        enabled: false
        # resources: {}
  multicluster:
    clusterRole: none
  network:
    networkpolicy:
      enabled: false
    ippool:
      type: none
    topology:
      type: none
  openpitrix:
    store:
      enabled: false
  servicemesh:
    enabled: false
    istio:
      components:
        ingressGateways:
        - name: istio-ingressgateway
          enabled: false
        cni:
          enabled: false
  edgeruntime:
    enabled: false
    kubeedge:
      enabled: false
      cloudCore:
        cloudHub:
          advertiseAddress:
            - ""
        service:
          cloudhubNodePort: "30000"
          cloudhubQuicNodePort: "30001"
          cloudhubHttpsNodePort: "30002"
          cloudstreamNodePort: "30003"
          tunnelNodePort: "30004"
        # resources: {}
        # hostNetWork: false
      iptables-manager:
        enabled: true
        mode: "external"
        # resources: {}
      # edgeService:
      #   resources: {}
  terminal:
    timeout: 600

6. Push the images to the private registry

Extract the ks3.4.1-images.tar.gz archive, then run ./load-push.sh to push the images to the private registry.
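
A minimal sketch of the extraction step; run it in the directory that will hold the *.tar / *.tar.gz files, since the push script (shown below) locates them with find . :

tar -zxvf ks3.4.1-images.tar.gz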

  • The push script

vim ./load-push.sh
#!/bin/bash
#
FILES=$(find . -type f \( -iname "*.tar" -o -iname "*.tar.gz" \) -printf '%P\n' | grep -E ".tar$|.tar.gz$")
Harbor="dockerhub.kubekey.local"
ProjectName="kubesphereio"
docker login -u admin -p Harbor12345 ${Harbor}
echo "--------[Login Harbor succeed]--------"
# Iterate over all ".tar" / ".tar.gz" files and load each Docker image
for file in ${FILES}
do
    echo "--------[Loading Docker image from $file]--------"
    docker load -i "$file" > loadimages
    IMAGE=`cat loadimages | grep 'Loaded image:' | awk '{print $3}' | head -1`
    IMAGE2=`cat loadimages | grep 'Loaded image:' | awk '{print $3}' | head -1|awk -F / '{print $3}'`
    echo "--------[$IMAGE]--------"
    docker tag $IMAGE $Harbor/$ProjectName/$IMAGE2
    docker push $Harbor/$ProjectName/$IMAGE2
done
echo "--------[All Docker images push successfully]--------"
./load-push.sh
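
After the script finishes, a hedged way to spot-check that the images landed in the kubesphereio project is to list its repositories through Harbor's v2 API (address and credentials as in section 4):

curl -k -u admin:Harbor12345 "https://dockerhub.kubekey.local/api/v2.0/projects/kubesphereio/repositories?page_size=100" | grep -o '"name":"[^"]*"'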

7. Install K8s and KubeSphere

The kk create cluster command has to be run twice here. The first run passes -a ks3.4-artifact.tar.gz so that the artifact gets extracted; because it is not a complete artifact, this first run will end with an error. For the second run, drop the -a ks3.4-artifact.tar.gz parameter (the artifact contents were already extracted during the first run) and the installation will go through.

  • First run

./kk create cluster -f config-ks.yaml -a ks3.4-artifact.tar.gz

  • Second run

./kk create cluster -f config-ks.yaml

Wait about ten to twenty minutes until the success message appears.

8. Verification

The base components are all running normally.
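
A few standard commands to confirm this, plus the console address (the NodePort 31688 comes from the ClusterConfiguration in section 5; admin / P@88w0rd is KubeSphere's default account unless it was changed):

kubectl get nodes -o wide
kubectl get pods -A | grep -vE "Running|Completed"    # anything listed here still needs attention
kubectl logs -n kubesphere-system deploy/ks-installer -f    # follow the installer log if needed
# Console: http://<node-ip>:31688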

The same procedure also works with the server ISO image: Kylin-4.0.2-server-sp4-419live-CY_01-20231101.J1-ARM64.iso

9. Summary

This post is a hands-on record of the complete offline deployment of K8s and KubeSphere on the Kylin V4 Defense Edition operating system running on Kunpeng/Phytium chips. It is suited to quickly standing up a K8s cluster and the KubeSphere platform in environments without Internet access.

Adding nodes

Note:

  • Do not change the IP of node1

  • If a node to be added already has a Docker environment, keep it at version 20.10.18 or later (see the version check sketch after this note)

    • If an older version is installed, uninstall it if at all possible

    • If it cannot easily be uninstalled, add the following to /etc/docker/daemon.json (vim /etc/docker/daemon.json) and then run systemctl daemon-reload && systemctl restart docker

{
  "experimental": true
}
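
A quick way to check the existing Docker version on a node that is about to be added (a minimal sketch):

docker version --format '{{.Server.Version}}' 2>/dev/null || echo "docker not installed"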

Modify the cluster configuration file config-ks.yaml and add the IP and details of each node machine to be added

apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: node1, address: 192.168.200.7, internalAddress: "192.168.200.7", user: root, password: "123456", arch: arm64}
  - {name: node2, address: 192.168.200.8, internalAddress: "192.168.200.8", user: root, password: "123456", arch: arm64}
  - {name: node3, address: 192.168.200.9, internalAddress: "192.168.200.9", user: root, password: "123456", arch: arm64}
  roleGroups:
    etcd:
    - node1 # All the nodes in your cluster that serve as the etcd nodes.
    master:
    - node1
      #  - node[2:10] # From node2 to node10. All the nodes in your cluster that serve as the master nodes.
    worker:
    - node1
    - node2
    - node3
    registry:
    - node1
  controlPlaneEndpoint:
    # Internal loadbalancer for apiservers. Support: haproxy, kube-vip [Default: ""]
    internalLoadbalancer: haproxy
    domain: lb.kubesphere.local
    address: ""

    port: 6443
    
---

Run the add-nodes command

./kk add nodes -f config-ks.yaml

Based on the configuration file, this command will add every node listed there that has not yet joined the cluster.

  • If the command fails

If it complains about missing system dependencies, run a system initialization first, using the offline artifact:

./kk create cluster -f config-ks.yaml -a ks3.4-artifact.tar.gz --with-packages

This step will end with the error failed: open /root/kubekey/images/index.json: no such file or directory. Ignore it; the only goal here is the system initialization.

Then run again:

./kk add nodes -f config-ks.yaml
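
Once the add-nodes run succeeds, the new workers should show up as Ready (node names from the extended configuration above):

kubectl get nodes -o wide    # node2 and node3 should appear with STATUS Ready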
