[k8s] Offline Deployment of K8S + KubeSphere + Harbor on Kunpeng (arm64) + openEuler


Software versions used in this environment

  • Server CPU: Kunpeng-920

  • Operating system: openEuler 22.03 LTS

  • Docker: 24.0.9

  • Harbor: v2.7.1

  • Kubernetes: v1.23.17

  • KubeSphere: v3.4.1

  • KubeKey: v3.1.5

Basic server information

[root@ecs-437f ks]# uname -a
Linux node1 5.10.0-60.139.0.166.oe2203.aarch64 #1 SMP Thu May 30 05:18:35 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux

[root@ecs-437f ks]# cat /etc/os-release 
NAME="openEuler"
VERSION="22.03 LTS"
ID="openEuler"
VERSION_ID="22.03"
PRETTY_NAME="openEuler 22.03 LTS"
ANSI_COLOR="0;31"

1 Overview

This article only walks through the offline deployment itself. The offline artifact and the other installation packages can be built by following the earlier articles, or obtained by adding the author on WeChat [sd_zdhr].

2 Remove the bundled podman

podman is the container engine that ships with the OS image and conflicts with Docker. If it is left installed, coredns/nodelocaldns may fail to start later and assorted Docker permission problems can appear. Remove it up front so Docker can be used. Run on all nodes:

yum remove podman
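
To confirm it is really gone before installing Docker, a quick check (sketch; both commands should produce no output):

rpm -qa | grep -i podman
command -v podman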

3 Copy the packages to the offline environment

Copy the downloaded KubeKey, the artifact, the scripts, and the exported images to the installation node in the offline environment via a USB drive or similar medium. A possible file layout is sketched below.
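
For reference, a layout of the files on the installation node using the package names that appear in this article (the directory and exact names are assumptions; adjust to your own build):

ls /root/ks
# kk                        KubeKey v3.1.5 binary
# ks3.4-artifact.tar.gz     offline artifact
# ks3.4.1-images.tar.gz     exported images
# k8s-init-KylinV10.tar.gz  K8s dependency packages
# config-sample.yaml        cluster definition
# create_project_harbor.sh  Harbor project script
# load-push.sh              image push script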

4 Install the K8s dependency packages

On all K8s nodes, upload k8s-init-KylinV10.tar.gz, extract it, and run install.sh, as sketched below.
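
A minimal sketch of that step (the extracted directory name is an assumption; use whatever the archive actually unpacks to):

tar -zxvf k8s-init-KylinV10.tar.gz
cd k8s-init-KylinV10
./install.sh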

5 Install the Harbor private registry

Harbor provides no official arm packages, so kk cannot install it automatically; it has to be installed by hand.

  • Install docker and docker-compose

Package (Baidu Cloud): [docker](https://pan.baidu.com/s/1NUYFg3ayp1JHhNdUSY25wQ?pwd=9tek "docker")

Extract it and run the bundled install.sh.

  • Install harbor

Package (Baidu Cloud): [harbor](https://pan.baidu.com/s/1fL69nDOG5j92bEk84UQk7g?pwd=uian "harbor")

Extract it and run the bundled install.sh.

Enter the host IP when prompted and wait for the installation to finish.
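
Once it finishes you can check that the Harbor containers are up (a sketch, assuming the /opt/harbor install path used by the systemd unit in section 10):

docker-compose -f /opt/harbor/docker-compose.yml ps
curl -k https://dockerhub.kubekey.local/api/v2.0/health   # all components should report "healthy"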

  • Create the projects in Harbor

vim create_project_harbor.sh 
#!/usr/bin/env bash
   
# Copyright 2018 The KubeSphere Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
   
url="https://dockerhub.kubekey.local"  # set url to https://dockerhub.kubekey.local
user="admin"
passwd="Harbor12345"
   
harbor_projects=(
    kubesphereio
    kubesphere
)
   
for project in "${harbor_projects[@]}"; do
    echo "creating $project"
    curl -u "${user}:${passwd}" -X POST -H "Content-Type: application/json" "${url}/api/v2.0/projects" -d "{ \"project_name\": \"${project}\", \"public\": true}" -k # append -k to skip TLS verification (self-signed certificate)
done

Make the script executable (chmod +x create_project_harbor.sh), then run it: ./create_project_harbor.sh

For a multi-node cluster, every machine except the one running Harbor needs /etc/docker/daemon.json configured.

Add the following entry:

"insecure-registries": ["dockerhub.kubekey.local"]

Once the file is updated, run:

systemctl daemon-reload
systemctl restart docker.service

6 Edit the config-sample.yaml configuration file

Update the node and Harbor details accordingly.

  • A registry deployment node must be specified (KubeKey uses it when deploying a self-built registry).

  • If type: harbor is not set under registry, KubeKey installs a plain docker registry by default; official Harbor does not support arm. If you want Harbor, install it yourself (as in section 5), or deploy KubeSphere first, uninstall the docker registry, and then install Harbor.

Kubernetes only

apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: node1, address: 192.168.200.7, internalAddress: "192.168.200.7", user: root, password: "123456", arch: arm64}
  roleGroups:
    etcd:
    - node1 # All the nodes in your cluster that serve as the etcd nodes.
    master:
    - node1
      #  - node[2:10] # From node2 to node10. All the nodes in your cluster that serve as the master nodes.
    worker:
    - node1
    registry:
    - node1
  controlPlaneEndpoint:
    # Internal loadbalancer for apiservers. Support: haproxy, kube-vip [Default: ""]
    internalLoadbalancer: haproxy
    domain: lb.kubesphere.local
    address: ""      
    port: 6443
  system:
    ntpServers:
      - node1 # All nodes sync time from node1.
    timezone: "Asia/Shanghai"

  kubernetes:
    version: v1.25.16
    containerManager: docker
    clusterName: cluster.local
    # Whether to install a script which can automatically renew the Kubernetes control plane certificates. [Default: false]
    autoRenewCerts: true
    # maxPods is the number of Pods that can run on this Kubelet. [Default: 110]
    maxPods: 210
  etcd:
    type: kubekey  
    ## caFile, certFile and keyFile need not be set, if TLS authentication is not enabled for the existing etcd.
    # external:
    #   endpoints:
    #     - https://192.168.6.6:2379
    #   caFile: /pki/etcd/ca.crt
    #   certFile: /pki/etcd/etcd.crt
    #   keyFile: /pki/etcd/etcd.key
    dataDir: "/var/lib/etcd"
    heartbeatInterval: 250
    electionTimeout: 5000
    snapshotCount: 10000
    autoCompactionRetention: 8
    metrics: basic
    quotaBackendBytes: 2147483648 
    maxRequestBytes: 1572864
    maxSnapshots: 5
    maxWals: 5
    logLevel: info
  network:
    plugin: calico
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18
    multusCNI:
      enabled: false
  storage:
    openebs:
      basePath: /var/openebs/local # base path of the local PV provisioner
  registry:
    type: harbor
    registryMirrors: []
    insecureRegistries: []
    privateRegistry: "dockerhub.kubekey.local"
    namespaceOverride: "kubesphereio"
    auths: # if docker add by `docker login`, if containerd append to `/etc/containerd/config.toml`
      "dockerhub.kubekey.local":
        username: "admin"
        password: Harbor12345
        skipTLSVerify: true # Allow contacting registries over HTTPS with failed TLS verification.
        plainHTTP: false # Allow contacting registries over HTTP.
        certsPath: "/etc/docker/certs.d/dockerhub.kubekey.local" # Use certificates at path (*.crt, *.cert, *.key) to connect to the registry.
  addons: [] # You can install cloud-native addons (Chart or YAML) by using this field.

With KubeSphere

apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: node1, address: 192.168.200.7, internalAddress: "192.168.200.7", user: root, password: "123456", arch: arm64}
  roleGroups:
    etcd:
    - node1 # All the nodes in your cluster that serve as the etcd nodes.
    master:
    - node1
      #  - node[2:10] # From node2 to node10. All the nodes in your cluster that serve as the master nodes.
    worker:
    - node1
    registry:
    - node1
  controlPlaneEndpoint:
    # Internal loadbalancer for apiservers. Support: haproxy, kube-vip [Default: ""]
    internalLoadbalancer: haproxy
    domain: lb.kubesphere.local
    address: ""      
    port: 6443
  system:
    ntpServers:
      - node1 # All nodes sync time from node1.
    timezone: "Asia/Shanghai"

  kubernetes:
    version: v1.25.16
    containerManager: docker
    clusterName: cluster.local
    # Whether to install a script which can automatically renew the Kubernetes control plane certificates. [Default: false]
    autoRenewCerts: true
    # maxPods is the number of Pods that can run on this Kubelet. [Default: 110]
    maxPods: 210
  etcd:
    type: kubekey  
    ## caFile, certFile and keyFile need not be set, if TLS authentication is not enabled for the existing etcd.
    # external:
    #   endpoints:
    #     - https://192.168.6.6:2379
    #   caFile: /pki/etcd/ca.crt
    #   certFile: /pki/etcd/etcd.crt
    #   keyFile: /pki/etcd/etcd.key
    dataDir: "/var/lib/etcd"
    heartbeatInterval: 250
    electionTimeout: 5000
    snapshotCount: 10000
    autoCompactionRetention: 8
    metrics: basic
    quotaBackendBytes: 2147483648 
    maxRequestBytes: 1572864
    maxSnapshots: 5
    maxWals: 5
    logLevel: info
  network:
    plugin: calico
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18
    multusCNI:
      enabled: false
  storage:
    openebs:
      basePath: /var/openebs/local # base path of the local PV provisioner
  registry:
    type: harbor
    registryMirrors: []
    insecureRegistries: []
    privateRegistry: "dockerhub.kubekey.local"
    namespaceOverride: "kubesphereio"
    auths: # if docker add by `docker login`, if containerd append to `/etc/containerd/config.toml`
      "dockerhub.kubekey.local":
        username: "admin"
        password: Harbor12345
        skipTLSVerify: true # Allow contacting registries over HTTPS with failed TLS verification.
        plainHTTP: false # Allow contacting registries over HTTP.
        certsPath: "/etc/docker/certs.d/dockerhub.kubekey.local" # Use certificates at path (*.crt, *.cert, *.key) to connect to the registry.
  addons: [] # You can install cloud-native addons (Chart or YAML) by using this field.

---
apiVersion: installer.kubesphere.io/v1alpha1
kind: ClusterConfiguration
metadata:
  name: ks-installer
  namespace: kubesphere-system
  labels:
    version: v3.4.1
spec:
  persistence:
    storageClass: ""
  authentication:
    jwtSecret: ""
  zone: ""
  local_registry: ""
  namespace_override: ""
  # dev_tag: ""
  etcd:
    monitoring: true
    endpointIps: localhost
    port: 2379
    tlsEnable: true
  common:
    core:
      console:
        enableMultiLogin: true
        port: 30880
        type: NodePort
    # apiserver:
    #  resources: {}
    # controllerManager:
    #  resources: {}
    redis:
      enabled: false
      volumeSize: 2Gi
    openldap:
      enabled: false
      volumeSize: 2Gi
    minio:
      volumeSize: 20Gi
    monitoring:
      # type: external
      endpoint: http://prometheus-operated.kubesphere-monitoring-system.svc:9090
      GPUMonitoring:
        enabled: false
    gpu:
      kinds:
      - resourceName: "nvidia.com/gpu"
        resourceType: "GPU"
        default: true
    es:
      # master:
      #   volumeSize: 4Gi
      #   replicas: 1
      #   resources: {}
      # data:
      #   volumeSize: 20Gi
      #   replicas: 1
      #   resources: {}
      logMaxAge: 7
      elkPrefix: logstash
      basicAuth:
        enabled: false
        username: ""
        password: ""
      externalElasticsearchHost: ""
      externalElasticsearchPort: ""
    opensearch:
      # master:
      #   volumeSize: 4Gi
      #   replicas: 1
      #   resources: {}
      # data:
      #   volumeSize: 20Gi
      #   replicas: 1
      #   resources: {}
      enabled: true
      logMaxAge: 7
      opensearchPrefix: whizard
      basicAuth:
        enabled: true
        username: "admin"
        password: "admin"
      externalOpensearchHost: ""
      externalOpensearchPort: ""
      dashboard:
        enabled: false
  alerting:
    enabled: true
    # thanosruler:
    #   replicas: 1
    #   resources: {}
  auditing:
    enabled: false
    # operator:
    #   resources: {}
    # webhook:
    #   resources: {}
  devops:
    enabled: false
    # resources: {}
    jenkinsMemoryLim: 8Gi
    jenkinsMemoryReq: 4Gi
    jenkinsVolumeSize: 8Gi
  events:
    enabled: false
    # operator:
    #   resources: {}
    # exporter:
    #   resources: {}
    # ruler:
    #   enabled: true
    #   replicas: 2
    #   resources: {}
  logging:
    enabled: false
    logsidecar:
      enabled: true
      replicas: 2
      # resources: {}
  metrics_server:
    enabled: false
  monitoring:
    storageClass: ""
    node_exporter:
      port: 9100
      # resources: {}
    # kube_rbac_proxy:
    #   resources: {}
    # kube_state_metrics:
    #   resources: {}
    # prometheus:
    #   replicas: 1
    #   volumeSize: 20Gi
    #   resources: {}
    #   operator:
    #     resources: {}
    # alertmanager:
    #   replicas: 1
    #   resources: {}
    # notification_manager:
    #   resources: {}
    #   operator:
    #     resources: {}
    #   proxy:
    #     resources: {}
    gpu:
      nvidia_dcgm_exporter:
        enabled: false
        # resources: {}
  multicluster:
    clusterRole: none
  network:
    networkpolicy:
      enabled: false
    ippool:
      type: none
    topology:
      type: none
  openpitrix:
    store:
      enabled: false
  servicemesh:
    enabled: false
    istio:
      components:
        ingressGateways:
        - name: istio-ingressgateway
          enabled: false
        cni:
          enabled: false
  edgeruntime:
    enabled: false
    kubeedge:
      enabled: false
      cloudCore:
        cloudHub:
          advertiseAddress:
            - ""
        service:
          cloudhubNodePort: "30000"
          cloudhubQuicNodePort: "30001"
          cloudhubHttpsNodePort: "30002"
          cloudstreamNodePort: "30003"
          tunnelNodePort: "30004"
        # resources: {}
        # hostNetWork: false
      iptables-manager:
        enabled: true
        mode: "external"
        # resources: {}
      # edgeService:
      #   resources: {}
  terminal:
    timeout: 600

7 Push the images

Extract the image archive ks3.4.1-images.tar.gz, then run ./load-push.sh to push the images to the private registry.

  • The push script

vim ./load-push.sh
#!/bin/bash
#
# Collect all .tar / .tar.gz image archives under the current directory
FILES=$(find . -type f \( -iname "*.tar" -o -iname "*.tar.gz" \) -printf '%P\n')

Harbor="dockerhub.kubekey.local"
ProjectName="kubesphereio"

docker login -u admin -p Harbor12345 ${Harbor}
echo "--------[Login Harbor succeed]--------"

# Load each archive, retag the image for the private registry, and push it
for file in ${FILES}
do
    echo "--------[Loading Docker image from $file]--------"
    docker load -i "$file" > loadimages
    # Full reference of the loaded image, e.g. registry/namespace/name:tag
    IMAGE=$(grep 'Loaded image:' loadimages | awk '{print $3}' | head -1)
    # Image name with the original registry and namespace stripped
    # (assumes a three-part reference such as docker.io/kubesphere/foo:tag)
    IMAGE2=$(grep 'Loaded image:' loadimages | awk '{print $3}' | head -1 | awk -F / '{print $3}')
    echo "--------[$IMAGE]--------"
    docker tag $IMAGE $Harbor/$ProjectName/$IMAGE2
    docker push $Harbor/$ProjectName/$IMAGE2
done
echo "--------[All Docker images pushed successfully]--------"
./load-push.sh
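
To confirm the push worked, you can list the repositories in the project through the Harbor v2 API (a sketch using the credentials configured above):

curl -sk -u admin:Harbor12345 \
  "https://dockerhub.kubekey.local/api/v2.0/projects/kubesphereio/repositories?page_size=100" | grep '"name"'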

8 Install K8s

The command has to be run twice. The first run unpacks the artifact; because the artifact is not a complete one, this run ends with an error. For the second run, drop the -a ks3.4-artifact.tar.gz argument.

  • First run

./kk create cluster -f config-sample.yaml -a ks3.4-artifact.tar.gz
  • Second run

./kk create cluster -f config-sample.yaml

Wait roughly ten to twenty minutes for the success message:

clusterconfiguration.installer.kubesphere.io/ks-installer created
13:43:32 CST success: [node1]
#####################################################
###              Welcome to KubeSphere!           ###
#####################################################

Console: http://172.27.36.4:31688
Account: admin
Password: P@88w0rd
NOTES:
  1. After you log into the console, please check the
     monitoring status of service components in
     "Cluster Management". If any service is not
     ready, please wait patiently until all components 
     are up and running.
  2. Please change the default password after login.

#####################################################
https://kubesphere.io             2025-03-13 13:50:09
#####################################################
13:50:11 CST success: [node1]
13:50:11 CST Pipeline[CreateClusterPipeline] execute successfully
Installation is complete.

Please check the result using the command:

        kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l 'app in (ks-install, ks-installer)' -o jsonpath='{.items[0].metadata.name}') -f

While it runs, you can follow the installation progress with kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l 'app in (ks-install, ks-installer)' -o jsonpath='{.items[0].metadata.name}') -f

9 Verification

Confirm that the base components are running normally; a few checks are sketched below.
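
A few commands that may help (a sketch; pod names will differ per environment):

kubectl get nodes -o wide              # every node Ready, architecture arm64
kubectl get pods -A | grep -v Running  # ideally only Completed pods remain
kubectl get pods -n kubesphere-system  # ks-installer, ks-apiserver, etc. Running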

10 Caveats

  • The network interface must have a gateway configured.

  • /etc/resolv.conf must contain a nameserver entry, for example:

nameserver 114.114.114.114
  • Every node needs /etc/docker/daemon.json configured:

{
  "log-opts": {
    "max-size": "5m",
    "max-file":"3"
  },
  "exec-opts": ["native.cgroupdriver=systemd"],
  "insecure-registries": ["dockerhub.kubekey.local"]
}

Then run:

systemctl daemon-reload
systemctl restart docker.service
  • Add a harbor service on the Harbor host

[Unit]
Description=Harbor
After=docker.service systemd-networkd.service systemd-resolved.service
Requires=docker.service

[Service]
Type=simple
ExecStart=/usr/local/bin/docker-compose -f /opt/harbor/docker-compose.yml up
ExecStop=/usr/local/bin/docker-compose -f /opt/harbor/docker-compose.yml down
Restart=on-failure
[Install]
WantedBy=multi-user.target 

Enable it to start at boot:

systemctl enable harbor
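
Then start it once and make sure it stays up (sketch):

systemctl start harbor
systemctl status harbor --no-pager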

11 Adding nodes

Install the K8s dependencies

Upload k8s-init-openEuler.tar.gz, extract it, and run install.sh.

Note:

  • If the machine being added already has a Docker environment, try to keep it at version 20.10.18 or later

    • If an older version is present, uninstall it if possible

    • If it cannot easily be uninstalled, add the following to /etc/docker/daemon.json (vim /etc/docker/daemon.json), then run systemctl daemon-reload && systemctl restart docker

{
  "experimental": true
}

Edit the cluster configuration file config-ks.yaml, adding the IPs and details of the machines to be added.

apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: master, address: 192.168.200.7, internalAddress: "192.168.200.7", user: root, password: "123456", arch: arm64}
  - {name: node1, address: 192.168.200.8, internalAddress: "192.168.200.8", user: root, password: "123456", arch: arm64}
  - {name: node2, address: 192.168.200.9, internalAddress: "192.168.200.9", user: root, password: "123456", arch: arm64}
  roleGroups:
    etcd:
    - master # All the nodes in your cluster that serve as the etcd nodes.
    master:
    - master
    worker:
    - node1
    - node2
    registry:
    - master
  controlPlaneEndpoint:
    # Internal loadbalancer for apiservers. Support: haproxy, kube-vip [Default: ""]
    internalLoadbalancer: haproxy
    domain: lb.kubesphere.local
    address: ""

    port: 6443
    
   ---

Run the add-nodes command:

./kk add nodes -f config-ks.yaml

Based on the configuration file, this command adds every listed node that is not yet part of the cluster.

  • If the run fails

If it complains about missing system dependencies, run the system initialization step using the offline artifact:

./kk create cluster -f config-ks.yaml -a ks3.4-artifact.tar.gz --with-packages

This step ends with the error failed: open /root/kubekey/images/index.json: no such file or directory. Ignore it; the only goal here is the system initialization.

Then run again:

./kk add nodes -f config-ks.yaml
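
After the command completes, confirm the new nodes have joined (sketch):

kubectl get nodes -o wide   # node1 and node2 should appear and become Ready within a few minutes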
