Kubernetes (k8s) Environment Setup

jupiter
2025-03-20


1. Machine Preparation

The lab environment uses three CentOS virtual machines:

Role      IP              Hostname   OS
master    192.168.1.16    node1      CentOS
worker1   192.168.1.17    node2      CentOS
worker2   172.24.87.84    node3      CentOS

2. Preparation

Run the following steps on all three machines.

2.1 Disable the firewall and SELinux

## Disable SELinux (the kernel security mechanism)
sestatus
setenforce 0 
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config

## Stop the firewall and disable it at boot
systemctl stop firewalld 
systemctl disable firewalld 
systemctl status firewalld

2.2 Disable swap

# Disable now (lasts until reboot)
swapoff -a

# Disable permanently (comment out the swap entries in /etc/fstab)
sed -i '/swap/s/^/#/' /etc/fstab
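
The sed expression comments out every /etc/fstab line that mentions swap. Its effect can be sketched on a sample entry (the device path below is hypothetical, only for demonstration):

```shell
# Hypothetical fstab entry, used only to demonstrate the sed pattern
line='/dev/mapper/centos-swap swap swap defaults 0 0'
# '/swap/' selects matching lines; 's/^/#/' prefixes them with '#'
echo "$line" | sed '/swap/s/^/#/'
# prints: #/dev/mapper/centos-swap swap swap defaults 0 0
```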

2.3 Kernel tuning

# Configure the kernel parameters that Kubernetes networking depends on
cat >> /etc/sysctl.conf << EOF
# Pass bridged IPv6 traffic to ip6tables
net.bridge.bridge-nf-call-ip6tables = 1
# Pass bridged IPv4 traffic to iptables
net.bridge.bridge-nf-call-iptables = 1
# Enable IP forwarding
net.ipv4.ip_forward = 1
# Avoid swapping (swap is already disabled above)
vm.swappiness = 0
EOF

# Load the overlay kernel module
modprobe overlay

# Load the br_netfilter kernel module
modprobe br_netfilter

# Apply the settings from /etc/sysctl.conf
sysctl -p
  • Modules loaded with modprobe are lost on reboot, so persist them with the following:
cat << EOF >>/etc/sysconfig/modules/iptables.modules
modprobe -- overlay
modprobe -- br_netfilter
EOF
chmod 755 /etc/sysconfig/modules/iptables.modules   # make executable
sh /etc/sysconfig/modules/iptables.modules          # load now
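
On systemd-based systems, an alternative sketch (assuming systemd-modules-load is in use, as on CentOS 7+) is to drop a file into /etc/modules-load.d/, which systemd reads at every boot:

```shell
# Persist the two modules via systemd-modules-load (read at every boot)
cat << EOF > /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
```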

2.4 Time synchronization on all nodes

## Install the time synchronization tool
yum -y install ntpdate

## Sync time from Aliyun's NTP server
ntpdate ntp.aliyun.com
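
ntpdate only performs a one-shot sync. For continuous synchronization, chrony is a common alternative (a sketch; the package and service names assume CentOS 7):

```shell
# Install and enable chrony for ongoing time sync
yum -y install chrony
systemctl enable --now chronyd
# Check the configured time sources
chronyc sources
```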

3. Containerd Deployment

3.1 Install containerd (manually)

# Install containerd
wget https://github.com/containerd/containerd/releases/download/v1.7.27/containerd-1.7.27-linux-amd64.tar.gz
tar xzvf containerd-1.7.27-linux-amd64.tar.gz
cp -f bin/* /usr/local/bin/

# Install runc
wget https://github.com/opencontainers/runc/releases/download/v1.2.6/runc.amd64
chmod +x runc.amd64
mv runc.amd64 /usr/local/bin/runc

# Install the CNI plugins
wget https://github.com/containernetworking/plugins/releases/download/v1.6.2/cni-plugins-linux-amd64-v1.6.2.tgz
rm -fr /opt/cni/bin
mkdir -p /opt/cni/bin
tar Cxzvf /opt/cni/bin cni-plugins-linux-amd64-v1.6.2.tgz

# Install nerdctl
wget https://github.com/containerd/nerdctl/releases/download/v2.0.3/nerdctl-2.0.3-linux-amd64.tar.gz
tar Cxzvf /usr/local/bin nerdctl-2.0.3-linux-amd64.tar.gz

# Install crictl
wget https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.32.0/crictl-v1.32.0-linux-amd64.tar.gz
tar Cxzvf /usr/local/bin crictl-v1.32.0-linux-amd64.tar.gz

# Write the crictl configuration file
cat << EOF >> /etc/crictl.yaml
runtime-endpoint: unix:///var/run/containerd/containerd.sock
image-endpoint: unix:///var/run/containerd/containerd.sock
timeout: 10 
debug: false
EOF
# Generate containerd's default configuration file
mkdir -p /etc/containerd/
containerd config default > /etc/containerd/config.toml

# Edit /etc/containerd/config.toml:
# point the pause (sandbox) image at a reachable mirror; this is critical, otherwise later steps will fail
sed -i 's#registry.k8s.io/pause:3.8#registry.aliyuncs.com/google_containers/pause:3.8#g' /etc/containerd/config.toml
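
One more change is commonly needed in /etc/containerd/config.toml: kubeadm 1.22+ defaults the kubelet to the systemd cgroup driver, so containerd should match. A sketch (assuming the default config generated above, which sets SystemdCgroup = false; the file-existence guard is only so the snippet is safe to run anywhere):

```shell
# Flip SystemdCgroup to true so containerd's cgroup driver matches the kubelet's
CONFIG="${CONTAINERD_CONFIG:-/etc/containerd/config.toml}"
if [ -f "$CONFIG" ]; then
  sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' "$CONFIG"
fi
```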
# Put containerd under systemd management
cat <<EOF > /lib/systemd/system/containerd.service
# Copyright The containerd Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target local-fs.target

[Service]
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/local/bin/containerd

Type=notify
Delegate=yes
KillMode=process
Restart=always
RestartSec=5
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNPROC=infinity
LimitCORE=infinity
LimitNOFILE=infinity
# Comment TasksMax if your systemd version does not supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
OOMScoreAdjust=-999

[Install]
WantedBy=multi-user.target
EOF
# Start the containerd service
systemctl daemon-reload
systemctl restart containerd
systemctl enable containerd

3.2 One-click install script

#!/bin/bash
set -e

ContainerdVersion=$1
ContainerdVersion=${ContainerdVersion:-1.6.6}

RuncVersion=$2
RuncVersion=${RuncVersion:-1.1.3}

CniVersion=$3
CniVersion=${CniVersion:-1.1.1}

NerdctlVersion=$4
NerdctlVersion=${NerdctlVersion:-0.21.0}

CrictlVersion=$5
CrictlVersion=${CrictlVersion:-1.24.2}

echo "--------------install containerd--------------"
wget https://github.com/containerd/containerd/releases/download/v${ContainerdVersion}/containerd-${ContainerdVersion}-linux-amd64.tar.gz
tar Cxzvf /usr/local containerd-${ContainerdVersion}-linux-amd64.tar.gz

echo "--------------install containerd service--------------"
wget https://raw.githubusercontent.com/containerd/containerd/681aaf68b7dcbe08a51c3372cbb8f813fb4466e0/containerd.service
mv containerd.service /lib/systemd/system/

mkdir -p /etc/containerd/
containerd config default > /etc/containerd/config.toml

echo "--------------install runc--------------"
wget https://github.com/opencontainers/runc/releases/download/v${RuncVersion}/runc.amd64
chmod +x runc.amd64
mv runc.amd64 /usr/local/bin/runc

echo "--------------install cni plugins--------------"
wget https://github.com/containernetworking/plugins/releases/download/v${CniVersion}/cni-plugins-linux-amd64-v${CniVersion}.tgz
rm -fr /opt/cni/bin
mkdir -p /opt/cni/bin
tar Cxzvf /opt/cni/bin cni-plugins-linux-amd64-v${CniVersion}.tgz

echo "--------------install nerdctl--------------"
wget https://github.com/containerd/nerdctl/releases/download/v${NerdctlVersion}/nerdctl-${NerdctlVersion}-linux-amd64.tar.gz
tar Cxzvf /usr/local/bin nerdctl-${NerdctlVersion}-linux-amd64.tar.gz

echo "--------------install crictl--------------"
wget https://github.com/kubernetes-sigs/cri-tools/releases/download/v${CrictlVersion}/crictl-v${CrictlVersion}-linux-amd64.tar.gz
tar Cxzvf /usr/local/bin crictl-v${CrictlVersion}-linux-amd64.tar.gz

# Reload systemd and start the containerd service
systemctl daemon-reload
systemctl restart containerd
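
For reference, the script's two-step argument defaulting (assign `$1`, then fall back with `${VAR:-default}`) can be written in one step and exercised standalone:

```shell
# Equivalent of the script's defaulting: explicit args win, otherwise defaults apply
ContainerdVersion=${1:-1.6.6}
RuncVersion=${2:-1.1.3}
echo "containerd=${ContainerdVersion} runc=${RuncVersion}"
# with no arguments prints: containerd=1.6.6 runc=1.1.3
```

Running it as, say, `./install.sh 1.7.27` would pin containerd to 1.7.27 while runc falls back to its default.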

4. Deploy the Kubernetes Cluster

4.1 Configure the Kubernetes yum repository (all three machines)

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

4.2 Install the Kubernetes base services and tools (all three machines)

  • kubeadm: the command that bootstraps the cluster.
  • kubelet: runs on every node in the cluster to start Pods and containers.
  • kubectl: the command-line tool for talking to the cluster.
## Install the required Kubernetes packages
yum install -y kubelet-1.28.2 kubeadm-1.28.2 kubectl-1.28.2
systemctl start kubelet 
systemctl enable kubelet
systemctl status kubelet 

4.3 Generate the init configuration file (master node)

  • kubeadm provides many configuration options. In a Kubernetes cluster, kubeadm's configuration is stored in a ConfigMap; the options can also be written to a configuration file, which makes complex settings easier to manage. Configuration files are produced with the kubeadm config command:
  • kubeadm config view: view the configuration values of the current cluster
  • kubeadm config print join-defaults: print the default kubeadm join configuration
  • kubeadm config images list: list the required images
  • kubeadm config images pull: pull the images to the local machine
  • kubeadm config upload from-flags: generate the ConfigMap from command-line flags
# Generate the default init configuration and write it to the current directory
kubeadm config print init-defaults > init-config.yaml
# The command above may print a warning like the one below; ignore it and continue: W0615 08:50:40.154637   10202 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]

# Edit the configuration file; the parts that need changing are marked below
$ vi init-config.yaml
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.1.16   # change this to your master node's IP address
  bindPort: 6443    # the default port is fine
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  name: node
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers  # change the default to a domestic mirror; the default registry is unreachable from mainland China
kind: ClusterConfiguration
kubernetesVersion: 1.28.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12 # the default subnet is fine; this is the Service network, internal to the cluster
scheduler: {}

4.4 Pull the required images (master node)

# List the images needed for initialization, per init-config.yaml
kubeadm config images list --config=init-config.yaml

## Pull the images
kubeadm config images pull --config=init-config.yaml

## Check the pulled images
crictl images

4.5 Initialize the master and configure networking (master node)

(The kubeadm init flags below are listed for reference only.)

  • --apiserver-advertise-address (string): the IP address the API server advertises that it is listening on
  • --apiserver-bind-port (int32): the port the API server binds to; default 6443
  • --apiserver-cert-extra-sans (stringSlice): optional extra Subject Alternative Names for the API server certificate; may be IP addresses or DNS names
  • --certificate-key (string): the key used to encrypt the control-plane certificates in the kubeadm-certs Secret
  • --control-plane-endpoint (string): a stable IP address or DNS name for the control plane
  • --image-repository (string): the container registry to pull control-plane images from; default k8s.gcr.io
  • --kubernetes-version (string): a specific Kubernetes version for the control plane; default stable-1
  • --cri-socket (string): the path of the CRI socket to connect to
  • --node-name (string): the name of the node
  • --pod-network-cidr (string): the IP address range for the Pod network; when set, the control plane automatically allocates CIDRs to every node
  • --service-cidr (string): an alternative IP address range for Service virtual IPs; default 10.96.0.0/12
  • --service-dns-domain (string): an alternative domain for Services; default cluster.local
  • --token (string): the token used to establish bidirectional trust between control-plane and worker nodes
  • --token-ttl (duration): how long before the token is automatically deleted; 0 means it never expires
  • --upload-certs: upload the control-plane certificates to the kubeadm-certs Secret

(Note: a kubeadm-initialized cluster does not include a network plugin, so immediately after init the cluster has no networking: on the k8s-master node everything shows as "NotReady" and the CoreDNS Pods cannot serve requests.)

4.5.0 If initialization fails, run:

systemctl stop kubelet
kubeadm reset
rm -rf $HOME/.kube
rm -rf /etc/kubernetes/
rm -rf /var/lib/etcd/

4.5.1 Initialize Kubernetes on the master with kubeadm (master node)

  • Installing Kubernetes with kubeadm sets up all of the components, so there is no need to install etcd by hand.
## Initialize Kubernetes
## 1) change kubernetes-version to your own version number;
## 2) change apiserver-advertise-address to the master node's IP
kubeadm init --kubernetes-version=1.28.0 \
--apiserver-advertise-address=192.168.1.16 \
--image-repository registry.cn-hangzhou.aliyuncs.com/google_containers \
--service-cidr=10.96.0.0/12 \
--pod-network-cidr=10.244.0.0/16 \
--cri-socket=unix:///var/run/containerd/containerd.sock

4.5.2 Log output of a successful initialization (shown on the master)

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.1.16:6443 --token 9uln6k.edk5srichppjq6k6 \
        --discovery-token-ca-cert-hash sha256:1a4c79509438b84756a5e4e66ee6914835f1235d2a6b4752b2f625366142c942

4.5.3 Copy the Kubernetes admin credentials to the user's home directory (master node)

mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config

4.5.4 Start kubelet and enable it at boot (master node)

systemctl enable kubelet 
systemctl start kubelet

4.6 Join the worker nodes to the cluster (both worker nodes)

  • Paste the kubeadm join command echoed at the end of the master's initialization onto each worker node; no further configuration is needed.
kubeadm join 192.168.1.16:6443 --token 9uln6k.edk5srichppjq6k6 \
        --discovery-token-ca-cert-hash sha256:1a4c79509438b84756a5e4e66ee6914835f1235d2a6b4752b2f625366142c942
        
# If the join command has been lost, regenerate it on the master
kubeadm token create --print-join-command
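
kubeadm token create --print-join-command prints both a fresh token and the CA cert hash, but the hash can also be recomputed by hand. The pipeline below follows the kubeadm documentation; it assumes an RSA CA certificate at /etc/kubernetes/pki/ca.crt on the master:

```shell
# Compute the sha256 hash of the cluster CA's public key,
# as used by --discovery-token-ca-cert-hash
ca_hash() {
  openssl x509 -pubkey -noout -in "$1" \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex \
    | sed 's/^.* //'
}
# Example: ca_hash /etc/kubernetes/pki/ca.crt
```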

4.7 Check the status of each node from the master (master node)

  • As mentioned earlier, k8s-master was initialized without any network configuration, so it cannot communicate with the worker nodes and all nodes show as "NotReady". The workers joined with kubeadm join are nonetheless already visible on k8s-master.
  • Likewise, the coredns Pods staying in Pending is normal at this stage.
[root@node1 data]#  kubectl get nodes
NAME    STATUS     ROLES           AGE     VERSION
node1   NotReady   control-plane   6m2s    v1.28.2
node2   NotReady   <none>          2m20s   v1.28.2
node3   NotReady   <none>          116s    v1.28.2
[root@node1 data]# kubectl get pods --all-namespaces -o wide
NAMESPACE     NAME                            READY   STATUS              RESTARTS   AGE     IP              NODE     NOMINATED NODE   READINESS GATES
kube-system   coredns-6554b8b87f-chcdt        0/1     Pending             0          6m7s    <none>          <none>   <none>           <none>
kube-system   coredns-6554b8b87f-l9kpk        0/1     Pending             0          6m7s    <none>          <none>   <none>           <none>
kube-system   etcd-node1                      1/1     Running             0          6m21s   192.168.1.16   node1    <none>           <none>
kube-system   kube-apiserver-node1            1/1     Running             0          6m21s   192.168.1.16   node1    <none>           <none>
kube-system   kube-controller-manager-node1   1/1     Running             0          6m21s   192.168.1.16   node1    <none>           <none>
kube-system   kube-proxy-8pnkb                1/1     Running             0          6m7s    192.168.1.16   node1    <none>           <none>
kube-system   kube-proxy-hcnq2                0/1     ContainerCreating   0          2m15s   172.24.87.84    node3    <none>           <none>
kube-system   kube-proxy-r5pvx                1/1     Running             0          2m39s   192.168.1.17   node2    <none>           <none>
kube-system   kube-scheduler-node1            1/1     Running             0          6m21s   192.168.1.16   node1    <none>           <none>

5. Deploy the flannel Network Plugin

5.1 Download the flannel deployment YAML

wget https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
  • A backup of the file:
apiVersion: v1
kind: Namespace
metadata:
  labels:
    k8s-app: flannel
    pod-security.kubernetes.io/enforce: privileged
  name: kube-flannel
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: flannel
  name: flannel
  namespace: kube-flannel
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: flannel
  name: flannel
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: flannel
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-flannel
---
apiVersion: v1
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "EnableNFTables": false,
      "Backend": {
        "Type": "vxlan"
      }
    }
kind: ConfigMap
metadata:
  labels:
    app: flannel
    k8s-app: flannel
    tier: node
  name: kube-flannel-cfg
  namespace: kube-flannel
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    app: flannel
    k8s-app: flannel
    tier: node
  name: kube-flannel-ds
  namespace: kube-flannel
spec:
  selector:
    matchLabels:
      app: flannel
      k8s-app: flannel
  template:
    metadata:
      labels:
        app: flannel
        k8s-app: flannel
        tier: node
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      containers:
      - args:
        - --ip-masq
        - --kube-subnet-mgr
        command:
        - /opt/bin/flanneld
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: EVENT_QUEUE_DEPTH
          value: "5000"
        image: registry.cn-chengdu.aliyuncs.com/xcce/flannel:v0.26.0
        name: kube-flannel
        resources:
          requests:
            cpu: 100m
            memory: 50Mi
        securityContext:
          capabilities:
            add:
            - NET_ADMIN
            - NET_RAW
          privileged: false
        volumeMounts:
        - mountPath: /run/flannel
          name: run
        - mountPath: /etc/kube-flannel/
          name: flannel-cfg
        - mountPath: /run/xtables.lock
          name: xtables-lock
      hostNetwork: true
      initContainers:
      - args:
        - -f
        - /flannel
        - /opt/cni/bin/flannel
        command:
        - cp
        image: registry.cn-chengdu.aliyuncs.com/xcce/flannel-cni-plugin:v1.5.1-flannel2
        name: install-cni-plugin
        volumeMounts:
        - mountPath: /opt/cni/bin
          name: cni-plugin
      - args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        command:
        - cp
        image: registry.cn-chengdu.aliyuncs.com/xcce/flannel:v0.26.0  # left blank in the original; upstream flannel manifests use the flannel image here
        name: install-cni
        volumeMounts:
        - mountPath: /etc/cni/net.d
          name: cni
        - mountPath: /etc/kube-flannel/
          name: flannel-cfg
      priorityClassName: system-node-critical
      serviceAccountName: flannel
      tolerations:
      - effect: NoSchedule
        operator: Exists
      volumes:
      - hostPath:
          path: /run/flannel
        name: run
      - hostPath:
          path: /opt/cni/bin
        name: cni-plugin
      - hostPath:
          path: /etc/cni/net.d
        name: cni
      - configMap:
          name: kube-flannel-cfg
        name: flannel-cfg
      - hostPath:
          path: /run/xtables.lock
          type: FileOrCreate
        name: xtables-lock

5.2 Pre-pull the images

# Pull into containerd's k8s.io namespace: the kubelet (CRI) only sees images
# in that namespace, not in ctr's default namespace
ctr -n k8s.io images pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/flannel-io/flannel:v0.26.5
ctr -n k8s.io images tag  swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/flannel-io/flannel:v0.26.5  ghcr.io/flannel-io/flannel:v0.26.5

ctr -n k8s.io images pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1
ctr -n k8s.io images tag  swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1  ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1

5.3 Apply the network plugin

kubectl apply -f kube-flannel.yml

6. Enable kubectl on the Worker Nodes (both worker nodes)

6.1 At this point, running kubectl on a worker node fails: (both worker nodes)

-   E0709 15:29:19.693750 97386 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
-   The connection to the server localhost:8080 was refused - did you specify the right host or port?

6.2 Cause and fix: (both worker nodes)

  • The cause: kubectl needs the kubernetes-admin credentials to run.
  • Copy /etc/kubernetes/admin.conf from the master node to the same path on each worker, then configure the environment variable:
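
The copy step can be sketched as follows (run on each worker node; the IP is this guide's master node, and root SSH access is assumed):

```shell
# Copy the admin kubeconfig from the master (192.168.1.16) to this worker
mkdir -p /etc/kubernetes
scp root@192.168.1.16:/etc/kubernetes/admin.conf /etc/kubernetes/admin.conf
```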
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
## Take effect immediately
source ~/.bash_profile

7. Check the Status of Nodes and Components

[root@node1 data]#  kubectl get nodes
NAME    STATUS   ROLES           AGE   VERSION
node1   Ready    control-plane   35m   v1.28.2
node2   Ready    <none>          31m   v1.28.2
node3   Ready    <none>          31m   v1.28.2
[root@node1 data]#  kubectl get pods --all-namespaces -o wide
NAMESPACE      NAME                            READY   STATUS    RESTARTS   AGE     IP             NODE    NOMINATED NODE   READINESS GATES
kube-flannel   kube-flannel-ds-5ggnx           1/1     Running   0          9m26s   192.168.1.16   node1   <none>           <none>
kube-flannel   kube-flannel-ds-6zln6           1/1     Running   0          9m26s   192.168.1.17   node2   <none>           <none>
kube-flannel   kube-flannel-ds-hqjpx           1/1     Running   0          9m26s   192.168.1.18   node3   <none>           <none>
kube-system    coredns-6554b8b87f-vvn2d        1/1     Running   0          35m     10.244.0.3     node1   <none>           <none>
kube-system    coredns-6554b8b87f-wklf8        1/1     Running   0          35m     10.244.0.2     node1   <none>           <none>
kube-system    etcd-node1                      1/1     Running   0          35m     192.168.1.16   node1   <none>           <none>
kube-system    kube-apiserver-node1            1/1     Running   0          35m     192.168.1.16   node1   <none>           <none>
kube-system    kube-controller-manager-node1   1/1     Running   5          35m     192.168.1.16   node1   <none>           <none>
kube-system    kube-proxy-b4jpx                1/1     Running   0          35m     192.168.1.16   node1   <none>           <none>
kube-system    kube-proxy-g7cw2                1/1     Running   0          31m     192.168.1.18   node3   <none>           <none>
kube-system    kube-proxy-sgmcb                1/1     Running   0          31m     192.168.1.17   node2   <none>           <none>
kube-system    kube-scheduler-node1            1/1     Running   5          35m     192.168.1.16   node1   <none>           <none>
[root@node1 data]# kubectl get pods -n kube-system
NAME                            READY   STATUS    RESTARTS   AGE
coredns-6554b8b87f-vvn2d        1/1     Running   0          37m
coredns-6554b8b87f-wklf8        1/1     Running   0          37m
etcd-node1                      1/1     Running   0          37m
kube-apiserver-node1            1/1     Running   0          37m
kube-controller-manager-node1   1/1     Running   5          37m
kube-proxy-b4jpx                1/1     Running   0          37m
kube-proxy-g7cw2                1/1     Running   0          33m
kube-proxy-sgmcb                1/1     Running   0          33m
kube-scheduler-node1            1/1     Running   5          37m
