Preface

Purpose: deploy a Kubernetes cluster from binaries, for experimentation

Notes: an experimental record; no high availability; etcd is single-node with no certificates; one master, two nodes

System: CentOS 7.2 (Alibaba Cloud ECS; no security groups or iptables rules; SELinux and swap disabled; time synchronized; 2 CPUs, 4 GB RAM)

Server plan (write these into each node's hosts file):

Hostname     IP             Components
k8s-master   172.17.89.23   apiserver, controller-manager, scheduler, etcd
k8s-node7    172.17.89.17   kubelet, kube-proxy, containerd
k8s-node8    172.17.89.7    kubelet, kube-proxy, containerd
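
For reference, a minimal sketch of the hosts entries implied by the plan above (append on every node):

cat >> /etc/hosts << EOF
172.17.89.23 k8s-master
172.17.89.17 k8s-node7
172.17.89.7 k8s-node8
EOF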

Network plan:

  • Service: 10.0.0.0/24

  • Pod: 10.244.0.0/16

Software versions:

  • Kubernetes 1.18
  • containerd 1.4
  • etcd 3.3
  • cfssl 1.5
  • flannel 0.13
  • cni-plugins 0.9

Add kernel parameters so bridged IPv4 traffic is passed to iptables chains (all nodes):

cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF

# Load br_netfilter first, otherwise the bridge-nf-call-* keys do not exist
modprobe br_netfilter
sysctl -p /etc/sysctl.d/k8s.conf

For the operations below: in code blocks, a leading $ means the command has output or spans multiple lines (no $ means a bare command), and # marks a comment. Unless noted otherwise, run everything as root.

On the Master

1. Install etcd

$ yum install etcd
$ systemctl start etcd
$ systemctl enable etcd

$ etcd --version
etcd Version: 3.3.11
Git SHA: 2cf9e51
Go Version: go1.10.3
Go OS/Arch: linux/amd64

$ etcdctl cluster-health
member 8e9e05c52164694d is healthy: got healthy result from http://localhost:2379
cluster is healthy

Yes: a single-node etcd, listening on 127.0.0.1:2379, with no certificate authentication.
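
This matches the defaults shipped by the CentOS etcd package; a quick sanity check against the package's config file (path and variable names assume the stock RPM):

$ grep -E 'LISTEN_CLIENT|ADVERTISE_CLIENT' /etc/etcd/etcd.conf
ETCD_LISTEN_CLIENT_URLS="http://localhost:2379"
ETCD_ADVERTISE_CLIENT_URLS="http://localhost:2379"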

2. Install the cfssl certificate tooling

wget -c https://static.saintic.com/download/k8s/cfssl-R1.2.tar.gz
tar zxf cfssl-R1.2.tar.gz
chmod +x cfssl cfssljson cfssl-certinfo
mv cfssl cfssljson cfssl-certinfo /usr/bin/

3. Generate the k8s certificates

# Self-signed certificate authority (CA)
$ mkdir -p ~/TLS/k8s && cd ~/TLS/k8s
$ cat > ca-config.json << EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
         "expiry": "87600h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ]
      }
    }
  }
}
EOF

$ cat > ca-csr.json << EOF
{
    "CN": "kubernetes",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF

$ cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
[INFO] generating a new CA key and certificate from CSR
[INFO] generate received request
[INFO] received CSR
[INFO] generating key: rsa-2048
[INFO] encoded CSR
[INFO] signed certificate with serial number 218586274008070489012647351630930893980576584553

$ ls *pem
ca-key.pem  ca.pem
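
To see what was just issued, cfssl-certinfo (installed above) can dump the CA certificate; expect CN=kubernetes and the names from ca-csr.json:

cfssl-certinfo -cert ca.pem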

# Use the self-signed CA to issue the kube-apiserver HTTPS certificate
# The hosts field must include every Master/LB/VIP IP, none can be missing! To ease later scaling, a few reserved IPs can be added
# Also include the first IP of the Service range, 10.0.0.1 in this example
$ cat > server-csr.json << EOF
{
    "CN": "kubernetes",
    "hosts": [
      "10.0.0.1",
      "127.0.0.1",
      "172.17.89.23",
      "172.17.89.17",
      "172.17.89.7",
      "172.17.89.10",
      "kubernetes",
      "kubernetes.default",
      "kubernetes.default.svc",
      "kubernetes.default.svc.cluster",
      "kubernetes.default.svc.cluster.local"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF

$ cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server
[INFO] generate received request
[INFO] received CSR
[INFO] generating key: rsa-2048
[INFO] encoded CSR
[INFO] signed certificate with serial number 276237244257869550803211619152022588342922126725

$ ls server*pem
server-key.pem  server.pem
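
Before moving on, it is worth confirming the SANs actually made it into the certificate; a quick check with openssl (every IP and DNS name from server-csr.json should be listed):

openssl x509 -in server.pem -noout -text | grep -A1 'Subject Alternative Name'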

4. Download and install the Kubernetes binaries

cd /usr/local/src
wget -c https://static.saintic.com/download/k8s/kubernetes-server-linux-amd64.tar.gz
tar zxf kubernetes-server-linux-amd64.tar.gz
cd kubernetes/server/bin
mkdir -p /opt/k8s/{bin,cfg,ssl,logs}
cp kube-apiserver kube-scheduler kube-controller-manager /opt/k8s/bin
cp kubectl /usr/bin/

5. Deploy the kube-apiserver component

cat > /opt/k8s/cfg/kube-apiserver.conf << 'EOF'
KUBE_APISERVER_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/opt/k8s/logs \
--etcd-servers=http://localhost:2379 \
--bind-address=172.17.89.23 \
--secure-port=6443 \
--advertise-address=172.17.89.23 \
--allow-privileged=true \
--service-cluster-ip-range=10.0.0.0/24 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--enable-bootstrap-token-auth=true \
--token-auth-file=/opt/k8s/cfg/token.csv \
--service-node-port-range=30000-32767 \
--kubelet-client-certificate=/opt/k8s/ssl/server.pem \
--kubelet-client-key=/opt/k8s/ssl/server-key.pem \
--tls-cert-file=/opt/k8s/ssl/server.pem \
--tls-private-key-file=/opt/k8s/ssl/server-key.pem \
--client-ca-file=/opt/k8s/ssl/ca.pem \
--service-account-key-file=/opt/k8s/ssl/ca-key.pem \
--audit-log-maxage=30 \
--audit-log-maxbackup=3 \
--audit-log-maxsize=100 \
--audit-log-path=/opt/k8s/logs/k8s-audit.log"
EOF

Option notes:

  • --etcd-servers: etcd address(es)

  • --bind-address: listen address

  • --advertise-address: address advertised to the cluster

  • --service-cluster-ip-range: Service virtual IP range

  • --enable-admission-plugins: admission control plugins

  • --authorization-mode: authentication/authorization; enables RBAC authorization and Node self-management

  • --enable-bootstrap-token-auth: enables the TLS bootstrap mechanism

  • --token-auth-file: bootstrap token file

Copy the certificates

cp ~/TLS/k8s/ca*pem ~/TLS/k8s/server*pem /opt/k8s/ssl/

Enable the TLS bootstrapping mechanism

# Generate a token
$ head -c 16 /dev/urandom | od -An -t x | tr -d ' '
cf2cdbdd07d06094e598d83d0b89006b

# Format: token,username,UID,user group
$ cat > /opt/k8s/cfg/token.csv << EOF
cf2cdbdd07d06094e598d83d0b89006b,kubelet-bootstrap,10001,"system:node-bootstrapper"
EOF

Manage apiserver with systemd

cat > /lib/systemd/system/kube-apiserver.service << 'EOF'
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=/opt/k8s/cfg/kube-apiserver.conf
ExecStart=/opt/k8s/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kube-apiserver
systemctl start kube-apiserver
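
Assuming the stock RBAC role that exposes /healthz to unauthenticated clients is in place (the 1.18 default), a quick liveness check:

$ curl -k https://172.17.89.23:6443/healthz
ok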

Authorize the kubelet-bootstrap user to request certificates

kubectl create clusterrolebinding kubelet-bootstrap \
  --clusterrole=system:node-bootstrapper \
  --user=kubelet-bootstrap

6. Deploy the kube-controller-manager component

cat > /opt/k8s/cfg/kube-controller-manager.conf << 'EOF'
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/opt/k8s/logs \
--leader-elect=true \
--master=127.0.0.1:8080 \
--bind-address=127.0.0.1 \
--allocate-node-cidrs=true \
--cluster-cidr=10.244.0.0/16 \
--service-cluster-ip-range=10.0.0.0/24 \
--cluster-signing-cert-file=/opt/k8s/ssl/ca.pem \
--cluster-signing-key-file=/opt/k8s/ssl/ca-key.pem  \
--root-ca-file=/opt/k8s/ssl/ca.pem \
--service-account-private-key-file=/opt/k8s/ssl/ca-key.pem \
--experimental-cluster-signing-duration=87600h0m0s"
EOF

Manage controller-manager with systemd

cat > /lib/systemd/system/kube-controller-manager.service << 'EOF'
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=/opt/k8s/cfg/kube-controller-manager.conf
ExecStart=/opt/k8s/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl start kube-controller-manager
systemctl enable kube-controller-manager

7. Deploy the kube-scheduler component

cat > /opt/k8s/cfg/kube-scheduler.conf << 'EOF'
KUBE_SCHEDULER_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/opt/k8s/logs \
--leader-elect \
--master=127.0.0.1:8080 \
--bind-address=127.0.0.1"
EOF

Manage scheduler with systemd

cat > /usr/lib/systemd/system/kube-scheduler.service << 'EOF'
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=/opt/k8s/cfg/kube-scheduler.conf
ExecStart=/opt/k8s/bin/kube-scheduler $KUBE_SCHEDULER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl start kube-scheduler
systemctl enable kube-scheduler

8. Check the cluster status

# Output like the following means the Master components are running normally
$ kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok                  
etcd-0               Healthy   {"health":"true"}   
controller-manager   Healthy   ok  

9. Prepare the Worker prerequisites on the Master

Generate the bootstrap kubeconfig used when a kubelet first joins the cluster

$ KUBE_CONFIG="/opt/k8s/cfg/bootstrap.kubeconfig"
$ KUBE_APISERVER="https://172.17.89.23:6443"
$ TOKEN="cf2cdbdd07d06094e598d83d0b89006b"   # 与token.csv里保持一致

# Generate the kubelet bootstrap kubeconfig
$ kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/k8s/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=${KUBE_CONFIG}
$ kubectl config set-credentials "kubelet-bootstrap" \
  --token=${TOKEN} \
  --kubeconfig=${KUBE_CONFIG}
$ kubectl config set-context default \
  --cluster=kubernetes \
  --user="kubelet-bootstrap" \
  --kubeconfig=${KUBE_CONFIG}
$ kubectl config use-context default --kubeconfig=${KUBE_CONFIG}

Generate the kube-proxy.kubeconfig file

$ cd ~/TLS/k8s

# Create the certificate signing request
$ cat > kube-proxy-csr.json << EOF
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

# Generate the certificate
$ cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy

# Generate the kubeconfig
$ KUBE_CONFIG="/opt/k8s/cfg/kube-proxy.kubeconfig"
$ KUBE_APISERVER="https://172.17.89.23:6443" # can be skipped in the same shell; set in the previous step

$ kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/k8s/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=${KUBE_CONFIG}
$ kubectl config set-credentials kube-proxy \
  --client-certificate=./kube-proxy.pem \
  --client-key=./kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=${KUBE_CONFIG}
$ kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=${KUBE_CONFIG}
$ kubectl config use-context default --kubeconfig=${KUBE_CONFIG}

On the Worker Nodes

The Master is not used as a Worker. The following runs on the k8s-node7 node; if resources are limited, it can also be done on the Master.

$ mkdir -p /opt/k8s/{bin,cfg,ssl,logs}

# Sync kubelet and kube-proxy from kubernetes/server/bin/ on the Master to the Worker
$ cp kubelet kube-proxy /opt/k8s/bin/

# Sync the Node files generated on the Master (under /opt/k8s/cfg/) to the Worker
$ cp bootstrap.kubeconfig kube-proxy.kubeconfig /opt/k8s/cfg/

# Sync ca.pem from the Master (under /opt/k8s/ssl/) to the Worker
$ cp ca.pem /opt/k8s/ssl/
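
The cp commands above assume the files have already been copied over; a hedged sketch of pushing them from the Master instead (hostnames per the hosts entries above):

# Run on the Master
scp /usr/local/src/kubernetes/server/bin/{kubelet,kube-proxy} k8s-node7:/opt/k8s/bin/
scp /opt/k8s/cfg/{bootstrap.kubeconfig,kube-proxy.kubeconfig} k8s-node7:/opt/k8s/cfg/
scp /opt/k8s/ssl/ca.pem k8s-node7:/opt/k8s/ssl/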

1. Install containerd

yum install -y yum-utils
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum install -y containerd.io
containerd config default | tee /etc/containerd/config.toml
sed -i 's#k8s.gcr.io/pause#docker.io/staugur/pause#' /etc/containerd/config.toml
systemctl daemon-reload
systemctl start containerd
systemctl enable containerd

// (Optional) install the crictl tool; a sketch follows
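
A hedged sketch of that optional install (the version here is an assumption; cri-tools publishes releases on GitHub):

VERSION="v1.18.0"
wget -c https://github.com/kubernetes-sigs/cri-tools/releases/download/${VERSION}/crictl-${VERSION}-linux-amd64.tar.gz
tar zxf crictl-${VERSION}-linux-amd64.tar.gz -C /usr/bin/
# Point crictl at the containerd socket
cat > /etc/crictl.yaml << EOF
runtime-endpoint: unix:///var/run/containerd/containerd.sock
EOF
crictl ps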

2. Deploy kubelet

cat > /opt/k8s/cfg/kubelet.conf << 'EOF'
KUBELET_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/opt/k8s/logs \
--hostname-override=k8s-node7 \
--kubeconfig=/opt/k8s/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/opt/k8s/cfg/bootstrap.kubeconfig \
--config=/opt/k8s/cfg/kubelet-config.yml \
--cert-dir=/opt/k8s/ssl \
--pod-infra-container-image=docker.io/staugur/pause:3.2 \
--container-runtime=remote \
--container-runtime-endpoint=unix:///var/run/containerd/containerd.sock"
EOF

Option notes:

  • --hostname-override: display name, unique within the cluster

  • --kubeconfig: an empty path; it is generated automatically once the TLS bootstrap request is approved, and is used to connect to the apiserver

  • --bootstrap-kubeconfig: used on first start to request a certificate from the apiserver

  • --config: configuration parameters file

  • --cert-dir: directory where kubelet certificates are generated

  • --pod-infra-container-image: image of the container that manages the Pod network

  • --container-runtime: container runtime; the default is still docker, it must be changed to remote here

  • --container-runtime-endpoint: endpoint of the remote runtime, i.e. the containerd socket path

Configuration parameters file

cat > /opt/k8s/cfg/kubelet-config.yml << 'EOF'
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS:
- 10.0.0.2
clusterDomain: cluster.local 
failSwapOn: false
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /opt/k8s/ssl/ca.pem 
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
maxOpenFiles: 1000000
maxPods: 110
EOF

Option notes:

clusterDNS: adjust to your environment; usually the second IP of the Service range

Manage kubelet with systemd

cat > /lib/systemd/system/kubelet.service << 'EOF'
[Unit]
Description=Kubernetes Kubelet
After=containerd.service

[Service]
EnvironmentFile=/opt/k8s/cfg/kubelet.conf
ExecStart=/opt/k8s/bin/kubelet $KUBELET_OPTS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl start kubelet
systemctl enable kubelet

Approve the kubelet certificate request and join the cluster

Switch to the Master node for this step!

# List kubelet certificate requests
$ kubectl get csr
NAME           AGE     SIGNERNAME                                    REQUESTOR           CONDITION
node-csr-xxx   2m26s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending

# Approve the request
$ kubectl certificate approve node-csr-xxx

# List nodes (NotReady, since the network plugin is not yet deployed)
$ kubectl get node
NAME        STATUS     ROLES    AGE    VERSION
k8s-node7   NotReady   <none>   3m6s   v1.18.3
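
When several nodes join at once, approving requests one by one gets tedious; a hedged one-liner that approves everything currently listed:

kubectl get csr -o name | xargs kubectl certificate approve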

3. Deploy kube-proxy

cat > /opt/k8s/cfg/kube-proxy.conf << 'EOF'
KUBE_PROXY_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/opt/k8s/logs \
--config=/opt/k8s/cfg/kube-proxy-config.yml"
EOF

Configuration parameters file

cat > /opt/k8s/cfg/kube-proxy-config.yml << 'EOF'
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
metricsBindAddress: 0.0.0.0:10249
clientConnection:
  kubeconfig: /opt/k8s/cfg/kube-proxy.kubeconfig
hostnameOverride: k8s-node7
clusterCIDR: 10.244.0.0/16
mode: iptables
EOF

Option notes:

hostnameOverride: display name, unique within the cluster

clusterCIDR: this parameter deserves special mention,

  • The official description is: clusterCIDR is the CIDR range of the pods in the cluster, i.e. the Pod CIDR range, yet many documents turn it into the Service CIDR range.

  • In ipvs mode that causes problems: node/pod access to the default Service ClusterIP times out. With the flannel network, for example, the pod simply will not come up, stuck in CrashLoopBackOff with the error failed to create subnetmanager: error retrieving pod spec, dial tcp 10.0.0.1:443 i/o timeout

Manage kube-proxy with systemd

cat > /lib/systemd/system/kube-proxy.service << 'EOF'
[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=/opt/k8s/cfg/kube-proxy.conf
ExecStart=/opt/k8s/bin/kube-proxy $KUBE_PROXY_OPTS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl start kube-proxy
systemctl enable kube-proxy
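
Once kube-proxy is running, the metrics endpoint on 10249 should report the active mode; a quick hedged check (the /proxyMode path is served by the metrics server, if memory serves):

$ curl localhost:10249/proxyMode
iptables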

4. Install cni-plugins

mkdir -p /opt/cni/bin
version="0.9.1"
wget -c https://github.com/containernetworking/plugins/releases/download/v${version}/cni-plugins-linux-amd64-v${version}.tgz
tar zxf cni-plugins-linux-amd64-v${version}.tgz -C /opt/cni/bin/

At this point the Worker node is roughly complete; to add more nodes, repeat this part.

// (Omitted) set up the k8s-node8 node the same way; remember to change the hostname (hostname-override)


Deploy the CNI network and other optional components (on the Master)

Deploy the flannel network component

Flannel runs as pods (using v0.13; note that flannel's default network must match the cluster network planned above)

KUBE_FLANNEL=https://raw.githubusercontent.com/flannel-io/flannel/v0.13.0/Documentation/kube-flannel.yml
kubectl apply -f $KUBE_FLANNEL

After the pods finish deploying, kubectl get node should report Ready

$ kubectl get node
NAME        STATUS   ROLES    AGE   VERSION
k8s-node7   Ready    <none>   13m   v1.18.3

Authorize apiserver access to kubelet

So that the apiserver can call the kubelet API on the nodes

# can live in your central yaml directory
$ cat > apiserver-to-kubelet-rbac.yaml << EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-apiserver-to-kubelet
rules:
  - apiGroups:
      - ""
    resources:
      - nodes/proxy
      - nodes/stats
      - nodes/log
      - nodes/spec
      - nodes/metrics
      - pods/log
    verbs:
      - "*"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: kubernetes
EOF

$ kubectl apply -f apiserver-to-kubelet-rbac.yaml
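
With the binding applied, apiserver-proxied operations such as kubectl logs and kubectl exec should now work against pods on the nodes; a quick smoke test (substitute a pod name from your own cluster):

kubectl logs -n kube-system kube-flannel-ds-7cssx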

Deploy the CoreDNS component

CoreDNS provides Service name resolution inside the cluster.

# Install the jq dependency (coredns' deploy.sh uses it to generate the yaml)
$ yum install jq -y

# Fetch the CoreDNS deployment project
$ git clone https://github.com/coredns/deployment.git
$ cd deployment/kubernetes
$ ./deploy.sh -i 10.0.0.2 | kubectl apply -f -

# Check the result
$ kubectl get svc -n kube-system -o wide
NAME       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE     SELECTOR
kube-dns   ClusterIP   10.0.0.2     <none>        53/UDP,53/TCP,9153/TCP   3m26s   k8s-app=kube-dns

$ kubectl get pods -n kube-system -l k8s-app=kube-dns -o wide
NAME                      READY   STATUS    RESTARTS   AGE     IP           NODE        NOMINATED NODE   READINESS GATES
coredns-6ff445f54-cg2zj   1/1     Running   0          9m16s   10.244.0.2   k8s-node7   <none>           <none>

By default deploy.sh auto-detects the kube-dns cluster IP, but since kube-dns was never deployed here, a cluster IP must be given manually (the address planned for kubelet's clusterDNS earlier), otherwise it errors out.
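
A hedged end-to-end check with a throwaway pod (busybox:1.28, since nslookup is broken in newer busybox tags); output along these lines means Service DNS works:

$ kubectl run -it --rm --restart=Never dns-test --image=busybox:1.28 -- nslookup kubernetes
Server:    10.0.0.2
Address 1: 10.0.0.2 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes
Address 1: 10.0.0.1 kubernetes.default.svc.cluster.local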

Wrap-up

The overall file layout and service status now look like this:

  • Master /opt/k8s
$ tree /opt/k8s/
/opt/k8s/
├── bin
│   ├── kube-apiserver
│   ├── kube-controller-manager
│   └── kube-scheduler
├── cfg
│   ├── bootstrap.kubeconfig    # actually unused on the Master
│   ├── kube-apiserver.conf
│   ├── kube-controller-manager.conf
│   ├── kube-proxy.kubeconfig   # actually unused on the Master
│   ├── kube-scheduler.conf
│   └── token.csv
├── logs
│   ├── ignore...
└── ssl
    ├── ca-key.pem
    ├── ca.pem
    ├── server-key.pem
    └── server.pem
  • Worker /opt/k8s
$ tree /opt/k8s/
/opt/k8s/
├── bin
│   ├── kubelet
│   └── kube-proxy
├── cfg
│   ├── bootstrap.kubeconfig
│   ├── kubelet.conf
│   ├── kubelet-config.yml
│   ├── kubelet.kubeconfig
│   ├── kube-proxy.conf
│   ├── kube-proxy-config.yml
│   └── kube-proxy.kubeconfig
├── logs
│   └── ignore...
└── ssl
    ├── ca.pem
    ├── kubelet-client-2021-05-11-17-14-05.pem
    ├── kubelet-client-current.pem -> /opt/k8s/ssl/kubelet-client-2021-05-11-17-14-05.pem
    ├── kubelet.crt
    └── kubelet.key

$ tree /opt/cni
/opt/cni/
└── bin
    ├── bandwidth
    ├── bridge
    ├── dhcp
    ├── firewall
    ├── flannel
    ├── host-device
    ├── host-local
    ├── ipvlan
    ├── loopback
    ├── macvlan
    ├── portmap
    ├── ptp
    ├── sbr
    ├── static
    ├── tuning
    ├── vlan
    └── vrf
  • Master cluster status (kubectl)
$ kubectl get cs,node -o wide
NAME                                 STATUS    MESSAGE             ERROR
componentstatus/controller-manager   Healthy   ok                  
componentstatus/etcd-0               Healthy   {"health":"true"}   
componentstatus/scheduler            Healthy   ok                  

NAME             STATUS   ROLES    AGE    VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION              CONTAINER-RUNTIME
node/k8s-node7   Ready    <none>   18h    v1.18.3   172.17.89.17   <none>        CentOS Linux 7 (Core)   3.10.0-514.6.2.el7.x86_64   containerd://1.4.4
node/k8s-node8   Ready    <none>   116m   v1.18.3   172.17.89.7    <none>        CentOS Linux 7 (Core)   3.10.0-514.6.2.el7.x86_64   containerd://1.4.4

$ kubectl get pod -n kube-system -o wide
NAME                      READY   STATUS    RESTARTS   AGE     IP             NODE        NOMINATED NODE   READINESS GATES
coredns-6ff445f54-cg2zj   1/1     Running   0          9m45s   10.244.0.2     k8s-node7   <none>           <none>
kube-flannel-ds-7cssx     1/1     Running   0          18h     172.17.89.17   k8s-node7   <none>           <none>
kube-flannel-ds-nj6tj     1/1     Running   2          114m    172.17.89.7    k8s-node8   <none>           <none>

$ kubectl get svc --all-namespaces 
NAMESPACE     NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
default       kubernetes   ClusterIP   10.0.0.1     <none>        443/TCP                  18h
kube-system   kube-dns     ClusterIP   10.0.0.2     <none>        53/UDP,53/TCP,9153/TCP   10m

·End·