Preface

Master components

· API Server: the unified entry point of the whole cluster. It coordinates the other components and exposes everything as a RESTful API; every create/update/delete/watch operation on cluster objects goes through the API Server, which then persists the data to Etcd.
· Kube-Scheduler: selects a Node for newly created Pods according to the scheduling algorithm. It can be deployed anywhere: on the same node as the other components or on a separate one.
· Controller-Manager: handles the cluster's routine background tasks. Each resource type has its own controller, and Controller-Manager is responsible for managing these controllers.
· Etcd: a distributed key-value store that holds the cluster state, for example Pod and Service objects.
· Docker: an open-source project written in Go that makes it easy to create and use containers. Docker packages a program together with all of its dependencies into a Docker container, so the program behaves the same in any environment.
· Flanneld: a cross-host container networking solution based on an overlay network; it encapsulates TCP packets inside another network packet for routing and forwarding. Together with etcd, Flannel lets Docker containers on different hosts reach each other over their internal IPs.
· Nginx: a high-performance HTTP and reverse-proxy web server that also provides IMAP/POP3/SMTP services.
· Kubectl: the built-in Kubernetes client, used to operate the Kubernetes cluster directly.

Node components

· Kubelet: the agent the Master runs on every Node. It manages the lifecycle of the containers on that machine: creating containers, mounting volumes for Pods, downloading secrets, reporting container and node status, and so on. Kubelet turns each Pod into a set of containers.
· Kube-Proxy: implements the Pod network proxy on each Node, maintaining network rules and performing layer-4 load balancing.
· Docker: an open-source project written in Go that makes it easy to create and use containers. Docker packages a program together with all of its dependencies into a Docker container, so the program behaves the same in any environment.
· Flanneld: a cross-host container networking solution based on an overlay network; it encapsulates TCP packets inside another network packet for routing and forwarding. Together with etcd, Flannel lets Docker containers on different hosts reach each other over their internal IPs.
· Nginx: a high-performance HTTP and reverse-proxy web server that also provides IMAP/POP3/SMTP services.


Machine allocation

Hostname   IP address     Components
k8s-01 10.200.0.120 kube-apiserver,kube-controller-manager,kube-scheduler,kubectl,kubelet,kube-proxy,docker,etcd,Flanneld,Nginx
k8s-02 10.200.0.121 kube-apiserver,kube-controller-manager,kube-scheduler,kubectl,kubelet,kube-proxy,docker,etcd,Flanneld,Nginx
k8s-03 10.200.0.122 kube-apiserver,kube-controller-manager,kube-scheduler,kubectl,kubelet,kube-proxy,docker,etcd,Flanneld,Nginx
k8s-04 10.200.0.123 kubelet,kube-proxy,docker,Flanneld,Nginx
k8s-05 10.200.0.124 kubelet,kube-proxy,docker,Flanneld,Nginx

Topology diagram

(Topology diagram image.)


Network planning

Subnet         CIDR            Notes
NodeSubnet     10.200.0.0/24   Host (node) subnet
ServiceSubnet  10.96.0.0/16    Service (SVC) subnet
PodSubnet      10.97.0.0/16    Pod subnet

Node initialization (all nodes)

# Disable the firewall
systemctl stop firewalld && systemctl disable firewalld
systemctl stop iptables && systemctl disable iptables

# Disable SELinux
setenforce 0 && sed -i 's/SELINUX=.*/SELINUX=disabled/g' /etc/selinux/config

# Disable swap
sed -ri 's/.*swap.*/#&/' /etc/fstab

# Sync time
(crontab -l;echo '*/30 * * * * /usr/sbin/ntpdate ntp1.aliyun.com && /usr/sbin/hwclock -w') | crontab -

# Raise the open-file and process limits
cat >> /etc/security/limits.d/nofile.conf <<EOF
* soft nofile 65536
* hard nofile 65536
EOF

echo "* - nofile 65535" >> /etc/security/limits.conf
echo "* - nproc 65536" >> /etc/security/limits.conf
sed -i 's#4096#65536#g' /etc/security/limits.d/20-nproc.conf

# Kernel parameter tuning
cat >> /etc/sysctl.conf <<EOF
net.ipv4.ip_local_port_range = 1024 65535
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 20480
net.ipv4.tcp_synack_retries = 2
net.ipv4.tcp_syn_retries = 2
net.core.somaxconn = 65535
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_fin_timeout = 30
net.ipv4.tcp_max_tw_buckets = 10240
net.ipv4.ip_forward = 1
net.ipv4.tcp_tw_recycle = 0
net.ipv4.neigh.default.gc_thresh1 = 1024
net.ipv4.neigh.default.gc_thresh2 = 2048
net.ipv4.neigh.default.gc_thresh3 = 4096
vm.swappiness = 0
vm.overcommit_memory = 1
vm.panic_on_oom = 0
fs.inotify.max_user_instances = 8192
fs.inotify.max_user_watches = 1048576
fs.file-max = 52706963
fs.nr_open = 52706963
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_intvl = 30
net.ipv4.tcp_keepalive_probes = 10
EOF

sysctl -p >/dev/null 2>&1
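
# Optional sanity check: the limits apply to new login sessions, the sysctl values immediately.
# A minimal verification (expected values follow from the settings above) might look like:
ulimit -n                                    # expect 65535/65536 in a fresh shell
sysctl net.ipv4.ip_forward net.core.somaxconn net.ipv4.neigh.default.gc_thresh3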

# Load the required kernel modules at boot
cat >> /etc/modules-load.d/k8s.conf <<EOF
overlay
br_netfilter
EOF

# Set the bridge-related sysctl parameters
cat >> /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF

# Load the modules immediately
modprobe overlay
modprobe br_netfilter

# Check that the modules loaded successfully
lsmod | grep br_netfilter
lsmod | grep overlay

# Add hosts entries
cat >> /etc/hosts << EOF
10.200.0.120 k8s-01
10.200.0.121 k8s-02
10.200.0.122 k8s-03
10.200.0.123 k8s-04
10.200.0.124 k8s-05
EOF

# Configure passwordless SSH (do this on every machine)
ssh-keygen
ssh-copy-id k8s-01
ssh-copy-id k8s-02
ssh-copy-id k8s-03
ssh-copy-id k8s-04
ssh-copy-id k8s-05
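
# Optional: instead of typing ssh-copy-id five times on every host, a small loop over the
# host names does the same thing (it still prompts for each machine's password):
for h in k8s-01 k8s-02 k8s-03 k8s-04 k8s-05; do ssh-copy-id "$h"; done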

# Install some common packages
yum install -y epel-release

sed -e 's|^mirrorlist=|#mirrorlist=|g' \
         -e 's|^#baseurl=http://mirror.centos.org/centos|baseurl=https://mirrors.tuna.tsinghua.edu.cn/centos|g' \
         -i.bak \
         /etc/yum.repos.d/CentOS-*.repo

sed -e 's!^metalink=!#metalink=!g' \
    -e 's!^#baseurl=!baseurl=!g' \
    -e 's!http://download\.fedoraproject\.org/pub/epel!https://mirrors.tuna.tsinghua.edu.cn/epel!g' \
    -e 's!http://download\.example/pub/epel!https://mirrors.tuna.tsinghua.edu.cn/epel!g' \
    -i /etc/yum.repos.d/epel*.repo

yum install -y chrony conntrack ipvsadm ipset jq iptables curl sysstat libseccomp wget socat git gcc-c++ make yum-utils device-mapper-persistent-data lvm2 bash-completion nfs-utils lrzsz

# Install Docker
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
sed -i 's+download.docker.com+mirrors.aliyun.com/docker-ce+' /etc/yum.repos.d/docker-ce.repo
yum makecache fast
yum -y install docker-ce

# Update the system
yum update -y

# Upgrade the kernel (we want at least 5.4)
# Import the signing key and install the elrepo repository
rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm

# Load the elrepo-kernel repository metadata
yum --disablerepo=\* --enablerepo=elrepo-kernel repolist

# List the available kernels
yum --disablerepo=\* --enablerepo=elrepo-kernel list kernel*

# Install the chosen kernel
yum remove -y kernel-tools-libs.x86_64 kernel-tools.x86_64
yum -y --enablerepo=elrepo-kernel install kernel-lt.x86_64 kernel-lt-tools.x86_64

# List the boot menu entries
awk -F\' '$1=="menuentry " {print $2}' /etc/grub2.cfg

# Set the default boot kernel (the index starts at 0)
grub2-set-default 0

# Regenerate the GRUB configuration
grub2-mkconfig -o /boot/grub2/grub.cfg

# Reboot to apply everything
init 6
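
# After the node comes back up, it is worth confirming it booted into the new kernel,
# that swap stayed off, and that the modules from /etc/modules-load.d/k8s.conf were loaded:
uname -r                                   # should report the kernel-lt (5.4.x) version
free -m | grep -i swap                     # swap total should be 0
lsmod | grep -E 'overlay|br_netfilter'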

Deploy the Master nodes

Deploy Etcd

  • Run on k8s-01
# Prepare the etcd binaries
cd /usr/local/src/ && wget https://github.com/etcd-io/etcd/releases/download/v3.3.25/etcd-v3.3.25-linux-amd64.tar.gz

tar xvf etcd-v3.3.25-linux-amd64.tar.gz
mv etcd-v3.3.25-linux-amd64/etcd /usr/bin/ && mv etcd-v3.3.25-linux-amd64/etcdctl /usr/bin/

chmod +x /usr/bin/etcd /usr/bin/etcdctl

# Prepare the cfssl binaries
wget https://github.com/cloudflare/cfssl/releases/download/1.2.0/cfssl-certinfo_linux-amd64 && wget https://github.com/cloudflare/cfssl/releases/download/1.2.0/cfssljson_linux-amd64 && wget https://github.com/cloudflare/cfssl/releases/download/1.2.0/cfssl_linux-amd64

mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo && mv cfssl_linux-amd64 /usr/bin/cfssl && mv cfssljson_linux-amd64 /usr/bin/cfssljson

chmod +x /usr/bin/cfssl-certinfo /usr/bin/cfssl /usr/bin/cfssljson

# Write the certificate config files
mkdir -p /etc/kubernetes/cert/
cd /etc/kubernetes/cert/
cat > ca-config.json << "EOF"
{
  "signing": {
    "default": {
      "expiry": "876000h"
    },
    "profiles": {
      "kubernetes": {
        "expiry": "876000h",
        "usages": ["signing", "key encipherment", "server auth", "client auth"]
      }
    }
  }
}
EOF

cat > etcd-csr.json << "EOF"
{
  "CN": "etcd",
  "hosts": ["127.0.0.1", "10.200.0.120", "10.200.0.121", "10.200.0.122"],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "GuangDong",
      "L": "ShenZhen",
      "O": "devops",
      "OU": "System"
    }
  ]
}
EOF

cat > ca-csr.json << "EOF"
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "GuangDong",
      "L": "ShenZhen",
      "O": "devops",
      "OU": "System"
    }
  ]
}
EOF

# Generate the etcd certificates
cfssl gencert -initca ca-csr.json | cfssljson -bare ca
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes etcd-csr.json | cfssljson -bare etcd

# Create the etcd systemd unit (the inline # comments below are explanatory only; systemd does not support trailing comments, so strip them before actually using the file)
mkdir -p /data/etcd/data.etcd
cat > /etc/systemd/system/etcd.service << "EOF"
[Unit]
Description=Etcd Server # Service description
After=network.target # Start after the network target
After=network-online.target # Start after the network is fully online
Wants=network-online.target # Prefer the network to be fully online first
Documentation=https://github.com/coreos # Documentation link

[Service]
Type=notify # systemd waits for the service's startup notification
WorkingDirectory=/data/etcd # Working directory of the service
ExecStart=/usr/bin/etcd \ # Start command and arguments
  --data-dir=/data/etcd/data.etcd \ # Data directory
  --name=k8s-01 \ # Name of this member
  --cert-file=/etc/kubernetes/cert/etcd.pem \ # TLS certificate file
  --key-file=/etc/kubernetes/cert/etcd-key.pem \ # TLS private key file
  --trusted-ca-file=/etc/kubernetes/cert/ca.pem \ # Trusted CA certificate file
  --peer-cert-file=/etc/kubernetes/cert/etcd.pem \ # Certificate for peer communication
  --peer-key-file=/etc/kubernetes/cert/etcd-key.pem \ # Private key for peer communication
  --peer-trusted-ca-file=/etc/kubernetes/cert/ca.pem \ # Trusted CA for peer communication
  --peer-client-cert-auth=true \ # Require client certificates from peers
  --client-cert-auth=true \ # Require client certificates from clients
  --listen-peer-urls=https://10.200.0.120:2380 \ # Peer listen address and port
  --initial-advertise-peer-urls=https://10.200.0.120:2380 \ # Peer URL advertised to the cluster
  --listen-client-urls=https://10.200.0.120:2379,http://127.0.0.1:2379 \ # Client listen addresses and ports
  --advertise-client-urls=https://10.200.0.120:2379 \ # Client URL advertised to the cluster
  --initial-cluster-token=etcd-cluster \ # Cluster token used when creating or joining a cluster
  --initial-cluster=k8s-01=https://10.200.0.120:2380,k8s-02=https://10.200.0.121:2380,k8s-03=https://10.200.0.122:2380 \ # Initial cluster: member name = peer URL
  --initial-cluster-state=new \ # Initial cluster state; new means a brand-new cluster
  --auto-compaction-mode=periodic \ # Auto-compaction mode: periodically compact old revisions
  --auto-compaction-retention=1 \ # Auto-compaction retention window (hours in periodic mode)
  --max-request-bytes=33554432 \ # Maximum size of a single request in bytes
  --quota-backend-bytes=6442450944 \ # Backend database size quota
  --heartbeat-interval=250 \ # Heartbeat interval (ms)
  --election-timeout=2000 # Election timeout (ms)
Restart=on-failure # Restart policy on failure
RestartSec=5 # Seconds to wait before restarting
LimitNOFILE=65536 # File-descriptor limit

[Install]
WantedBy=multi-user.target # Install target: multi-user
EOF

Notes on ca-config.json

  1. signing: defines the policies and default settings the CA uses when signing certificates.
  2. default: the default signing configuration, applied to certificate signing requests (CSRs) that do not reference a specific profile.
    • expiry: "876000h" is the default validity of signed certificates, here 100 years (expressed in hours). That is extremely long; production environments usually use something much shorter, for example one year ("8760h") or less.
  3. profiles: this section can define several signing configurations, each with its own parameters.
  4. kubernetes: a custom profile that can be referenced when issuing specific certificates.
    • expiry: also "876000h", so certificates issued with the kubernetes profile get the same long validity.
    • usages: the purposes the certificate may be used for:
      • signing: the certificate can be used to sign other certificates or documents.
      • key encipherment: the certificate's key can be used to encrypt keys during key exchange.
      • server auth: the certificate can be used to authenticate a server, i.e. server-side SSL/TLS authentication.
      • client auth: the certificate can be used to authenticate a client, i.e. client-side SSL/TLS authentication.

Notes on etcd-csr.json

  1. CN ("Common Name"): "etcd" is the common name of the certificate, normally used to identify the certificate owner; here it identifies the etcd cluster.
  2. hosts: the host names and IP addresses that should be embedded in the certificate. These are the addresses at which clients can securely reach the etcd members; here it includes the local address 127.0.0.1 and the three node addresses. When an etcd server or client establishes a TLS connection, the handshake checks whether the host name or IP address in the server certificate matches the address being connected to.
  3. key:
    • algo: "rsa" is the key algorithm, RSA in this case.
    • size: 2048 is the key length in bits, a reasonably secure size that does not hurt performance too much.
  4. names: organizational information about the certificate owner.
    • C ("Country"): "CN", the country code, here China.
    • ST ("State"): "GuangDong", the state or province.
    • L ("Locality"): "ShenZhen", the city.
    • O ("Organization"): "devops", the organization.
    • OU ("Organizational Unit"): "System", the organizational unit.
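
After the certificates have been generated, it is worth double-checking that the SANs and the expiry really match the CSR described above; cfssl-certinfo (installed earlier) prints a certificate as JSON, so a quick inspection on k8s-01 could look like this:

cd /etc/kubernetes/cert/
cfssl-certinfo -cert etcd.pem        # check the "sans" list and the "not_after" field
cfssl-certinfo -cert ca.pem          # the CA certificate produced by -initca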
  • Run on k8s-02 and k8s-03
mkdir -p /data/etcd/data.etcd && mkdir -p /etc/kubernetes/cert
  • Run on k8s-01
# Copy the files to k8s-02
scp -r /etc/kubernetes/cert/etcd.pem k8s-02:/etc/kubernetes/cert/etcd.pem
scp -r /etc/kubernetes/cert/etcd-key.pem k8s-02:/etc/kubernetes/cert/etcd-key.pem
scp -r /etc/kubernetes/cert/ca.pem k8s-02:/etc/kubernetes/cert/ca.pem
scp -r /etc/systemd/system/etcd.service k8s-02:/etc/systemd/system/etcd.service
scp -r /usr/bin/etcd k8s-02:/usr/bin/etcd
scp -r /usr/bin/etcdctl k8s-02:/usr/bin/etcdctl

# Copy the files to k8s-03
scp -r /etc/kubernetes/cert/etcd.pem k8s-03:/etc/kubernetes/cert/etcd.pem
scp -r /etc/kubernetes/cert/etcd-key.pem k8s-03:/etc/kubernetes/cert/etcd-key.pem
scp -r /etc/kubernetes/cert/ca.pem k8s-03:/etc/kubernetes/cert/ca.pem
scp -r /etc/systemd/system/etcd.service k8s-03:/etc/systemd/system/etcd.service
scp -r /usr/bin/etcd k8s-03:/usr/bin/etcd
scp -r /usr/bin/etcdctl k8s-03:/usr/bin/etcdctl
  • Run on k8s-01, k8s-02 and k8s-03 in turn
# Start etcd on k8s-01
# The command will appear to hang until a quorum forms; this is normal. Switch to k8s-02 and continue.
systemctl daemon-reload
systemctl start etcd

# Edit the etcd unit file on k8s-02
vim /etc/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos

[Service]
Type=notify
WorkingDirectory=/data/etcd
ExecStart=/usr/bin/etcd \
  --data-dir=/data/etcd/data.etcd \
  --name=k8s-02 \
  --cert-file=/etc/kubernetes/cert/etcd.pem \
  --key-file=/etc/kubernetes/cert/etcd-key.pem \
  --trusted-ca-file=/etc/kubernetes/cert/ca.pem \
  --peer-cert-file=/etc/kubernetes/cert/etcd.pem \
  --peer-key-file=/etc/kubernetes/cert/etcd-key.pem \
  --peer-trusted-ca-file=/etc/kubernetes/cert/ca.pem \
  --peer-client-cert-auth=true \
  --client-cert-auth=true \
  --listen-peer-urls=https://10.200.0.121:2380 \
  --initial-advertise-peer-urls=https://10.200.0.121:2380 \
  --listen-client-urls=https://10.200.0.121:2379,http://127.0.0.1:2379 \
  --advertise-client-urls=https://10.200.0.121:2379 \
  --initial-cluster-token=etcd-cluster \
  --initial-cluster=k8s-01=https://10.200.0.120:2380,k8s-02=https://10.200.0.121:2380,k8s-03=https://10.200.0.122:2380 \
  --initial-cluster-state=new \
  --auto-compaction-mode=periodic \
  --auto-compaction-retention=1 \
  --max-request-bytes=33554432 \
  --quota-backend-bytes=6442450944 \
  --heartbeat-interval=250 \
  --election-timeout=2000
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

# Start etcd on k8s-02
# It will also appear to hang; this is normal. Switch to k8s-03 and continue.
chmod +x /usr/bin/etcd /usr/bin/etcdctl
systemctl daemon-reload
systemctl start etcd

# Edit the etcd unit file on k8s-03
vim /etc/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos

[Service]
Type=notify
WorkingDirectory=/data/etcd
ExecStart=/usr/bin/etcd \
  --data-dir=/data/etcd/data.etcd \
  --name=k8s-03 \
  --cert-file=/etc/kubernetes/cert/etcd.pem \
  --key-file=/etc/kubernetes/cert/etcd-key.pem \
  --trusted-ca-file=/etc/kubernetes/cert/ca.pem \
  --peer-cert-file=/etc/kubernetes/cert/etcd.pem \
  --peer-key-file=/etc/kubernetes/cert/etcd-key.pem \
  --peer-trusted-ca-file=/etc/kubernetes/cert/ca.pem \
  --peer-client-cert-auth=true \
  --client-cert-auth=true \
  --listen-peer-urls=https://10.200.0.122:2380 \
  --initial-advertise-peer-urls=https://10.200.0.122:2380 \
  --listen-client-urls=https://10.200.0.122:2379,http://127.0.0.1:2379 \
  --advertise-client-urls=https://10.200.0.122:2379 \
  --initial-cluster-token=etcd-cluster \
  --initial-cluster=k8s-01=https://10.200.0.120:2380,k8s-02=https://10.200.0.121:2380,k8s-03=https://10.200.0.122:2380 \
  --initial-cluster-state=new \
  --auto-compaction-mode=periodic \
  --auto-compaction-retention=1 \
  --max-request-bytes=33554432 \
  --quota-backend-bytes=6442450944 \
  --heartbeat-interval=250 \
  --election-timeout=2000
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

# Start etcd on k8s-03
# After this, the etcd cluster should be up.
chmod +x /usr/bin/etcd /usr/bin/etcdctl
systemctl daemon-reload
systemctl start etcd

# Enable etcd at boot; run on k8s-01, k8s-02 and k8s-03
systemctl enable etcd
  • Run the following on any etcd node to check that the cluster is healthy
etcdctl --ca-file=/etc/kubernetes/cert/ca.pem --cert-file=/etc/kubernetes/cert/etcd.pem --key-file=/etc/kubernetes/cert/etcd-key.pem --endpoints="https://10.200.0.120:2379,https://10.200.0.121:2379,https://10.200.0.122:2379" cluster-health

member 36b21fa6b2b86b16 is healthy: got healthy result from https://10.200.0.121:2379
member 913e07ae55e548e4 is healthy: got healthy result from https://10.200.0.120:2379
member 9598d8c74eae4f43 is healthy: got healthy result from https://10.200.0.122:2379
cluster is healthy
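
The check above goes through etcdctl's v2 API (the etcd 3.3 etcdctl defaults to v2 unless ETCDCTL_API=3 is set), whereas the Kubernetes apiserver stores its data through the v3 API. If you also want to verify the v3 endpoints, a roughly equivalent check is:

ETCDCTL_API=3 etcdctl \
  --cacert=/etc/kubernetes/cert/ca.pem \
  --cert=/etc/kubernetes/cert/etcd.pem \
  --key=/etc/kubernetes/cert/etcd-key.pem \
  --endpoints="https://10.200.0.120:2379,https://10.200.0.121:2379,https://10.200.0.122:2379" \
  endpoint health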

Deploy Flanneld

  • Run on k8s-01, k8s-02 and k8s-03
# Start Docker
mkdir -p /etc/docker
cat > /etc/docker/daemon.json << EOF
{
  "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"]
}
EOF
systemctl daemon-reload
systemctl start docker
systemctl enable docker
  • Run on k8s-01
# Download flannel
cd /usr/local/src/ && wget https://github.com/flannel-io/flannel/releases/download/v0.11.0/flannel-v0.11.0-linux-amd64.tar.gz && 
tar xvf flannel-v0.11.0-linux-amd64.tar.gz && mv flanneld /usr/bin/ && mv mk-docker-opts.sh /usr/bin/ && chmod +x /usr/bin/flanneld /usr/bin/mk-docker-opts.sh

# Generate the flannel certificates
cd /etc/kubernetes/cert/
cat > flanneld-csr.json << "EOF"
{
  "CN": "flanneld",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "GuangDong",
      "L": "ShenZhen",
      "O": "devops",
      "OU": "System"
    }
  ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes flanneld-csr.json | cfssljson -bare flanneld

# Write the Pod subnet configuration into etcd
etcdctl --ca-file=/etc/kubernetes/cert/ca.pem --cert-file=/etc/kubernetes/cert/etcd.pem --key-file=/etc/kubernetes/cert/etcd-key.pem --endpoints="https://10.200.0.120:2379,https://10.200.0.121:2379,https://10.200.0.122:2379" set /kubernetes/network/config '{ "Network": "10.97.0.0/16", "Backend": {"Type": "vxlan"}}'

# Create the flanneld systemd unit (same caveat as the etcd unit: strip the inline comments before use)
cat > /etc/systemd/system/flanneld.service << "EOF"
[Unit]
Description=Flanneld overlay address etcd agent # Service description
After=network.target # Start after the network target
After=network-online.target # Start after the network is fully online
Wants=network-online.target # Prefer the network to be fully online first
After=etcd.service # Start after the etcd service
Before=docker.service # Start before the Docker service

[Service]
Type=notify # systemd waits for the service's startup notification
ExecStart=/usr/bin/flanneld \ # Start command
  -etcd-cafile=/etc/kubernetes/cert/ca.pem \ # CA certificate used for etcd
  -etcd-certfile=/etc/kubernetes/cert/flanneld.pem \ # flanneld client certificate
  -etcd-keyfile=/etc/kubernetes/cert/flanneld-key.pem \ # flanneld client key
  -etcd-endpoints=https://10.200.0.120:2379,https://10.200.0.121:2379,https://10.200.0.122:2379 \ # etcd endpoints
  -etcd-prefix=/kubernetes/network \ # etcd prefix under which the network config is stored
  -iface=eth0 \ # Network interface flannel binds to
  -ip-masq # Enable IP masquerade for traffic leaving the overlay
ExecStartPost=/usr/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker # Runs after flanneld starts; generates Docker's network options
Restart=always # Always restart
RestartSec=5 # Seconds between restarts
StartLimitInterval=0 # No limit on restart attempts

[Install]
WantedBy=multi-user.target # Start in the multi-user target
RequiredBy=docker.service # The Docker service requires this service
EOF

# Edit the parameters in the Docker unit file
vim /usr/lib/systemd/system/docker.service
#ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
# EnvironmentFile names a file from which systemd reads environment variables before starting the service.
# Here it loads the Docker network settings generated by Flannel,
# which tell Docker how to configure its network to work on top of the Flannel overlay.
EnvironmentFile=/run/flannel/docker
# ExecStart is the command used to start the service, here the Docker daemon.
# $DOCKER_NETWORK_OPTIONS comes from the /run/flannel/docker environment file and carries
# the network options (bridge IP, MTU) Docker needs to cooperate with the Flannel network,
# so that Docker containers use the overlay network managed by Flannel.
ExecStart=/usr/bin/dockerd  $DOCKER_NETWORK_OPTIONS

# Start flannel and restart Docker
systemctl daemon-reload
systemctl start flanneld
systemctl enable flanneld
systemctl restart docker

# Copy the files to k8s-02 and k8s-03
scp -r /usr/bin/flanneld 10.200.0.121:/usr/bin/flanneld
scp -r /usr/bin/flanneld 10.200.0.122:/usr/bin/flanneld
scp -r /usr/bin/mk-docker-opts.sh 10.200.0.121:/usr/bin/mk-docker-opts.sh
scp -r /usr/bin/mk-docker-opts.sh 10.200.0.122:/usr/bin/mk-docker-opts.sh
scp -r /etc/kubernetes/cert/flanneld.pem 10.200.0.121:/etc/kubernetes/cert/flanneld.pem
scp -r /etc/kubernetes/cert/flanneld.pem 10.200.0.122:/etc/kubernetes/cert/flanneld.pem
scp -r /etc/kubernetes/cert/flanneld-key.pem 10.200.0.121:/etc/kubernetes/cert/flanneld-key.pem 
scp -r /etc/kubernetes/cert/flanneld-key.pem 10.200.0.122:/etc/kubernetes/cert/flanneld-key.pem 
scp -r /usr/lib/systemd/system/docker.service 10.200.0.121:/usr/lib/systemd/system/docker.service
scp -r /usr/lib/systemd/system/docker.service 10.200.0.122:/usr/lib/systemd/system/docker.service
scp -r /etc/systemd/system/flanneld.service 10.200.0.121:/etc/systemd/system/flanneld.service
scp -r /etc/systemd/system/flanneld.service 10.200.0.122:/etc/systemd/system/flanneld.service
  • Run on k8s-02 and k8s-03
# Make the binaries executable, start flanneld, restart Docker
chmod +x /usr/bin/flanneld /usr/bin/mk-docker-opts.sh
systemctl daemon-reload
systemctl start flanneld
systemctl enable flanneld
systemctl restart docker
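
A quick way to confirm Flannel is actually working on every node: check the network config written to etcd, then look at the per-host subnet that mk-docker-opts.sh handed to Docker (the concrete subnet values differ from host to host):

# The cluster-wide network config stored in etcd
etcdctl --ca-file=/etc/kubernetes/cert/ca.pem --cert-file=/etc/kubernetes/cert/etcd.pem --key-file=/etc/kubernetes/cert/etcd-key.pem --endpoints="https://10.200.0.120:2379" get /kubernetes/network/config

# The per-host options generated for Docker, and the resulting interfaces
cat /run/flannel/docker
ip addr show flannel.1      # vxlan interface created by flanneld
ip addr show docker0        # should now sit inside a /24 carved out of 10.97.0.0/16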

Deploy kube-apiserver

  • Run on k8s-01
# Generate the certificates
cd /etc/kubernetes/cert
cat > kubernetes-csr.json << "EOF"
{
  "CN": "kubernetes",
  "hosts": [
    "127.0.0.1",
    "k8s-01",
    "k8s-02",
    "k8s-03",
    "10.96.0.1",
    "1.114.114.114",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "GuangDong",
      "L": "ShenZhen",
      "O": "devops",
      "OU": "System"
    }
  ]
}
EOF

cat > proxy-client-csr.json << "EOF"
{
  "CN": "aggregator",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "GuangDong",
      "L": "ShenZhen",
      "O": "devops",
      "OU": "System"
    }
  ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kubernetes-csr.json | cfssljson -bare kubernetes

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes proxy-client-csr.json | cfssljson -bare proxy-client

# Generate the encryption key
head -c 32 /dev/urandom | base64 > secret

cat secret
aee44b222621a036d79b0d3c87169c7f

# Write encryption-config.yaml
cat > /etc/kubernetes/encryption-config.yaml << "EOF"
kind: EncryptionConfig # Type of this file: an encryption configuration
apiVersion: v1 # API version
resources: # List of resources to encrypt
  - resources: # Resource types covered by this entry
      - secrets # Encrypt Secrets
    providers: # Encryption providers, in order
      - aescbc: # Encrypt with AES-CBC
          keys: # Keys used for encryption
            - name: key1 # Key name
              secret: aee44b222621a036d79b0d3c87169c7f # Key value; must be a base64-encoded random string
      - identity: {} # Fallback provider; if it were listed first, nothing would be encrypted
EOF

# Write audit-policy.yaml
cat > /etc/kubernetes/audit-policy.yaml << "EOF"
apiVersion: audit.k8s.io/v1beta1 # API version of the audit policy
kind: Policy # This is an audit policy object
rules: # List of audit rules
  # Do not log watch requests by kube-proxy on endpoints and services.
  - level: None # Do not record audit logs
    resources: # Resource types
      - group: "" # API group; the empty string is the core group
        resources: # Specific resources
          - endpoints
          - services
          - services/status
    users: # Users this rule applies to
      - 'system:kube-proxy'
    verbs: # Verbs this rule applies to
      - watch
  # Do not log get requests on nodes by members of the system:nodes group.
  - level: None
    resources:
      - group: ""
        resources:
          - nodes
          - nodes/status
    userGroups: # User groups this rule applies to
      - 'system:nodes'
    verbs:
      - get
  # In the kube-system namespace, do not log get/update on endpoints by certain system accounts.
  - level: None
    namespaces: # Namespaces this rule applies to
      - kube-system
    resources:
      - group: ""
        resources:
          - endpoints
    users: # Users this rule applies to
      - 'system:kube-controller-manager'
      - 'system:kube-scheduler'
      - 'system:serviceaccount:kube-system:endpoint-controller'
    verbs:
      - get
      - update
  # Do not log system:apiserver operations on namespace-related resources.
  - level: None
    resources:
      - group: ""
        resources:
          - namespaces
          - namespaces/status
          - namespaces/finalize
    users:
      - 'system:apiserver'
    verbs:
      - get
      - list
      - watch
  # For all resources in the known API groups, log the full request and response.
  - level: RequestResponse # Record the full request and response
    omitStages: # Stages to skip
      - RequestReceived
    resources: # API groups
      - group: ""
      - group: admissionregistration.k8s.io
      - group: apiextensions.k8s.io
      - group: apiregistration.k8s.io
      - group: apps
      - group: authentication.k8s.io
      - group: authorization.k8s.io
      - group: autoscaling
      - group: batch
      - group: certificates.k8s.io
      - group: extensions
      - group: metrics.k8s.io
      - group: networking.k8s.io
      - group: policy
      - group: rbac.authorization.k8s.io
      - group: scheduling.k8s.io
      - group: settings.k8s.io
      - group: storage.k8s.io
  # For everything else, log metadata only.
  - level: Metadata # Record request metadata only
    omitStages: # Stages to skip
      - RequestReceived
EOF

# Install kube-apiserver (the server tarball also contains the other Kubernetes binaries)
cd /usr/local/src/ && wget https://dl.k8s.io/v1.17.16/kubernetes-server-linux-amd64.tar.gz && tar xvf kubernetes-server-linux-amd64.tar.gz

cp kubernetes/server/bin/kube-apiserver /usr/bin/kube-apiserver
cp kubernetes/server/bin/kube-controller-manager /usr/bin/kube-controller-manager
cp kubernetes/server/bin/kube-scheduler /usr/bin/kube-scheduler
cp kubernetes/server/bin/kubelet /usr/bin/kubelet
cp kubernetes/server/bin/kubectl /usr/bin/kubectl
cp kubernetes/server/bin/kubeadm /usr/bin/kubeadm
cp kubernetes/server/bin/kube-proxy /usr/bin/kube-proxy

chmod +x /usr/bin/kube-apiserver /usr/bin/kube-controller-manager /usr/bin/kube-scheduler /usr/bin/kubelet /usr/bin/kubectl /usr/bin/kubeadm /usr/bin/kube-proxy

# Configure kubectl bash completion
kubectl completion bash | tee /etc/bash_completion.d/kubectl > /dev/null
source /etc/bash_completion.d/kubectl

# Create the kube-apiserver systemd unit (again, strip the inline comments before using the file)
mkdir -p /data/kube-apiserver
cat > /etc/systemd/system/kube-apiserver.service << "EOF"
[Unit]
Description=Kubernetes API Server # Service description
Documentation=https://github.com/GoogleCloudPlatform/kubernetes # Documentation link
After=network.target # Start after the network service

[Service]
WorkingDirectory=/data/kube-apiserver # Working directory
ExecStart=/usr/bin/kube-apiserver \ # Start command
  --advertise-address=10.200.0.120 \ # API server IP address advertised to the cluster
  --default-not-ready-toleration-seconds=360 \ # Default Pod toleration for not-ready nodes
  --default-unreachable-toleration-seconds=360 \ # Default Pod toleration for unreachable nodes
  --enable-aggregator-routing=true \ # Enable aggregator routing
  --feature-gates=DynamicAuditing=true \ # Feature gate for dynamic auditing
  --max-mutating-requests-inflight=2000 \ # Maximum concurrent mutating requests
  --max-requests-inflight=4000 \ # Maximum concurrent non-mutating requests
  --default-watch-cache-size=200 \ # Default watch cache size
  --delete-collection-workers=2 \ # Workers for collection deletion
  --encryption-provider-config=/etc/kubernetes/encryption-config.yaml \ # Encryption config file
  --etcd-cafile=/etc/kubernetes/cert/ca.pem \ # etcd CA certificate
  --etcd-certfile=/etc/kubernetes/cert/kubernetes.pem \ # Certificate used to access etcd
  --etcd-keyfile=/etc/kubernetes/cert/kubernetes-key.pem \ # Matching private key
  --etcd-servers=https://10.200.0.120:2379,https://10.200.0.121:2379,https://10.200.0.122:2379 \ # etcd endpoints
  --bind-address=10.200.0.120 \ # Listen address of the API server
  --secure-port=6443 \ # Secure port
  --tls-cert-file=/etc/kubernetes/cert/kubernetes.pem \ # TLS certificate
  --tls-private-key-file=/etc/kubernetes/cert/kubernetes-key.pem \ # TLS private key
  --insecure-port=8001 \ # Insecure port (not recommended)
  --audit-dynamic-configuration \ # Enable dynamic audit configuration
  --audit-log-maxage=15 \ # Maximum days to keep audit logs
  --audit-log-maxbackup=3 \ # Maximum number of audit log backups
  --audit-log-maxsize=100 \ # Maximum audit log size (MB)
  --audit-log-truncate-enabled \ # Enable audit log truncation
  --audit-log-path=/data/kube-apiserver/audit.log \ # Audit log path
  --audit-policy-file=/etc/kubernetes/audit-policy.yaml \ # Audit policy file
  --profiling \ # Enable profiling
  --anonymous-auth=false \ # Disable anonymous access
  --client-ca-file=/etc/kubernetes/cert/ca.pem \ # CA used for client certificate authentication
  --enable-bootstrap-token-auth \ # Enable bootstrap token authentication
  --requestheader-allowed-names="aggregator" \ # Allowed common names for request-header auth
  --requestheader-client-ca-file=/etc/kubernetes/cert/ca.pem \ # CA for the request-header client certificate
  --requestheader-extra-headers-prefix="X-Remote-Extra-" \ # Prefix of extra request headers
  --requestheader-group-headers=X-Remote-Group \ # Header carrying the user's groups
  --requestheader-username-headers=X-Remote-User \ # Header carrying the user name
  --service-account-key-file=/etc/kubernetes/cert/ca.pem \ # Key used to verify service account tokens
  --authorization-mode=Node,RBAC \ # Authorization modes
  --runtime-config=api/all=true \ # Runtime config
  --enable-admission-plugins=NodeRestriction \ # Enabled admission plugins
  --allow-privileged=true \ # Allow privileged containers
  --apiserver-count=3 \ # Number of API server instances
  --event-ttl=72h \ # Event retention time
  --kubelet-certificate-authority=/etc/kubernetes/cert/ca.pem \ # CA for kubelet server certificates
  --kubelet-client-certificate=/etc/kubernetes/cert/kubernetes.pem \ # Client certificate for talking to kubelets
  --kubelet-client-key=/etc/kubernetes/cert/kubernetes-key.pem \ # Matching private key
  --kubelet-https=true \ # Talk to kubelets over HTTPS
  --kubelet-timeout=10s \ # Timeout for kubelet calls
  --proxy-client-cert-file=/etc/kubernetes/cert/proxy-client.pem \ # Aggregator proxy client certificate
  --proxy-client-key-file=/etc/kubernetes/cert/proxy-client-key.pem \ # Matching private key
  --service-cluster-ip-range=10.96.0.0/16 \ # Service IP range
  --service-node-port-range=30000-32767 \ # NodePort range
  --logtostderr=true \ # Log to stderr
  --v=2 # Log level
Restart=on-failure # Restart on failure
RestartSec=10 # Seconds between restarts
Type=notify # systemd waits for the service's startup notification
LimitNOFILE=65536 # File-descriptor limit

[Install]
WantedBy=multi-user.target # Install target
EOF

# Start kube-apiserver
systemctl daemon-reload
systemctl start kube-apiserver
systemctl enable kube-apiserver
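
Before moving on, it can save time to confirm the apiserver really is serving on 6443. Anonymous auth is disabled above, so an unauthenticated request is rejected; the kubernetes.pem certificate (its profile includes client auth) can be used as a client certificate for a health check. A minimal check might be:

ss -tlnp | grep 6443
curl --cacert /etc/kubernetes/cert/ca.pem \
     --cert /etc/kubernetes/cert/kubernetes.pem \
     --key /etc/kubernetes/cert/kubernetes-key.pem \
     https://10.200.0.120:6443/healthz      # expect: ok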

Deploy kube-controller-manager

  • Run on k8s-01
# Generate the certificates
cd /etc/kubernetes/cert/
cat > kube-controller-manager-csr.json << "EOF"
{
  "CN": "system:kube-controller-manager",
  "hosts": ["127.0.0.1", "k8s-01", "k8s-02", "k8s-03"],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "US",
      "ST": "GuangDong",
      "L": "ShenZhen",
      "O": "system:kube-controller-manager",
      "OU": "System"
    }
  ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager

# Generate kube-controller-manager.kubeconfig
KUBE_CONFIG="/etc/kubernetes/kube-controller-manager.kubeconfig"
KUBE_APISERVER="https://127.0.0.1:8443"

kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/cert/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=${KUBE_CONFIG}

kubectl config set-credentials system:kube-controller-manager \
  --client-certificate=/etc/kubernetes/cert/kube-controller-manager.pem \
  --client-key=/etc/kubernetes/cert/kube-controller-manager-key.pem \
  --embed-certs=true \
  --kubeconfig=${KUBE_CONFIG}

kubectl config set-context system:kube-controller-manager \
  --cluster=kubernetes \
  --user=system:kube-controller-manager \
  --kubeconfig=${KUBE_CONFIG}

kubectl config use-context system:kube-controller-manager --kubeconfig=${KUBE_CONFIG}

# Deploy nginx (a local TCP proxy in front of the three apiservers)
cd /usr/local/src/ && wget https://nginx.org/download/nginx-1.24.0.tar.gz
tar xvf nginx-1.24.0.tar.gz && cd nginx-1.24.0

./configure --with-stream --without-http --prefix=/usr/local/nginx --without-http_uwsgi_module --without-http_scgi_module --without-http_fastcgi_module

make && make install

echo '' > /usr/local/nginx/conf/nginx.conf
vim /usr/local/nginx/conf/nginx.conf
worker_processes  1;

events {
    worker_connections  10240;
}

stream {
    upstream backend {
        hash $remote_addr consistent;
        server 10.200.0.120:6443  max_fails=3 fail_timeout=30s;
        server 10.200.0.121:6443  max_fails=3 fail_timeout=30s;
        server 10.200.0.122:6443  max_fails=3 fail_timeout=30s;
    }
    server {
        listen  8443;
        proxy_connect_timeout 1s;
        proxy_pass backend;
    }
}

vim /etc/systemd/system/nginx.service
[Unit]
Description=kube-apiserver nginx proxy
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=forking
ExecStart=/usr/local/nginx/sbin/nginx -c /usr/local/nginx/conf/nginx.conf
PrivateTmp=true
Restart=always
RestartSec=5
StartLimitInterval=0
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

systemctl start nginx && systemctl enable nginx
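
To confirm the local nginx stream proxy is forwarding to the apiservers, it is enough to see the apiserver answer through port 8443; since anonymous auth is disabled, a 401 response here already proves the 8443 path works end to end:

ss -tlnp | grep 8443                       # nginx listening on the proxy port
curl -k https://127.0.0.1:8443/version     # expect a 401 Unauthorized JSON body from the apiserver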

# Create the kube-controller-manager systemd unit (strip the inline comments before using the file)
cat > /etc/systemd/system/kube-controller-manager.service << "EOF"
[Unit]
Description=Kubernetes Controller Manager # Service description
Documentation=https://github.com/GoogleCloudPlatform/kubernetes # Documentation link

[Service]
WorkingDirectory=/data/kube-controller-manager # Working directory
ExecStart=/usr/bin/kube-controller-manager \ # Start command
  --profiling \ # Enable profiling
  --cluster-name=kubernetes \ # Cluster name
  --controllers=*,bootstrapsigner,tokencleaner \ # Controllers to start
  --kube-api-qps=1000 \ # QPS limit for requests to the API server
  --kube-api-burst=2000 \ # Maximum burst of requests
  --leader-elect \ # Enable leader election for high availability
  --use-service-account-credentials=true \ # Use per-controller service account credentials for API calls
  --concurrent-service-syncs=2 \ # Number of services synced concurrently
  --tls-cert-file=/etc/kubernetes/cert/kube-controller-manager.pem \ # TLS certificate
  --tls-private-key-file=/etc/kubernetes/cert/kube-controller-manager-key.pem \ # TLS private key
  --authentication-kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \ # kubeconfig for authentication
  --client-ca-file=/etc/kubernetes/cert/ca.pem \ # Client CA certificate
  --requestheader-client-ca-file=/etc/kubernetes/cert/ca.pem \ # Request-header client CA certificate
  --requestheader-extra-headers-prefix="X-Remote-Extra-" \ # Prefix of extra request headers
  --requestheader-group-headers=X-Remote-Group \ # Header carrying the user's groups
  --requestheader-username-headers=X-Remote-User \ # Header carrying the user name
  --authorization-kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \ # kubeconfig for authorization
  --cluster-signing-cert-file=/etc/kubernetes/cert/ca.pem \ # Cluster signing certificate
  --cluster-signing-key-file=/etc/kubernetes/cert/ca-key.pem \ # Cluster signing key
  --experimental-cluster-signing-duration=876000h \ # Validity of signed cluster certificates (experimental)
  --horizontal-pod-autoscaler-sync-period=10s \ # HPA sync period
  --concurrent-deployment-syncs=10 \ # Deployments synced concurrently
  --concurrent-gc-syncs=30 \ # Concurrent garbage-collection workers
  --node-cidr-mask-size=24 \ # CIDR mask size allocated per node
  --service-cluster-ip-range=10.96.0.0/16 \ # Service cluster IP range
  --pod-eviction-timeout=6m \ # Pod eviction timeout
  --terminated-pod-gc-threshold=10000 \ # Threshold for garbage-collecting terminated Pods
  --root-ca-file=/etc/kubernetes/cert/ca.pem \ # Root CA certificate
  --service-account-private-key-file=/etc/kubernetes/cert/ca-key.pem \ # Private key for signing service account tokens
  --kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \ # kubeconfig file
  --logtostderr=true \ # Log to stderr
  --v=2 # Log level
Restart=on-failure # Restart on failure
RestartSec=5 # Seconds between restarts

[Install]
WantedBy=multi-user.target # Install target
EOF

# Start the service
mkdir -p /data/kube-controller-manager
systemctl daemon-reload
systemctl start kube-controller-manager
systemctl enable kube-controller-manager
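
With --leader-elect enabled, the controller manager logs which instance acquired the leader lease at startup. A quick way to confirm it came up cleanly (the exact log wording can vary between versions) is:

systemctl is-active kube-controller-manager
journalctl -u kube-controller-manager --no-pager | grep -i 'lease\|leader' | tail -n 3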

Deploy kube-scheduler

  • Run on k8s-01
# Generate the certificates
cd /etc/kubernetes/cert/
cat > kube-scheduler-csr.json << "EOF"
{
  "CN": "system:kube-scheduler",
  "hosts": ["127.0.0.1", "k8s-01", "k8s-02", "k8s-03"],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "GuangDong",
      "L": "ShenZhen",
      "O": "system:kube-scheduler",
      "OU": "System"
    }
  ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler

# Write the kube-scheduler config file
cat > /etc/kubernetes/kube-scheduler.yaml << "EOF"
apiVersion: kubescheduler.config.k8s.io/v1alpha1 # Config API version
kind: KubeSchedulerConfiguration # Config kind
bindTimeoutSeconds: 600 # Pod binding timeout in seconds
clientConnection: # Client connection settings
  burst: 200 # Extra requests allowed on top of the QPS limit
  kubeconfig: "/etc/kubernetes/kube-scheduler.kubeconfig" # kubeconfig used by kube-scheduler
  qps: 100 # Queries-per-second limit
enableContentionProfiling: false # Enable contention profiling
enableProfiling: true # Enable profiling
hardPodAffinitySymmetricWeight: 1 # Symmetric weight for hard pod affinity
healthzBindAddress: 127.0.0.1:10251 # Health-check bind address
leaderElection: # Leader election settings
  leaderElect: true # Enable leader election
metricsBindAddress: 10.200.0.120:10251 # Metrics bind address
EOF

# Generate kube-scheduler.kubeconfig
KUBE_CONFIG="/etc/kubernetes/kube-scheduler.kubeconfig"
KUBE_APISERVER="https://127.0.0.1:8443"

kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/cert/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=${KUBE_CONFIG}
  
kubectl config set-credentials system:kube-scheduler \
  --client-certificate=/etc/kubernetes/cert/kube-scheduler.pem \
  --client-key=/etc/kubernetes/cert/kube-scheduler-key.pem \
  --embed-certs=true \
  --kubeconfig=${KUBE_CONFIG}
  
kubectl config set-context system:kube-scheduler \
  --cluster=kubernetes \
  --user=system:kube-scheduler \
  --kubeconfig=${KUBE_CONFIG}
  
kubectl config use-context system:kube-scheduler --kubeconfig=${KUBE_CONFIG}

# Create the kube-scheduler systemd unit (strip the inline comments before using the file)
cat > /etc/systemd/system/kube-scheduler.service << "EOF"
[Unit]
Description=Kubernetes Scheduler # Service description
Documentation=https://github.com/GoogleCloudPlatform/kubernetes # Documentation link

[Service]
WorkingDirectory=/data/kube-scheduler # Working directory
ExecStart=/usr/bin/kube-scheduler \ # Start command
  --config=/etc/kubernetes/kube-scheduler.yaml \ # kube-scheduler config file
  --bind-address=10.200.0.120 \ # Bind IP address
  --secure-port=10259 \ # Secure port
  --port=0 \ # Deprecated insecure port (0 disables it)
  --tls-cert-file=/etc/kubernetes/cert/kube-scheduler.pem \ # TLS certificate
  --tls-private-key-file=/etc/kubernetes/cert/kube-scheduler-key.pem \ # TLS private key
  --authentication-kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig \ # kubeconfig for authentication
  --client-ca-file=/etc/kubernetes/cert/ca.pem \ # Client CA certificate
  --requestheader-client-ca-file=/etc/kubernetes/cert/ca.pem \ # Request-header CA certificate
  --requestheader-extra-headers-prefix="X-Remote-Extra-" \ # Prefix of extra request headers
  --requestheader-group-headers=X-Remote-Group \ # Header carrying the user's groups
  --requestheader-username-headers=X-Remote-User \ # Header carrying the user name
  --authorization-kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig \ # kubeconfig for authorization
  --logtostderr=true \ # Log to stderr
  --v=2 # Log level
Restart=always # Always restart
RestartSec=5 # Seconds between restarts
StartLimitInterval=0 # No limit on restart attempts

[Install]
WantedBy=multi-user.target # Install target
EOF

# Start the service
mkdir -p /data/kube-scheduler
systemctl daemon-reload
systemctl start kube-scheduler
systemctl enable kube-scheduler
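
The scheduler config above binds its plain-HTTP health endpoint to 127.0.0.1:10251 (the same endpoint kubectl get cs probes in this version), so a local check is straightforward:

systemctl is-active kube-scheduler
curl http://127.0.0.1:10251/healthz        # expect: ok
ss -tlnp | grep kube-scheduler             # also shows the secure port 10259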

Check the cluster status

  • Run on k8s-01
# Generate the admin certificate
cd /etc/kubernetes/cert/
cat > admin-csr.json << "EOF"
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "GuangDong",
      "L": "ShenZhen",
      "O": "system:masters",
      "OU": "devops"
    }
  ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin

# Generate the kubeconfig file
KUBE_CONFIG="/root/.kube/config"
KUBE_APISERVER="https://127.0.0.1:8443"

kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/cert/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=${KUBE_CONFIG}
  
kubectl config set-credentials admin \
  --client-certificate=/etc/kubernetes/cert/admin.pem \
  --client-key=/etc/kubernetes/cert/admin-key.pem \
  --embed-certs=true \
  --kubeconfig=${KUBE_CONFIG}
  
kubectl config set-context kubernetes \
  --cluster=kubernetes \
  --user=admin \
  --kubeconfig=${KUBE_CONFIG}
  
kubectl config use-context kubernetes --kubeconfig=${KUBE_CONFIG}

# Authorize the kubelet-bootstrap group to request certificates
kubectl create clusterrolebinding kubelet-bootstrap \
  --clusterrole=system:node-bootstrapper \
  --group=system:bootstrappers

# Check the cluster status
# Output like the following means the Master components are running normally
kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok                  
scheduler            Healthy   ok                  
etcd-2               Healthy   {"health":"true"}   
etcd-1               Healthy   {"health":"true"}   
etcd-0               Healthy   {"health":"true"}  

Finish deploying all 3 Master nodes

  • Run on k8s-01
# Copy the files to k8s-02 and k8s-03
scp -r /etc/kubernetes/* 10.200.0.121:/etc/kubernetes/
scp -r /etc/kubernetes/* 10.200.0.122:/etc/kubernetes/
scp -r /etc/systemd/system/kube-* 10.200.0.121:/etc/systemd/system/
scp -r /etc/systemd/system/kube-* 10.200.0.122:/etc/systemd/system/
scp -r /usr/bin/kube* 10.200.0.121:/usr/bin/
scp -r /usr/bin/kube* 10.200.0.122:/usr/bin/
scp -r /root/.kube 10.200.0.121:/root/
scp -r /root/.kube 10.200.0.122:/root/
  • Run on k8s-02 and k8s-03
# Deploy nginx
cd /usr/local/src/ && wget https://nginx.org/download/nginx-1.24.0.tar.gz
tar xvf nginx-1.24.0.tar.gz && cd nginx-1.24.0

./configure --with-stream --without-http --prefix=/usr/local/nginx --without-http_uwsgi_module --without-http_scgi_module --without-http_fastcgi_module

make && make install

echo '' > /usr/local/nginx/conf/nginx.conf
vim /usr/local/nginx/conf/nginx.conf
worker_processes  1;

events {
    worker_connections  10240;
}

stream {
    upstream backend {
        hash $remote_addr consistent;
        server 10.200.0.120:6443  max_fails=3 fail_timeout=30s;
        server 10.200.0.121:6443  max_fails=3 fail_timeout=30s;
        server 10.200.0.122:6443  max_fails=3 fail_timeout=30s;
    }
    server {
        listen  8443;
        proxy_connect_timeout 1s;
        proxy_pass backend;
    }
}

vim /etc/systemd/system/nginx.service
[Unit]
Description=kube-apiserver nginx proxy
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=forking
ExecStart=/usr/local/nginx/sbin/nginx -c /usr/local/nginx/conf/nginx.conf
PrivateTmp=true
Restart=always
RestartSec=5
StartLimitInterval=0
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

systemctl start nginx && systemctl enable nginx

# Edit kube-apiserver.service
vim /etc/systemd/system/kube-apiserver.service

# On k8s-02 change to
--advertise-address=10.200.0.121
--bind-address=10.200.0.121

# On k8s-03 change to
--advertise-address=10.200.0.122
--bind-address=10.200.0.122

# Edit /etc/kubernetes/kube-scheduler.yaml
vim /etc/kubernetes/kube-scheduler.yaml

# On k8s-02 change to
metricsBindAddress: 10.200.0.121:10251

# On k8s-03 change to
metricsBindAddress: 10.200.0.122:10251

# Edit kube-scheduler.service
vim /etc/systemd/system/kube-scheduler.service

# On k8s-02 change to
--bind-address=10.200.0.121

# On k8s-03 change to
--bind-address=10.200.0.122

# Start the services; run on both k8s-02 and k8s-03
mkdir -p /data/{kube-apiserver,kube-controller-manager,kube-scheduler}
systemctl daemon-reload
systemctl start kube-apiserver kube-controller-manager kube-scheduler
systemctl enable kube-apiserver kube-controller-manager kube-scheduler

At this point the 3 Master nodes are fully deployed; next, we deploy the Node nodes.


Deploy the Node nodes

Deploy kubelet and kube-proxy

First we also set up the existing Master nodes as Node nodes, so that their resources are not wasted.

  • Run on k8s-01
# Generate the kubelet bootstrap kubeconfig
cd /etc/kubernetes

KUBE_APISERVER="https://127.0.0.1:8443"

BOOTSTRAP_TOKEN=$(kubeadm token create --description kubelet-bootstrap-token --groups system:bootstrappers:k8s-01 --kubeconfig /root/.kube/config)

kubectl config set-cluster kubernetes --certificate-authority=/etc/kubernetes/cert/ca.pem --embed-certs=true --server=${KUBE_APISERVER} --kubeconfig=kubelet-bootstrap-k8s-01.kubeconfig

kubectl config set-credentials kubelet-bootstrap --token=${BOOTSTRAP_TOKEN} --kubeconfig=kubelet-bootstrap-k8s-01.kubeconfig

kubectl config set-context default --cluster=kubernetes --user=kubelet-bootstrap --kubeconfig=kubelet-bootstrap-k8s-01.kubeconfig

kubectl config use-context default --kubeconfig=kubelet-bootstrap-k8s-01.kubeconfig

mv kubelet-bootstrap-k8s-01.kubeconfig kubelet-bootstrap.kubeconfig
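
The bootstrap token created by kubeadm token create is stored as a secret in kube-system and, by default, expires after 24 hours, so if a node does not join within that window a new token (and bootstrap kubeconfig) has to be generated. The tokens can be listed with:

kubeadm token list --kubeconfig /root/.kube/config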

# Write kubelet-config.yaml
cat > kubelet-config.yaml << "EOF"
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: "10.200.0.120" # kubelet监听的地址
staticPodPath: "" # 静态Pod的路径
syncFrequency: 1m # 同步频率
fileCheckFrequency: 20s # 文件检查频率
httpCheckFrequency: 20s # HTTP检查频率
staticPodURL: "" # 静态Pod的URL
port: 10250 # kubelet的监听端口
readOnlyPort: 0 # kubelet的只读端口(0表示不启用)
rotateCertificates: true # 启用证书轮换
serverTLSBootstrap: true # 启用TLS引导
authentication: # 认证配置
  anonymous:
    enabled: false # 禁用匿名访问
  webhook:
    enabled: true # 启用webhook认证
  x509:
    clientCAFile: "/etc/kubernetes/cert/ca.pem" # 客户端CA证书文件
authorization: # 授权配置
  mode: Webhook # 使用Webhook模式
registryPullQPS: 0 # 镜像拉取的查询频率(0表示不限制)
registryBurst: 20 # 镜像拉取的并发数量
eventRecordQPS: 0 # 事件记录的查询频率(0表示不限制)
eventBurst: 20 # 事件记录的并发数量
enableDebuggingHandlers: true # 启用调试处理程序
enableContentionProfiling: true # 启用争用分析
healthzPort: 10248 # 健康检查端口
healthzBindAddress: "10.200.0.120" # 健康检查绑定地址
clusterDomain: "cluster.local" # 集群域名
clusterDNS: # 集群DNS服务器
  - "10.96.0.2"
nodeStatusUpdateFrequency: 10s # 节点状态更新频率
nodeStatusReportFrequency: 1m # 节点状态报告频率
imageMinimumGCAge: 2m # 镜像垃圾回收的最小年龄
imageGCHighThresholdPercent: 85 # 镜像垃圾回收的高阈值百分比
imageGCLowThresholdPercent: 80 # 镜像垃圾回收的低阈值百分比
volumeStatsAggPeriod: 1m # 卷统计聚合周期
kubeletCgroups: "" # kubelet使用的cgroup
systemCgroups: "" # 系统使用的cgroup
cgroupRoot: "" # cgroup根目录
cgroupsPerQOS: true # 每个QOS使用单独的cgroup
cgroupDriver: cgroupfs # cgroup驱动
runtimeRequestTimeout: 10m # 运行时请求超时
hairpinMode: promiscuous-bridge # hairpin模式
maxPods: 200 # 最大Pod数量
podCIDR: "10.97.0.0/16" # Pod网络范围
podPidsLimit: -1 # 每个Pod的PID限制(-1表示不限制)
resolvConf: /etc/resolv.conf # DNS解析配置文件
maxOpenFiles: 1000000 # 最大打开文件数量
kubeAPIQPS: 1000 # 对API server的QPS
kubeAPIBurst: 2000 # 对API server的并发量
serializeImagePulls: false # 镜像拉取串行化
evictionHard: # 硬性驱逐阈值
  memory.available: "100Mi"
  nodefs.available: "10%"
  nodefs.inodesFree: "5%"
  imagefs.available: "15%"
evictionSoft: {} # 软性驱逐阈值
enableControllerAttachDetach: true # 启用控制器附加/分离
failSwapOn: true # 当swap开启时kubelet启动失败
containerLogMaxSize: 20Mi # 容器日志最大大小
containerLogMaxFiles: 10 # 容器日志文件最大数量
systemReserved: {} # 系统预留资源
kubeReserved: {} # kubelet预留资源
systemReservedCgroup: "" # 系统预留资源的cgroup
kubeReservedCgroup: "" # kubelet预留资源的cgroup
enforceNodeAllocatable: ["pods"] # 强制节点可分配的资源类型
EOF

# Create the kubelet systemd unit (strip the inline comments before using the file)
cat > /etc/systemd/system/kubelet.service << "EOF"
[Unit]
Description=Kubernetes Kubelet # Service description: the Kubernetes kubelet component
Documentation=https://github.com/GoogleCloudPlatform/kubernetes # Documentation link
After=docker.service # Start after the docker service
Requires=docker.service # Requires the docker service

[Service]
WorkingDirectory=/data/kubelet # Working directory set to /data/kubelet
ExecStart=/usr/bin/kubelet \ # Start command
  --bootstrap-kubeconfig=/etc/kubernetes/kubelet-bootstrap.kubeconfig \ # Bootstrap kubeconfig path
  --cert-dir=/etc/kubernetes/cert \ # Certificate directory
  --container-runtime=docker \ # Container runtime: docker
  --container-runtime-endpoint=unix:///var/run/dockershim.sock \ # Container runtime endpoint
  --root-dir=/data/kubelet \ # kubelet root directory
  --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \ # kubeconfig written after bootstrapping
  --config=/etc/kubernetes/kubelet-config.yaml \ # kubelet config file
  --hostname-override=k8s-01 \ # Hostname override
  --pod-infra-container-image=registry.cn-beijing.aliyuncs.com/abcdocker/pause-amd64:3.1 \ # Pod infrastructure (pause) image
  --image-pull-progress-deadline=15m \ # Deadline for image pull progress
  --volume-plugin-dir=/data/kubelet/kubelet-plugins/volume/exec/ \ # Volume plugin directory
  --logtostderr=true \ # Log to stderr
  --v=2 # Log level
Restart=always # Always restart
RestartSec=5 # Restart interval of 5 seconds
StartLimitInterval=0 # No limit on restart attempts

[Install]
WantedBy=multi-user.target # Wanted by the multi-user target
EOF

# Generate the kube-proxy certificates
cat > /etc/kubernetes/cert/kube-proxy-csr.json << "EOF"
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "GuangDong",
      "L": "ShenZhen",
      "O": "devops",
      "OU": "System"
    }
  ]
}
EOF

cd cert/ && cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy

# Write kube-proxy.yaml
cat > /etc/kubernetes/kube-proxy.yaml << "EOF"
kind: KubeProxyConfiguration # Kind: KubeProxy configuration
apiVersion: kubeproxy.config.k8s.io/v1alpha1 # API version
clientConnection: # Client connection settings
  burst: 4000 # Maximum burst of client requests
  kubeconfig: "/etc/kubernetes/kube-proxy.kubeconfig" # kubeconfig used by kube-proxy
  qps: 1000 # Queries per second (QPS)
bindAddress: 10.200.0.120 # IP address kube-proxy binds to
healthzBindAddress: 10.200.0.120:10256 # Health-check bind address and port
metricsBindAddress: 10.200.0.120:10249 # Metrics bind address and port
enableProfiling: true # Enable profiling
clusterCIDR: 10.97.0.0/16 # Pod IP range of the cluster
hostnameOverride: k8s-01 # Hostname override
mode: "ipvs" # kube-proxy mode, here ipvs
portRange: "" # Service port range; empty means no restriction
kubeProxyIPTablesConfiguration: # iptables settings
  masqueradeAll: false # Whether to masquerade all outgoing traffic
kubeProxyIPVSConfiguration: # ipvs settings
  scheduler: wrr # Load-balancing algorithm, weighted round robin (wrr)
  excludeCIDRs: [] # CIDRs to exclude; empty means none
EOF
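
mode: "ipvs" only works if the IPVS kernel modules can be loaded; ipvsadm and ipset were installed during node initialization, but the modules themselves were never configured to load. A sketch of what is typically added on every node (on the 5.4 kernel installed earlier the conntrack module is nf_conntrack; on the stock 3.10 kernel it would be nf_conntrack_ipv4). Note that kube-proxy falls back to iptables mode if IPVS is unavailable:

cat > /etc/modules-load.d/ipvs.conf << EOF
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
EOF
for m in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack; do modprobe $m; done
lsmod | grep -E 'ip_vs|nf_conntrack'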

# Generate kube-proxy.kubeconfig
KUBE_CONFIG="/etc/kubernetes/kube-proxy.kubeconfig"
KUBE_APISERVER="https://127.0.0.1:8443"

kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/cert/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=${KUBE_CONFIG}
  
kubectl config set-credentials kube-proxy \
  --client-certificate=/etc/kubernetes/cert/kube-proxy.pem \
  --client-key=/etc/kubernetes/cert/kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=${KUBE_CONFIG}
  
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=${KUBE_CONFIG}
  
kubectl config use-context default --kubeconfig=${KUBE_CONFIG}

# Create the kube-proxy systemd unit (strip the inline comments before using the file)
cat > /etc/systemd/system/kube-proxy.service << "EOF"
[Unit]
Description=Kubernetes Kube-Proxy Server # Service description: the Kubernetes kube-proxy server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes # Documentation link
After=network.target # Start after the network service

[Service]
WorkingDirectory=/data/kube-proxy # Working directory: /data/kube-proxy
ExecStart=/usr/bin/kube-proxy \ # Start command
  --config=/etc/kubernetes/kube-proxy.yaml \ # kube-proxy config file
  --logtostderr=true \ # Log to stderr
  --v=2 # Log level
Restart=on-failure # Restart on failure
RestartSec=5 # Restart interval of 5 seconds
LimitNOFILE=65536 # Maximum number of open file descriptors

[Install]
WantedBy=multi-user.target # Install into the multi-user target
EOF

# Start kubelet and kube-proxy
mkdir -p /data/{kubelet,kube-proxy}
systemctl daemon-reload
systemctl start kubelet.service kube-proxy.service
systemctl enable kubelet.service kube-proxy.service

# Approve the certificate requests
kubectl get csr
NAME        AGE   REQUESTOR                 CONDITION
csr-jwlqj   36s   system:bootstrap:74j579   Pending

kubectl certificate approve csr-jwlqj

kubectl get csr
NAME        AGE   REQUESTOR                 CONDITION
csr-jwlqj   90s   system:bootstrap:74j579   Approved,Issued
csr-m5zbn   13s   system:node:k8s-01        Pending

kubectl certificate approve csr-m5zbn

# Check the node status
kubectl get nodes
NAME     STATUS   ROLES    AGE    VERSION
k8s-01   Ready    <none>   105s   v1.17.16
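
Approving the CSRs one at a time is fine for a single node, but when several kubelets bootstrap at once it is easier to approve everything that is currently pending in one go (only do this while you are deliberately joining nodes, since it approves every pending request):

kubectl get csr | grep Pending | awk '{print $1}' | xargs -r kubectl certificate approve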

# Generate the bootstrap config for the k8s-02 node
cd /etc/kubernetes

KUBE_APISERVER="https://127.0.0.1:8443"

BOOTSTRAP_TOKEN=$(kubeadm token create --description kubelet-bootstrap-token --groups system:bootstrappers:k8s-02 --kubeconfig /root/.kube/config)

kubectl config set-cluster kubernetes --certificate-authority=/etc/kubernetes/cert/ca.pem --embed-certs=true --server=${KUBE_APISERVER} --kubeconfig=kubelet-bootstrap-k8s-02.kubeconfig

kubectl config set-credentials kubelet-bootstrap --token=${BOOTSTRAP_TOKEN} --kubeconfig=kubelet-bootstrap-k8s-02.kubeconfig

kubectl config set-context default --cluster=kubernetes --user=kubelet-bootstrap --kubeconfig=kubelet-bootstrap-k8s-02.kubeconfig

kubectl config use-context default --kubeconfig=kubelet-bootstrap-k8s-02.kubeconfig

# Generate the bootstrap config for the k8s-03 node
cd /etc/kubernetes

KUBE_APISERVER="https://127.0.0.1:8443"

BOOTSTRAP_TOKEN=$(kubeadm token create --description kubelet-bootstrap-token --groups system:bootstrappers:k8s-03 --kubeconfig /root/.kube/config)

kubectl config set-cluster kubernetes --certificate-authority=/etc/kubernetes/cert/ca.pem --embed-certs=true --server=${KUBE_APISERVER} --kubeconfig=kubelet-bootstrap-k8s-03.kubeconfig

kubectl config set-credentials kubelet-bootstrap --token=${BOOTSTRAP_TOKEN} --kubeconfig=kubelet-bootstrap-k8s-03.kubeconfig

kubectl config set-context default --cluster=kubernetes --user=kubelet-bootstrap --kubeconfig=kubelet-bootstrap-k8s-03.kubeconfig

kubectl config use-context default --kubeconfig=kubelet-bootstrap-k8s-03.kubeconfig

# Copy the files to k8s-02 and k8s-03
scp -r kubelet-bootstrap-k8s-02.kubeconfig 10.200.0.121:/etc/kubernetes/
scp -r kubelet-config.yaml 10.200.0.121:/etc/kubernetes/
scp -r kube-proxy.* 10.200.0.121:/etc/kubernetes/
scp -r cert/kube-proxy* 10.200.0.121:/etc/kubernetes/cert/
scp -r /etc/systemd/system/kubelet.service 10.200.0.121:/etc/systemd/system/
scp -r /etc/systemd/system/kube-proxy.service 10.200.0.121:/etc/systemd/system/

scp -r kubelet-bootstrap-k8s-03.kubeconfig 10.200.0.122:/etc/kubernetes/
scp -r kubelet-config.yaml 10.200.0.122:/etc/kubernetes/
scp -r kube-proxy.* 10.200.0.122:/etc/kubernetes/
scp -r cert/kube-proxy* 10.200.0.122:/etc/kubernetes/cert/
scp -r /etc/systemd/system/kubelet.service 10.200.0.122:/etc/systemd/system/
scp -r /etc/systemd/system/kube-proxy.service 10.200.0.122:/etc/systemd/system/
  • Run on k8s-02 and k8s-03
# Modify the config files
# On k8s-02
mv /etc/kubernetes/kubelet-bootstrap-k8s-02.kubeconfig /etc/kubernetes/kubelet-bootstrap.kubeconfig

vim /etc/kubernetes/kubelet-config.yaml
address: "10.200.0.120"
healthzBindAddress: "10.200.0.120"
change to
address: "10.200.0.121"
healthzBindAddress: "10.200.0.121"

vim /etc/kubernetes/kube-proxy.yaml
bindAddress: 10.200.0.120
healthzBindAddress: 10.200.0.120:10256
metricsBindAddress: 10.200.0.120:10249
hostnameOverride: k8s-01
change to
bindAddress: 10.200.0.121
healthzBindAddress: 10.200.0.121:10256
metricsBindAddress: 10.200.0.121:10249
hostnameOverride: k8s-02

vim /etc/systemd/system/kubelet.service
--hostname-override=k8s-01
change to
--hostname-override=k8s-02

# On k8s-03
mv /etc/kubernetes/kubelet-bootstrap-k8s-03.kubeconfig /etc/kubernetes/kubelet-bootstrap.kubeconfig

vim /etc/kubernetes/kubelet-config.yaml
address: "10.200.0.120"
healthzBindAddress: "10.200.0.120"
change to
address: "10.200.0.122"
healthzBindAddress: "10.200.0.122"

vim /etc/kubernetes/kube-proxy.yaml
bindAddress: 10.200.0.120
healthzBindAddress: 10.200.0.120:10256
metricsBindAddress: 10.200.0.120:10249
hostnameOverride: k8s-01
change to
bindAddress: 10.200.0.122
healthzBindAddress: 10.200.0.122:10256
metricsBindAddress: 10.200.0.122:10249
hostnameOverride: k8s-03

vim /etc/systemd/system/kubelet.service
--hostname-override=k8s-01
change to
--hostname-override=k8s-03

# Run on both k8s-02 and k8s-03
mkdir -p /data/{kubelet,kube-proxy}
systemctl daemon-reload
systemctl start kubelet.service kube-proxy.service
systemctl enable kubelet.service kube-proxy.service
  • Run on k8s-01
# Approve the certificate requests
kubectl get csr
NAME        AGE   REQUESTOR                 CONDITION
csr-gzvkk   5s    system:bootstrap:9cdfo9   Pending
csr-r74kl   88s   system:bootstrap:zuqjgo   Pending

kubectl certificate approve csr-gzvkk csr-r74kl
certificatesigningrequest.certificates.k8s.io/csr-gzvkk approved
certificatesigningrequest.certificates.k8s.io/csr-r74kl approved

kubectl get csr
NAME        AGE     REQUESTOR                 CONDITION
csr-c88nm   7s      system:node:k8s-03        Pending
csr-g4f6g   7s      system:node:k8s-02        Pending
csr-gzvkk   50s     system:bootstrap:9cdfo9   Approved,Issued
csr-r74kl   2m13s   system:bootstrap:zuqjgo   Approved,Issued

kubectl certificate approve csr-c88nm csr-g4f6g
certificatesigningrequest.certificates.k8s.io/csr-c88nm approved
certificatesigningrequest.certificates.k8s.io/csr-g4f6g approved

# Check the node status
kubectl get nodes
NAME     STATUS   ROLES    AGE    VERSION
k8s-01   Ready    <none>   150m   v1.17.16
k8s-02   Ready    <none>   2m6s   v1.17.16
k8s-03   Ready    <none>   2m7s   v1.17.16

At this point the 3 Master nodes are also serving as Node nodes; next, we add k8s-04 and k8s-05 as Node nodes as well.

  • Run on k8s-04 and k8s-05
# Create the required directories
mkdir -p /etc/kubernetes/cert/
mkdir -p /data/{kubelet,kube-proxy}
  • Run on k8s-01
# Generate the bootstrap config for the k8s-04 node
cd /etc/kubernetes

KUBE_APISERVER="https://127.0.0.1:8443"

BOOTSTRAP_TOKEN=$(kubeadm token create --description kubelet-bootstrap-token --groups system:bootstrappers:k8s-04 --kubeconfig /root/.kube/config)

kubectl config set-cluster kubernetes --certificate-authority=/etc/kubernetes/cert/ca.pem --embed-certs=true --server=${KUBE_APISERVER} --kubeconfig=kubelet-bootstrap-k8s-04.kubeconfig

kubectl config set-credentials kubelet-bootstrap --token=${BOOTSTRAP_TOKEN} --kubeconfig=kubelet-bootstrap-k8s-04.kubeconfig

kubectl config set-context default --cluster=kubernetes --user=kubelet-bootstrap --kubeconfig=kubelet-bootstrap-k8s-04.kubeconfig

kubectl config use-context default --kubeconfig=kubelet-bootstrap-k8s-04.kubeconfig

# Generate the bootstrap config for the k8s-05 node
cd /etc/kubernetes

KUBE_APISERVER="https://127.0.0.1:8443"

BOOTSTRAP_TOKEN=$(kubeadm token create --description kubelet-bootstrap-token --groups system:bootstrappers:k8s-05 --kubeconfig /root/.kube/config)

kubectl config set-cluster kubernetes --certificate-authority=/etc/kubernetes/cert/ca.pem --embed-certs=true --server=${KUBE_APISERVER} --kubeconfig=kubelet-bootstrap-k8s-05.kubeconfig

kubectl config set-credentials kubelet-bootstrap --token=${BOOTSTRAP_TOKEN} --kubeconfig=kubelet-bootstrap-k8s-05.kubeconfig

kubectl config set-context default --cluster=kubernetes --user=kubelet-bootstrap --kubeconfig=kubelet-bootstrap-k8s-05.kubeconfig

kubectl config use-context default --kubeconfig=kubelet-bootstrap-k8s-05.kubeconfig

# Copy the files to k8s-04 and k8s-05
scp -r /etc/kubernetes/cert/{ca.pem,ca-key.pem,flanneld.pem,flanneld-key.pem} 10.200.0.123:/etc/kubernetes/cert/
scp -r /etc/kubernetes/{kubelet-config.yaml,kube-proxy.kubeconfig,kube-proxy.yaml,kubelet-bootstrap-k8s-04.kubeconfig} 10.200.0.123:/etc/kubernetes/
scp -r /etc/docker/daemon.json 10.200.0.123:/etc/docker/
scp -r /etc/systemd/system/{flanneld.service,kubelet.service,kube-proxy.service} 10.200.0.123:/etc/systemd/system/
scp -r /usr/bin/{flanneld,mk-docker-opts.sh,kubelet,kube-proxy} 10.200.0.123:/usr/bin/
scp -r /usr/local/src/nginx-1.24.0.tar.gz 10.200.0.123:/usr/local/src/

scp -r /etc/kubernetes/cert/{ca.pem,ca-key.pem,flanneld.pem,flanneld-key.pem} 10.200.0.124:/etc/kubernetes/cert/
scp -r /etc/kubernetes/{kubelet-config.yaml,kube-proxy.kubeconfig,kube-proxy.yaml,kubelet-bootstrap-k8s-05.kubeconfig} 10.200.0.124:/etc/kubernetes/
scp -r /etc/docker/daemon.json 10.200.0.124:/etc/docker/
scp -r /etc/systemd/system/{flanneld.service,kubelet.service,kube-proxy.service} 10.200.0.124:/etc/systemd/system/
scp -r /usr/bin/{flanneld,mk-docker-opts.sh,kubelet,kube-proxy} 10.200.0.124:/usr/bin/
scp -r /usr/local/src/nginx-1.24.0.tar.gz 10.200.0.124:/usr/local/src/
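
这些 scp 命令同样可以写成循环统一分发(示意,假设到两个新节点的免密 ssh 已配置好):
# 按 "IP 节点名" 成对循环分发
for item in "10.200.0.123 k8s-04" "10.200.0.124 k8s-05"; do
  ip=${item%% *}; node=${item##* }
  scp -r /etc/kubernetes/cert/{ca.pem,ca-key.pem,flanneld.pem,flanneld-key.pem} ${ip}:/etc/kubernetes/cert/
  scp -r /etc/kubernetes/{kubelet-config.yaml,kube-proxy.kubeconfig,kube-proxy.yaml,kubelet-bootstrap-${node}.kubeconfig} ${ip}:/etc/kubernetes/
  scp -r /etc/docker/daemon.json ${ip}:/etc/docker/
  scp -r /etc/systemd/system/{flanneld.service,kubelet.service,kube-proxy.service} ${ip}:/etc/systemd/system/
  scp -r /usr/bin/{flanneld,mk-docker-opts.sh,kubelet,kube-proxy} ${ip}:/usr/bin/
  scp -r /usr/local/src/nginx-1.24.0.tar.gz ${ip}:/usr/local/src/
done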
  • k8s-04 和 k8s-05 上执行
# k8s-04 执行
vim /etc/systemd/system/flanneld.service
此处注意 -iface= 参数,需要改为实际的网卡名称

vim /usr/lib/systemd/system/docker.service
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
改为
EnvironmentFile=/run/flannel/docker
ExecStart=/usr/bin/dockerd  $DOCKER_NETWORK_OPTIONS

vim /etc/systemd/system/kubelet.service
--hostname-override=k8s-01
改为
--hostname-override=k8s-04

vim /etc/kubernetes/kubelet-config.yaml
address: "10.200.0.120"
healthzBindAddress: "10.200.0.120"
改为
address: "10.200.0.123"
healthzBindAddress: "10.200.0.123"

vim /etc/kubernetes/kube-proxy.yaml
bindAddress: 10.200.0.120
healthzBindAddress: 10.200.0.120:10256
metricsBindAddress: 10.200.0.120:10249
hostnameOverride: k8s-01
改为
bindAddress: 10.200.0.123
healthzBindAddress: 10.200.0.123:10256
metricsBindAddress: 10.200.0.123:10249
hostnameOverride: k8s-04

mv /etc/kubernetes/kubelet-bootstrap-k8s-04.kubeconfig /etc/kubernetes/kubelet-bootstrap.kubeconfig

# k8s-05 执行
vim /etc/systemd/system/flanneld.service
此处注意 -iface= 参数,需要改为实际的网卡名称

vim /usr/lib/systemd/system/docker.service
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
改为
EnvironmentFile=/run/flannel/docker
ExecStart=/usr/bin/dockerd  $DOCKER_NETWORK_OPTIONS

vim /etc/systemd/system/kubelet.service
--hostname-override=k8s-01
改为
--hostname-override=k8s-05

vim /etc/kubernetes/kubelet-config.yaml
address: "10.200.0.120"
healthzBindAddress: "10.200.0.120"
改为
address: "10.200.0.124"
healthzBindAddress: "10.200.0.124"

vim /etc/kubernetes/kube-proxy.yaml
bindAddress: 10.200.0.120
healthzBindAddress: 10.200.0.120:10256
metricsBindAddress: 10.200.0.120:10249
hostnameOverride: k8s-01
改为
bindAddress: 10.200.0.124
healthzBindAddress: 10.200.0.124:10256
metricsBindAddress: 10.200.0.124:10249
hostnameOverride: k8s-05

mv /etc/kubernetes/kubelet-bootstrap-k8s-05.kubeconfig /etc/kubernetes/kubelet-bootstrap.kubeconfig
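
上面这几处地址和主机名的替换,也可以用 sed 一次完成(示意,以 k8s-04 为例,前提是这些文件里 10.200.0.120 和 k8s-01 只出现在需要替换的位置;flanneld.service 的 -iface 与 docker.service 的改动仍需按前文手工调整):
NODE_IP=10.200.0.123
NODE_NAME=k8s-04
# 替换 kubelet / kube-proxy 配置中的监听地址
sed -i "s/10.200.0.120/${NODE_IP}/g" /etc/kubernetes/kubelet-config.yaml /etc/kubernetes/kube-proxy.yaml
# 替换主机名覆盖参数
sed -i "s/k8s-01/${NODE_NAME}/g" /etc/kubernetes/kube-proxy.yaml /etc/systemd/system/kubelet.service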

# 部署 nginx
cd /usr/local/src/ && tar xvf nginx-1.24.0.tar.gz && cd nginx-1.24.0

./configure --with-stream --without-http --prefix=/usr/local/nginx --without-http_uwsgi_module --without-http_scgi_module --without-http_fastcgi_module

make && make install

echo '' > /usr/local/nginx/conf/nginx.conf
vim /usr/local/nginx/conf/nginx.conf
worker_processes  1;

events {
    worker_connections  10240;
}

stream {
    upstream backend {
        hash $remote_addr consistent;
        server 10.200.0.120:6443  max_fails=3 fail_timeout=30s;
        server 10.200.0.121:6443  max_fails=3 fail_timeout=30s;
        server 10.200.0.122:6443  max_fails=3 fail_timeout=30s;
    }
    server {
        listen  8443;
        proxy_connect_timeout 1s;
        proxy_pass backend;
    }
}

vim /etc/systemd/system/nginx.service
[Unit]
Description=kube-apiserver nginx proxy
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=forking
ExecStart=/usr/local/nginx/sbin/nginx -c /usr/local/nginx/conf/nginx.conf
PrivateTmp=true
Restart=always
RestartSec=5
StartLimitInterval=0
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

systemctl start nginx && systemctl enable nginx
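
启动后可以校验 nginx 配置并确认 8443 端口已经监听(示意):
/usr/local/nginx/sbin/nginx -t
ss -tnlp | grep 8443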

# k8s-04 和 k8s-05 执行
systemctl daemon-reload
systemctl start flanneld.service && systemctl enable flanneld.service
systemctl restart docker && systemctl enable docker
systemctl start kubelet.service kube-proxy.service
systemctl enable kubelet.service kube-proxy.service
  • k8s-01 上执行
# 授权证书
kubectl get csr
NAME        AGE   REQUESTOR                 CONDITION
csr-jpcbb   12s   system:bootstrap:dbvt9f   Pending
csr-tdcmb   18s   system:bootstrap:oo39ch   Pending

kubectl certificate approve csr-jpcbb csr-tdcmb
certificatesigningrequest.certificates.k8s.io/csr-jpcbb approved
certificatesigningrequest.certificates.k8s.io/csr-tdcmb approved

kubectl get csr
NAME        AGE    REQUESTOR                 CONDITION
csr-f9r5v   18s    system:node:k8s-05        Pending
csr-xvj9n   17s    system:node:k8s-04        Pending

kubectl certificate approve csr-f9r5v csr-xvj9n
certificatesigningrequest.certificates.k8s.io/csr-f9r5v approved
certificatesigningrequest.certificates.k8s.io/csr-xvj9n approved

# 检查节点状态
kubectl get nodes
NAME     STATUS   ROLES    AGE     VERSION
k8s-01   Ready    <none>   3h14m   v1.17.16
k8s-02   Ready    <none>   45m     v1.17.16
k8s-03   Ready    <none>   45m     v1.17.16
k8s-04   Ready    <none>   68s     v1.17.16
k8s-05   Ready    <none>   69s     v1.17.16

至此,整个 K8S 集群部署完成,共 5 个节点,3 Master + 2 Node 架构!


部署 CoreDNS

CoreDNS 用于集群内部 Service 名称解析。

  • k8s-01 上执行
# 编写配置文件,注意根据自己的环境修改注释中标注的位置
cd /etc/kubernetes/
cat > coredns.yaml << "EOF"
# 创建一个名为 coredns 的 ServiceAccount,在 kube-system 命名空间内
apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
---
# 创建一个 ClusterRole,提供 coredns 需要的权限,如列出和监视 endpoints、services、pods 等资源
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:coredns
rules:
  - apiGroups:
      - ""
    resources:
      - endpoints
      - services
      - pods
      - namespaces
    verbs:
      - list
      - watch
  - apiGroups:
      - discovery.k8s.io
    resources:
      - endpointslices
    verbs:
      - list
      - watch
---
# 将前面创建的 ClusterRole 绑定到 coredns ServiceAccount 上
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
  - kind: ServiceAccount
    name: coredns
    namespace: kube-system
---
# 创建一个 ConfigMap,存放 CoreDNS 的配置文件 Corefile
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health {
          lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
          fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        forward . /etc/resolv.conf {
          max_concurrent 1000
        }
        cache 30
        loop
        reload
        loadbalance
    }
---
# 创建一个 Deployment,部署 CoreDNS 应用,包括指定镜像、资源限制、安全策略等配置
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/name: "CoreDNS"
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
    spec:
      priorityClassName: system-cluster-critical
      serviceAccountName: coredns
      tolerations:
        - key: "CriticalAddonsOnly"
          operator: "Exists"
      nodeSelector:
        kubernetes.io/os: linux
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                labelSelector:
                  matchExpressions:
                    - key: k8s-app
                      operator: In
                      values: ["kube-dns"]
                topologyKey: kubernetes.io/hostname
      containers:
        - name: coredns
          image: coredns/coredns:1.8.4
          imagePullPolicy: IfNotPresent
          resources:
            limits:
              memory: 170Mi
            requests:
              cpu: 100m
              memory: 70Mi
          args: ["-conf", "/etc/coredns/Corefile"]
          volumeMounts:
            - name: config-volume
              mountPath: /etc/coredns
              readOnly: true
          ports:
            - containerPort: 53
              name: dns
              protocol: UDP
            - containerPort: 53
              name: dns-tcp
              protocol: TCP
            - containerPort: 9153
              name: metrics
              protocol: TCP
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              add:
                - NET_BIND_SERVICE
              drop:
                - all
            readOnlyRootFilesystem: true
          livenessProbe:
            httpGet:
              path: /health
              port: 8080
              scheme: HTTP
            initialDelaySeconds: 60
            timeoutSeconds: 5
            successThreshold: 1
            failureThreshold: 5
          readinessProbe:
            httpGet:
              path: /ready
              port: 8181
              scheme: HTTP
      dnsPolicy: Default
      volumes:
        - name: config-volume
          configMap:
            name: coredns
            items:
              - key: Corefile
                path: Corefile
---
# 创建一个 Service,定义如何访问 CoreDNS,包括服务的端口、协议等信息
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  annotations:
    prometheus.io/port: "9153"
    prometheus.io/scrape: "true"
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: kube-dns
  # 这里修改为实际的 clusterDNS 地址,该地址已在 kubelet 配置文件 kubelet-config.yaml 中定义
  clusterIP: 10.96.0.2
  ports:
    - name: dns
      port: 53
      protocol: UDP
    - name: dns-tcp
      port: 53
      protocol: TCP
    - name: metrics
      port: 9153
      protocol: TCP
EOF

# 部署
kubectl apply -f coredns.yaml

# 查看状态
kubectl get pods -n kube-system
NAME                      READY   STATUS    RESTARTS   AGE
coredns-cfc5dfc45-sg28m   1/1     Running   0          48s
coredns-cfc5dfc45-sqvpw   1/1     Running   0          48s
coredns-cfc5dfc45-zncxz   1/1     Running   0          48s
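
也可以顺带确认 kube-dns Service 和对应的 Endpoints 已经就绪(示意):
kubectl get svc kube-dns -n kube-system
kubectl get endpoints kube-dns -n kube-system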

# 测试一下
kubectl run -it --rm dns-test --image=busybox:1.28.4 sh
/ # nslookup kubernetes
Server:    10.96.0.2
Address 1: 10.96.0.2 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local

至此,CoreDNS 部署完成,测试 DNS 解析也没问题!


Tips

kubectl logs 提示 Error from server (Forbidden)

  • k8s-01 上执行
kubectl create clusterrolebinding kubernetes --clusterrole=cluster-admin --user=kubernetes
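
cluster-admin 权限偏大,如果只是为了解决 kubectl logs/exec 的 Forbidden,也可以改为只授予内置的 kubelet API 访问角色(示意,假设 kube-apiserver 访问 kubelet 所用客户端证书的 CN 就是上面的 kubernetes 用户):
kubectl create clusterrolebinding kube-apiserver:kubelet-apis --clusterrole=system:kubelet-api-admin --user=kubernetes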

etcdctl 查看 pod 子网信息

  • k8s-01 上执行
etcdctl --ca-file=/etc/kubernetes/cert/ca.pem --cert-file=/etc/kubernetes/cert/etcd.pem --key-file=/etc/kubernetes/cert/etcd-key.pem --endpoints="https://10.200.0.120:2379,https://10.200.0.121:2379,https://10.200.0.122:2379" get /kubernetes/network/config

etcdctl 查看 etcd 集群健康状况

etcdctl --ca-file=/etc/kubernetes/cert/ca.pem --cert-file=/etc/kubernetes/cert/etcd.pem --key-file=/etc/kubernetes/cert/etcd-key.pem --endpoints="https://10.200.0.120:2379,https://10.200.0.121:2379,https://10.200.0.122:2379" cluster-health

kubectl top 问题解决

# 问题
kubectl top nodes
Error from server (NotFound): the server could not find the requested resource (get services http:heapster:)

# 解决
cd /usr/local/src/ && wget https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.5.2/components.yaml

mv components.yaml metrics-server_v0.5.2.yaml

vim metrics-server_v0.5.2.yaml 
# image: k8s.gcr.io/metrics-server/metrics-server:v0.5.2
# 修改镜像为 Docker Hub 上的地址
image: bitnami/metrics-server:0.5.2

kubectl apply -f metrics-server_v0.5.2.yaml

# 看效果
kubectl top nodes
NAME     CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
k8s-01   166m         4%     932Mi           11%       
k8s-02   119m         2%     721Mi           9%        
k8s-03   118m         2%     706Mi           9%        
k8s-04   56m          1%     314Mi           3%        
k8s-05   40m          1%     270Mi           3%

etcd 数据备份与恢复

# 备份
ETCDCTL_API=3 etcdctl --debug \
  --endpoints=https://10.200.0.120:2379 \
  --cacert=/etc/kubernetes/cert/ca.pem \
  --cert=/etc/kubernetes/cert/etcd.pem \
  --key=/etc/kubernetes/cert/etcd-key.pem \
  snapshot save [备份文件名]
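
# 备份完成后可以检查快照的完整性(以下命令为可选的示意)
ETCDCTL_API=3 etcdctl snapshot status [备份文件名] --write-out=table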

# 恢复
# --name、--initial-cluster、--initial-cluster-token、--initial-advertise-peer-urls 参考启动文件的配置
# 新的数据目录不需要手工创建,会在 etcd 启动时自动创建
# 先把 etcd 集群停止运行 systemctl stop etcd.service
# 然后执行恢复命令
ETCDCTL_API=3 etcdctl snapshot restore [备份文件名] \
  --name=k8s-01 \
  --cacert=/etc/kubernetes/cert/ca.pem \
  --cert=/etc/kubernetes/cert/etcd.pem \
  --key=/etc/kubernetes/cert/etcd-key.pem \
  --data-dir=[新的数据目录] \
  --initial-cluster=k8s-01=https://10.200.0.120:2380,k8s-02=https://10.200.0.121:2380,k8s-03=https://10.200.0.122:2380 \
  --initial-cluster-token=etcd-cluster \
  --initial-advertise-peer-urls=https://10.200.0.120:2380
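
# 其余两个 etcd 节点同理:先把备份文件拷贝过去,再把 --name 和 --initial-advertise-peer-urls 换成本机的
# 以 k8s-02 为例(示意)
ETCDCTL_API=3 etcdctl snapshot restore [备份文件名] \
  --name=k8s-02 \
  --cacert=/etc/kubernetes/cert/ca.pem \
  --cert=/etc/kubernetes/cert/etcd.pem \
  --key=/etc/kubernetes/cert/etcd-key.pem \
  --data-dir=[新的数据目录] \
  --initial-cluster=k8s-01=https://10.200.0.120:2380,k8s-02=https://10.200.0.121:2380,k8s-03=https://10.200.0.122:2380 \
  --initial-cluster-token=etcd-cluster \
  --initial-advertise-peer-urls=https://10.200.0.121:2380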
 
# 修改启动文件
vim /etc/systemd/system/etcd.service
--data-dir=[新的数据目录]

# 启动 etcd 集群
systemctl daemon-reload
systemctl start etcd.service

# 重启其他服务
systemctl restart kube-apiserver.service kube-controller-manager.service kube-scheduler.service kubelet.service kube-proxy.service

文章作者: Runfa Li
版权声明: 本站所有文章除特别声明外,均采用 CC BY-NC-SA 4.0 许可协议。转载请注明来自 Linux 小白鼠