Setting Up a Kubernetes v1.16.4 High-Availability Cluster with kubeadm

Published: 2020-01-12 11:57:05

This post uses kubeadm to build a Kubernetes v1.16.4 high-availability cluster, with the Calico network plugin and kube-proxy running in IPVS mode.

Environment Preparation

Six CentOS virtual machines with the following specs:

Host   IP          Role     OS Version    Spec
m1     10.0.40.1   Master   CentOS 7.7    2C4G
m2     10.0.40.2   Master   CentOS 7.7    2C4G
m3     10.0.40.3   Master   CentOS 7.7    2C4G
w1     10.0.40.4   Worker   CentOS 7.7    2C4G
w2     10.0.40.5   Worker   CentOS 7.7    2C4G
w3     10.0.40.6   Worker   CentOS 7.7    2C4G

The following steps must be performed on every machine.

Configure hosts

$ cat >> /etc/hosts <<EOF
10.0.40.1 m1
10.0.40.2 m2
10.0.40.3 m3
10.0.40.4 w1
10.0.40.5 w2
10.0.40.6 w3
EOF

Disable firewalld and SELinux

$ systemctl stop firewalld
$ systemctl disable firewalld
$ vi /etc/selinux/config
SELINUX=disabled # change this line to disabled
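The edit above only takes effect after a reboot. To also turn SELinux off for the current session (a small addition of mine, not in the original steps):

$ setenforce 0   # switch to permissive mode immediately
$ getenforce     # should now print Permissive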

Reset iptables

$ iptables -F && iptables -X && iptables -F -t nat && iptables -X -t nat && iptables -P FORWARD ACCEPT

Disable swap

$ swapoff -a
$ sed -i '/swap/s/^\(.*\)$/#\1/g' /etc/fstab
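A quick sanity check (my own addition) that swap is really gone:

$ free -m | grep -i swap   # the Swap line should read 0 0 0
$ cat /proc/swaps          # should list no active swap devices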

Disable dnsmasq

$ systemctl stop dnsmasq && systemctl disable dnsmasq

Update packages

$ yum update

Set kernel parameters

$ cat > /etc/sysctl.d/k8s.conf <<EOF
# ip_nonlocal_bind is only needed on the master nodes that run haproxy;
# without it haproxy may fail to start because it binds to the VIP
net.ipv4.ip_nonlocal_bind = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
vm.swappiness = 0
vm.overcommit_memory = 1
vm.panic_on_oom = 0
fs.inotify.max_user_watches = 89100
EOF

Run the following to apply the settings:

$ modprobe br_netfilter 
$ sysctl -p /etc/sysctl.d/k8s.conf
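Note that modprobe by itself does not persist across reboots. One way to make br_netfilter load automatically at boot (my own addition, using systemd's modules-load mechanism) is:

$ cat > /etc/modules-load.d/br_netfilter.conf <<EOF
br_netfilter
EOF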

Install IPVS

$ cat > /etc/sysconfig/modules/ipvs.modules <<EOF 
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
$ chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4

The script above creates /etc/sysconfig/modules/ipvs.modules so the IPVS kernel modules are loaded automatically after a reboot; the lsmod | grep -e ip_vs -e nf_conntrack_ipv4 command verifies that the modules loaded correctly.

Install the ipset package, plus ipvsadm to make inspecting IPVS rules easier

$ yum install -y ipset ipvsadm

Install Docker

$ yum remove -y docker* container-selinux # remove any old Docker packages
$ yum install -y yum-utils device-mapper-persistent-data lvm2
$ yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
$ yum list docker-ce --showduplicates | sort -r
* updates: mirrors.aliyun.com
docker-ce.x86_64 3:19.03.5-3.el7 docker-ce-stable
docker-ce.x86_64 3:19.03.5-3.el7 @docker-ce-stable
docker-ce.x86_64 3:19.03.4-3.el7 docker-ce-stable
......
docker-ce.x86_64 3:18.09.9-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.8-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.7-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.6-3.el7 docker-ce-stable
......

For Kubernetes v1.16.4 the latest officially recommended Docker version is 18.09, so that is the one we install:

$ yum install docker-ce-18.09.9-3.el7 -y

Configure the Docker registry mirror and storage directory

$ cat > /etc/docker/daemon.json <<EOF 
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "registry-mirrors": ["http://hub-mirror.c.163.com"] ,
  "graph": "/data/docker"
}
EOF

This sets up the 163 registry mirror for faster pulls; since the /data partition on my machines has the most space, Docker's data directory is moved to /data/docker.

Start Docker and enable it at boot

$ systemctl start docker 
$ systemctl enable docker
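To confirm the daemon picked up daemon.json, docker info can be checked (output abbreviated and illustrative):

$ docker info | grep -iE 'cgroup driver|root dir'
Cgroup Driver: systemd
Docker Root Dir: /data/docker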

Install kubeadm, using the Aliyun mirror repository

$ cat > /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
        http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

$ yum install -y kubeadm-1.16.4-0 kubelet-1.16.4-0 kubectl-1.16.4-0 --disableexcludes=kubernetes
$ systemctl enable kubelet && systemctl start kubelet
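At this point kubelet will keep restarting until kubeadm init hands it a configuration; that is expected. The installed versions can be verified with, for example:

$ kubeadm version -o short
v1.16.4
$ kubelet --version
Kubernetes v1.16.4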

Pick any two of the master nodes and install HAProxy and keepalived on them

$ yum install keepalived haproxy -y

Primary keepalived configuration. Two VIPs are defined here, one for the apiserver and one for ingress; a single VIP also works.

$ cat /etc/keepalived/keepalived.conf 
! Configuration File for keepalived


vrrp_script check_haproxy {
        script "/etc/keepalived/check_haproxy.sh"
        interval 3
        weight -20
}


vrrp_instance K8S {
    state MASTER
    interface ens192
    virtual_router_id 50
    priority 200
    advert_int 5
    virtual_ipaddress {
        10.0.40.50
        10.0.40.60
    }
    track_script {
        check_haproxy
    }
}

Backup keepalived configuration. The main differences are a lower initial priority than the primary and state BACKUP. Important: virtual_router_id must be identical on both nodes.

$ cat /etc/keepalived/keepalived.conf 
! Configuration File for keepalived


vrrp_script check_haproxy {
        script "/etc/keepalived/check_haproxy.sh"
        interval 3
        weight -20
}


vrrp_instance K8S {
    state BACKUP
    interface ens192
    virtual_router_id 50
    priority 190
    advert_int 5
    virtual_ipaddress {
        10.0.40.50
        10.0.40.60
    }
    track_script {
        check_haproxy
    }
}

Health check script

$ cat /etc/keepalived/check_haproxy.sh 
#!/bin/bash
active_status=`netstat -lntp|grep haproxy|wc -l`
if [ $active_status -gt 0 ]; then
    exit 0
else
    exit 1
fi
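keepalived invokes this script directly, so it must be executable:

$ chmod +x /etc/keepalived/check_haproxy.sh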

Configure HAProxy

$ cat /etc/haproxy/haproxy.cfg 
global
    …
defaults
    mode    tcp  # change to layer-4 (TCP) proxying
    …

frontend main 10.0.40.50:16443 # VIP and port; note: the port must not be the apiserver's 6443, since haproxy runs on the master nodes
    default_backend            k8s-master

backend k8s-master
    mode        tcp
    balance     roundrobin
    server  m1 10.0.40.1:6443 check # the three master nodes
    server  m2 10.0.40.2:6443 check
    server  m3 10.0.40.3:6443 check
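With both services configured, start and enable them on the two nodes, then confirm the VIP has come up on the primary (a verification step of my own; ens192 matches the interface in the keepalived config above):

$ systemctl start haproxy keepalived
$ systemctl enable haproxy keepalived
$ ip addr show ens192 | grep 10.0.40.50   # the VIP should appear on the MASTER node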

Initialize the Cluster

Perform the following on any one of the master nodes; here I use m1 (10.0.40.1).

First, export kubeadm's default configuration:

$ kubeadm config print init-defaults > kubeadm.yaml

Then edit it:

$ cat kubeadm.yaml
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 10.0.40.1 # change to this node's address
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: m1
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers # switch the image repository to Aliyun
kind: ClusterConfiguration
kubernetesVersion: v1.16.4 # the version to install
controlPlaneEndpoint: 10.0.40.50:16443   # add this line with the VIP address; without it you end up with a single-master cluster
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.101.0.0/16  # change the service CIDR
  podSubnet: 10.100.0.0/16    # change the pod CIDR
scheduler: {}
---  # add this section to run kube-proxy in ipvs mode
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
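Optionally, the control-plane images can be pulled ahead of time with the same config file (kubeadm itself suggests this in the init output):

$ kubeadm config images pull --config kubeadm.yaml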

Initialize the first node

$ kubeadm init --config kubeadm.yaml --upload-certs
[init] Using Kubernetes version: v1.16.4
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
......
To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
......
You can now join any number of the control-plane node running the following command on each as root:

kubeadm join 10.0.40.50:16443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:780bc30c556cc39fa8f7ac8e92c7032e2a6711c8cd34e1d53678c256d9ecd89e \
--control-plane --certificate-key 5f3e42fb064cc3aac7a7d25ea02bf58f3b7a537aa6b9f5ae20ff81f1e90551fc
......
Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.0.40.50:16443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:780bc30c556cc39fa8f7ac8e92c7032e2a6711c8cd34e1d53678c256d9ecd89e

Copy the kubeconfig as prompted

$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
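A quick check (my own) that kubectl reaches the apiserver through the VIP (output abbreviated):

$ kubectl cluster-info
Kubernetes master is running at https://10.0.40.50:16443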

Make a note of the two kubeadm join commands at the end of the output: the first adds master (control-plane) nodes, the second adds worker nodes. If you lose them, they can be regenerated with kubeadm:

 # regenerate the join commands
$ kubeadm init phase upload-certs --upload-certs
I0112 11:36:44.869100 7566 version.go:251] remote version is much newer: v1.17.0; falling back to: stable-1.16
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
cd42efb023bf5483387cb81f5785b71c55407491f423d6bd94cd45602f5b3255 # note this key
$ kubeadm token create --print-join-command
kubeadm join 10.0.40.50:16443 --token tisycu.u8fj9y8odeq86s5j --discovery-token-ca-cert-hash sha256:780bc30c556cc39fa8f7ac8e92c7032e2a6711c8cd34e1d53678c256d9ecd89e # this prints the worker join command; to join another master, append --control-plane --certificate-key <the key obtained above>, as shown below
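Putting the two together, a regenerated control-plane join looks like this (placeholders only, substitute the token, hash, and key printed above):

$ kubeadm join 10.0.40.50:16443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash> \
    --control-plane --certificate-key <certificate-key>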

Add Nodes

Run the appropriate join command on each of the remaining master and worker nodes. After a master joins, copy the kubeconfig file as prompted.

Once that is done, check the node status:

$ kubectl get nodes
NAME   STATUS     ROLES    AGE     VERSION
m1     NotReady   master   12m     v1.16.4
m2     NotReady   master   2m15s   v1.16.4
m3     NotReady   master   2m22s   v1.16.4
w1     NotReady   <none>   35s     v1.16.4
w2     NotReady   <none>   75s     v1.16.4
w3     NotReady   <none>   74s     v1.16.4

All of the nodes have joined, but they are NotReady because the network plugin has not been installed yet.

Install the Network Plugin (Calico)

First download the manifest from the official site:

$ wget https://docs.projectcalico.org/v3.8/manifests/calico.yaml

Modify a few settings:

$ vi calico.yaml

spec:
  containers:
  - env:
    - name: CALICO_IPV4POOL_IPIP
      value: "off"                   # change to off to disable IPIP mode
    - name: FELIX_IPINIPENABLED      # add this to disable the Felix IPIP tunnel device
      value: "false"
    - name: DATASTORE_TYPE
      value: kubernetes
    - name: IP_AUTODETECTION_METHOD  # add this env var to the calico-node DaemonSet
      value: interface=ens192        # pin to the internal NIC; value: can-reach=www.baidu.com also works
    - name: WAIT_FOR_DATASTORE
      value: "true"
    …
    - name: CALICO_IPV4POOL_CIDR
      value: "10.100.0.0/16"         # must match the pod CIDR (podSubnet) configured earlier

Apply the manifest:

$ kubectl apply -f calico.yaml
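The calico-node pods take a little while to start on every node; you can watch them come up with:

$ kubectl get pods -n kube-system -w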

Check the cluster again after a few minutes:

$ kubectl get nodes
NAME   STATUS   ROLES    AGE   VERSION
m1     Ready    master   29m   v1.16.4
m2     Ready    master   17m   v1.16.4
m3     Ready    master   17m   v1.16.4
w1     Ready    <none>   16m   v1.16.4
w2     Ready    <none>   16m   v1.16.4
w3     Ready    <none>   16m   v1.16.4
$ kubectl get pods -n kube-system
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-5c45f5bd9f-zgmb2   1/1     Running   0          3m38s
calico-node-8lbvz                          1/1     Running   0          3m39s
calico-node-9hs6s                          1/1     Running   0          3m39s
calico-node-ddzdm                          1/1     Running   0          3m39s
calico-node-j97nv                          1/1     Running   0          2m29s
calico-node-jfd8c                          1/1     Running   0          2m30s
calico-node-m25lk                          1/1     Running   0          2m4s
coredns-58cc8c89f4-clg9z                   1/1     Running   0          30m
coredns-58cc8c89f4-w9hc5                   1/1     Running   0          30m
etcd-m1                                    1/1     Running   0          29m
etcd-m2                                    1/1     Running   0          9m31s
etcd-m3                                    1/1     Running   0          2m3s
kube-apiserver-m1                          1/1     Running   0          29m
kube-apiserver-m2                          1/1     Running   0          9m31s
kube-apiserver-m3                          1/1     Running   0          2m4s
kube-controller-manager-m1                 1/1     Running   1          29m
kube-controller-manager-m2                 1/1     Running   0          9m31s
kube-controller-manager-m3                 1/1     Running   0          2m3s
kube-proxy-hm6k2                           1/1     Running   0          2m4s
kube-proxy-lrn2f                           1/1     Running   0          2m29s
kube-proxy-m5smk                           1/1     Running   0          9m31s
kube-proxy-n7ch4                           1/1     Running   0          2m30s
kube-proxy-r76xn                           1/1     Running   0          9m45s
kube-proxy-zh77w                           1/1     Running   0          30m
kube-scheduler-m1                          1/1     Running   1          29m
kube-scheduler-m2                          1/1     Running   0          9m31s
kube-scheduler-m3                          1/1     Running   0          2m3s

All of the nodes are now Ready and the pods are running.

Check which proxy mode kube-proxy is using:

$ kubectl logs kube-proxy-hm6k2 -n kube-system 
I0111 19:08:41.357170       1 node.go:135] Successfully retrieved node IP: 10.0.40.1
I0111 19:08:41.357701       1 server_others.go:176] Using ipvs Proxier.
W0111 19:08:41.358197       1 proxier.go:420] IPVS scheduler not specified, use rr by default
I0111 19:08:41.358739       1 server.go:529] Version: v1.16.0

Seeing "Using ipvs Proxier." confirms that kube-proxy is running in IPVS mode.
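Since ipvsadm was installed earlier, the IPVS virtual servers can also be inspected directly on any node. The output below is trimmed and illustrative; 10.101.0.1:443 is the cluster's kubernetes service IP from the serviceSubnet configured above, load-balanced to the three apiservers:

$ ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.101.0.1:443 rr
  -> 10.0.40.1:6443               Masq    1      0          0
  -> 10.0.40.2:6443               Masq    1      0          0
  -> 10.0.40.3:6443               Masq    1      0          0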
