Building a Highly Available Kubernetes 1.19.3 Cluster from Binaries (Part 3): Deploying controller-manager and scheduler

Published: 2020-11-02 22:34:29

This article walks through deploying the controller-manager and scheduler components from binaries.

Deploying kubectl (master nodes)

kubectl is the command-line management tool for a Kubernetes cluster. By default it reads the kube-apiserver address, certificates, user name, and other connection details from ~/.kube/config.

Generate the admin certificate and private key

kubectl talks to the apiserver over its HTTPS secure port, and the apiserver authenticates and authorizes the certificate it presents.
As the cluster's management tool, kubectl needs the highest level of privilege, so here we create an admin certificate whose O field is system:masters, a group the apiserver binds to the cluster-admin role by default.

# Create a directory for the admin certificate
cd target && mkdir admin && cd admin
cat > admin-csr.json <<EOF
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Hangzhou",
      "L": "Hangzhou",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
EOF

# Generate the certificate and private key, signed by the CA
cfssl gencert -ca=../ca.pem -ca-key=../ca-key.pem -config=../ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin
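The CN and O fields of the CSR above become the Kubernetes user name and group once the certificate is used for authentication. As a sanity check you can inspect a certificate's subject with openssl; the sketch below builds a throwaway self-signed certificate with the same subject (a stand-in, so it runs without the CA files) and prints it:

```shell
# Stand-in cert with the same subject as admin-csr.json (demo only;
# on the real setup, run the x509 command below against admin.pem instead)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/demo-key.pem -out /tmp/demo.pem \
  -subj "/C=CN/ST=Hangzhou/L=Hangzhou/O=system:masters/OU=System/CN=admin"

# CN=admin is the user; O=system:masters is the group that carries cluster-admin
openssl x509 -noout -subject -in /tmp/demo.pem
```

Running the same `openssl x509 -noout -subject` command against the generated admin.pem should show an identical subject.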

Create the kubeconfig file

The kubeconfig is kubectl's configuration file. It contains everything needed to reach the apiserver: the apiserver address, the CA certificate, and the client's own certificate.

# Copy kubectl from the master folder into the bin directory
cp .../master/kubectl /usr/local/bin/

# Set cluster parameters
kubectl config set-cluster kubernetes --certificate-authority=../ca.pem --embed-certs=true --server=https://10.0.50.254:6443 --kubeconfig=kube.config

# Set client authentication parameters
kubectl config set-credentials admin --client-certificate=admin.pem --client-key=admin-key.pem --embed-certs=true --kubeconfig=kube.config

# Set context parameters
kubectl config set-context kubernetes --cluster=kubernetes --user=admin --kubeconfig=kube.config

# Set the default context
kubectl config use-context kubernetes --kubeconfig=kube.config
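The four commands above simply write YAML into kube.config, and use-context records the chosen context in its current-context field. A quick way to confirm which context a kubeconfig will use is to grep that field; this sketch runs against a minimal stand-in file so it works without a cluster:

```shell
# Minimal stand-in kubeconfig (a real one also carries clusters, users, contexts)
cat > /tmp/kube.config.sample <<EOF
apiVersion: v1
kind: Config
current-context: kubernetes
EOF

# Print the active context; point this at kube.config on the real setup
grep '^current-context:' /tmp/kube.config.sample
```

On a node with kubectl installed, `kubectl config current-context --kubeconfig=kube.config` reports the same thing.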

# Prepare the environment on each master
ssh root@10.0.50.101 "mkdir -p ~/.kube && cp /opt/kubernetes/bin/kubectl /usr/local/bin/"
ssh root@10.0.50.102 "mkdir -p ~/.kube && cp /opt/kubernetes/bin/kubectl /usr/local/bin/"
ssh root@10.0.50.103 "mkdir -p ~/.kube && cp /opt/kubernetes/bin/kubectl /usr/local/bin/"

# Distribute the config file
scp kube.config root@10.0.50.101:~/.kube/config
scp kube.config root@10.0.50.102:~/.kube/config
scp kube.config root@10.0.50.103:~/.kube/config
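The per-node ssh and scp pairs above can be collapsed into one loop over the master IPs. This sketch only prints the commands it would run (drop the echo to execute them for real):

```shell
# Master node IPs from this article's topology
MASTERS="10.0.50.101 10.0.50.102 10.0.50.103"

for node in $MASTERS; do
  # Prepare the environment, then push the kubeconfig
  echo "ssh root@$node 'mkdir -p ~/.kube && cp /opt/kubernetes/bin/kubectl /usr/local/bin/'"
  echo "scp kube.config root@$node:~/.kube/config"
done
```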

Grant the kubernetes certificate access to the kubelet API (run on a master node)

When you run kubectl exec, run, logs, and similar commands, the apiserver forwards the request to the kubelet. The RBAC rule below authorizes the apiserver (authenticating as the user "kubernetes" from its certificate) to call the kubelet API.

kubectl create clusterrolebinding kube-apiserver:kubelet-apis --clusterrole=system:kubelet-api-admin --user=kubernetes

Test (run on a master node)

# View cluster information
kubectl cluster-info
kubectl get all --all-namespaces
kubectl get componentstatuses

Deploying controller-manager (master nodes)

Once started, the controller-manager instances use leader election to pick a leader; the other instances stay blocked in standby. If the leader becomes unavailable, the remaining instances elect a new one, which keeps the service available.

Create the certificate and private key

# Create a directory for the certificate
cd target && mkdir controller-manager && cd controller-manager
cat > controller-manager-csr.json <<EOF
{
  "CN": "system:kube-controller-manager",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "hosts": [
    "127.0.0.1",
    "10.0.50.101",
    "10.0.50.102",
    "10.0.50.103"
  ],
  "names": [
    {
      "C": "CN",
      "ST": "Hangzhou",
      "L": "Hangzhou",
      "O": "system:kube-controller-manager",
      "OU": "System"
    }
  ]
}
EOF

# Generate the certificate and private key, signed by the CA
cfssl gencert -ca=../ca.pem -ca-key=../ca-key.pem -config=../ca-config.json -profile=kubernetes controller-manager-csr.json | cfssljson -bare controller-manager

# Distribute to the master nodes
scp controller-manager*.pem root@10.0.50.101:/etc/kubernetes/pki/
scp controller-manager*.pem root@10.0.50.102:/etc/kubernetes/pki/
scp controller-manager*.pem root@10.0.50.103:/etc/kubernetes/pki/

Create the controller-manager kubeconfig

# Create the kubeconfig
kubectl config set-cluster kubernetes --certificate-authority=../ca.pem --embed-certs=true --server=https://10.0.50.254:6443 --kubeconfig=controller-manager.kubeconfig

kubectl config set-credentials system:kube-controller-manager --client-certificate=controller-manager.pem --client-key=controller-manager-key.pem --embed-certs=true --kubeconfig=controller-manager.kubeconfig

kubectl config set-context system:kube-controller-manager --cluster=kubernetes --user=system:kube-controller-manager --kubeconfig=controller-manager.kubeconfig

kubectl config use-context system:kube-controller-manager --kubeconfig=controller-manager.kubeconfig

# Distribute to the master nodes
scp controller-manager.kubeconfig root@10.0.50.101:/etc/kubernetes/
scp controller-manager.kubeconfig root@10.0.50.102:/etc/kubernetes/
scp controller-manager.kubeconfig root@10.0.50.103:/etc/kubernetes/

Create the service file

cat > kube-controller-manager.service<<EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
ExecStart=/opt/kubernetes/bin/kube-controller-manager \\
--allocate-node-cidrs=true \\
--bind-address=127.0.0.1 \\
--cluster-cidr=172.19.0.0/16 \\
--cluster-name=kubernetes \\
--cluster-signing-cert-file=/etc/kubernetes/pki/ca.pem \\
--cluster-signing-key-file=/etc/kubernetes/pki/ca-key.pem \\
--controllers=*,bootstrapsigner,tokencleaner \\
--kubeconfig=/etc/kubernetes/controller-manager.kubeconfig \\
--leader-elect=true \\
--node-cidr-mask-size=24 \\
--root-ca-file=/etc/kubernetes/pki/ca.pem \\
--service-account-private-key-file=/etc/kubernetes/pki/ca-key.pem \\
--service-cluster-ip-range=10.120.0.0/16 \\
--use-service-account-credentials=true \\
--port=10252 \\
--secure-port=10257 \\
--experimental-cluster-signing-duration=87600h \\
--feature-gates=RotateKubeletServerCertificate=true \\
--horizontal-pod-autoscaler-use-rest-clients=true \\
--horizontal-pod-autoscaler-sync-period=10s \\
--tls-cert-file=/etc/kubernetes/pki/controller-manager.pem \\
--tls-private-key-file=/etc/kubernetes/pki/controller-manager-key.pem \\
--alsologtostderr=true \\
--logtostderr=false \\
--log-dir=/var/log/kubernetes \\
--v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF

# Distribute to the master nodes
scp kube-controller-manager.service root@10.0.50.101:/etc/systemd/system/
scp kube-controller-manager.service root@10.0.50.102:/etc/systemd/system/
scp kube-controller-manager.service root@10.0.50.103:/etc/systemd/system/

Start the service (on every master node)

# Start the service
systemctl daemon-reload && systemctl enable kube-controller-manager && systemctl start kube-controller-manager

# Check status
systemctl status kube-controller-manager

# Check the current leader
kubectl get endpoints kube-controller-manager --namespace=kube-system -o yaml
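The leader's identity lives in the control-plane.alpha.kubernetes.io/leader annotation of that Endpoints object, stored as a small JSON document. This sketch pulls the holderIdentity field out of a sample annotation value with sed (the value shown is a stand-in; on a live cluster, pipe in the annotation from the kubectl command above):

```shell
# Stand-in for the annotation value a live cluster would report
ANNOTATION='{"holderIdentity":"master01_0c1d2e3f","leaseDurationSeconds":15,"renewTime":"2020-11-02T14:00:00Z"}'

# Extract just the holder: the node currently acting as leader
echo "$ANNOTATION" | sed -n 's/.*"holderIdentity":"\([^"]*\)".*/\1/p'
```

kubectl can also do the extraction itself with -o jsonpath, escaping the dots in the annotation key: -o jsonpath='{.metadata.annotations.control-plane\.alpha\.kubernetes\.io/leader}'.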

Deploying scheduler (master nodes)

Once started, the scheduler instances use leader election to pick a leader; the other instances stay blocked in standby. If the leader becomes unavailable, the remaining instances elect a new one, which keeps the service available.

Create the certificate and private key

# Create a directory for the certificate
cd target && mkdir scheduler && cd scheduler
cat > scheduler-csr.json <<EOF
{
  "CN": "system:kube-scheduler",
  "hosts": [
    "127.0.0.1",
    "10.0.50.101",
    "10.0.50.102",
    "10.0.50.103"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Hangzhou",
      "L": "Hangzhou",
      "O": "system:kube-scheduler",
      "OU": "System"
    }
  ]
}
EOF

# Generate the certificate and private key, signed by the CA
cfssl gencert -ca=../ca.pem -ca-key=../ca-key.pem -config=../ca-config.json -profile=kubernetes scheduler-csr.json | cfssljson -bare kube-scheduler

Create the scheduler kubeconfig

# Create the kubeconfig
kubectl config set-cluster kubernetes --certificate-authority=../ca.pem --embed-certs=true --server=https://10.0.50.254:6443 --kubeconfig=kube-scheduler.kubeconfig

kubectl config set-credentials system:kube-scheduler --client-certificate=kube-scheduler.pem --client-key=kube-scheduler-key.pem --embed-certs=true --kubeconfig=kube-scheduler.kubeconfig

kubectl config set-context system:kube-scheduler --cluster=kubernetes --user=system:kube-scheduler --kubeconfig=kube-scheduler.kubeconfig

kubectl config use-context system:kube-scheduler --kubeconfig=kube-scheduler.kubeconfig

# Distribute to the master nodes
scp kube-scheduler.kubeconfig root@10.0.50.101:/etc/kubernetes/
scp kube-scheduler.kubeconfig root@10.0.50.102:/etc/kubernetes/
scp kube-scheduler.kubeconfig root@10.0.50.103:/etc/kubernetes/

Create the service file

cat > kube-scheduler.service<<EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
ExecStart=/opt/kubernetes/bin/kube-scheduler \\
--address=127.0.0.1 \\
--kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig \\
--leader-elect=true \\
--alsologtostderr=true \\
--logtostderr=false \\
--log-dir=/var/log/kubernetes \\
--v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF

# Distribute to the master nodes
scp kube-scheduler.service root@10.0.50.101:/etc/systemd/system/
scp kube-scheduler.service root@10.0.50.102:/etc/systemd/system/
scp kube-scheduler.service root@10.0.50.103:/etc/systemd/system/

Start the service (on every master node)

# Start the service
systemctl daemon-reload && systemctl enable kube-scheduler && systemctl start kube-scheduler


# Check status
systemctl status kube-scheduler

# Check the current leader
kubectl get endpoints kube-scheduler --namespace=kube-system -o yaml

At this point, all of the master-node components are installed.

For some of the configuration files, see GITEE.

Tags: k8s


Comments (2)
Guest, 2021-02-20 17:21:05

controller-manager Unhealthy Get "http://127.0.0.1:10252/healthz": dial tcp 127.0.0.1:10252: connect: connection refused

controller-manager Unhealthy HTTP probe failed with statuscode: 400
The kube-controller-manager service itself starts normally with no errors,

but kubectl get cs never shows it healthy:

[root@k8s-master1 controller-manager]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS      MESSAGE                                  ERROR
controller-manager   Unhealthy   HTTP probe failed with statuscode: 400
scheduler            Healthy     ok
etcd-2               Healthy     {"health":"true"}
etcd-1               Healthy     {"health":"true"}
etcd-0               Healthy     {"health":"true"}

凌杰, 2021-02-23 15:20:12

This is a health-check issue; the controller-manager is actually up, and it does not affect normal use. To get rid of the warning, change the parameters in the controller-manager service file to the following: --port=10252 --secure-port=10257