Configuring Kubernetes StorageClasses (NFS, Ceph RBD, CephFS, Alibaba Cloud NAS)


Kubernetes supports many kinds of storage classes. This article covers several common ones: NFS, Ceph RBD, CephFS, and Alibaba Cloud File Storage (NAS).

For the differences between the storage classes, see: Storage Classes.

NFS

First, you need an NFS server. The one used here is 10.0.30.15, exporting path /data.

Note: nfs-utils must be installed on all worker nodes, otherwise volumes cannot be mounted.
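
For example, on CentOS/RHEL workers (on Debian/Ubuntu the package is nfs-common instead):

# run on every worker node
yum install -y nfs-utils
# optional: confirm the export is visible from the node
showmount -e 10.0.30.15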

Configure RBAC

nfs-rbac.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  namespace: nfs
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: nfs
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  namespace: nfs
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  namespace: nfs
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: nfs
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io

Deploy the controller

nfs-controller.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  namespace: nfs
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: registry.cn-shanghai.aliyuncs.com/jieee/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs # note: must be globally unique
            - name: NFS_SERVER
              value: 10.0.30.15 # NFS server address
            - name: NFS_PATH
              value: /data # exported path
      volumes:
        - name: nfs-client-root
          nfs:
            server: 10.0.30.15 # NFS server address
            path: /data # exported path

Create the StorageClass

nfs-sc.yaml

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs
provisioner: fuseim.pri/ifs # must match PROVISIONER_NAME in the controller
parameters:
  archiveOnDelete: "false"

Deploy

kubectl create ns nfs
kubectl apply -f nfs-rbac.yaml
kubectl apply -f nfs-controller.yaml
kubectl apply -f nfs-sc.yaml
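
Before testing, it is worth confirming that the provisioner pod came up and the class is registered:

kubectl get pods -n nfs -l app=nfs-client-provisioner
kubectl get storageclass nfs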

Test

test-claim.yaml

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-pvc1
  annotations:
    # legacy annotation; spec.storageClassName: nfs is the current equivalent
    volume.beta.kubernetes.io/storage-class: nfs
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
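
To confirm the claim actually provisions and mounts, a throwaway pod can write to it. A minimal sketch (the pod name and the busybox image are illustrative choices, not part of the original setup):

kind: Pod
apiVersion: v1
metadata:
  name: test-pod
spec:
  containers:
    - name: test
      image: busybox
      command: ["sh", "-c", "echo ok > /mnt/SUCCESS && sleep 3600"]
      volumeMounts:
        - name: data
          mountPath: /mnt
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: test-pvc1

Apply the claim and the pod, then check that kubectl get pvc test-pvc1 shows Bound and that a provisioned subdirectory containing the SUCCESS file appears under /data on the NFS server.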

Alibaba Cloud File Storage (NAS)

This approach applies to Kubernetes clusters running on Alibaba Cloud, using Alibaba Cloud File Storage (NAS) for persistence.

First, create a file system and get its mount target address (note: it must be in the same region as your servers), e.g. abcdefg-hijk.cn-shanghai.nas.aliyuncs.com

Note: nfs-utils must be installed on all worker nodes, otherwise volumes cannot be mounted.

Configure RBAC

alinas-rbac.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-provisioner
  namespace: nas
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["services", "endpoints"]
    verbs: ["get", "create", "list", "watch", "update"]
  - apiGroups: ["extensions"]
    resources: ["podsecuritypolicies"]
    resourceNames: ["nfs-provisioner"]
    verbs: ["use"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-provisioner
    namespace: nas
roleRef:
  kind: ClusterRole
  name: nfs-provisioner-runner
  apiGroup: rbac.authorization.k8s.io

Create the controller

alinas-controller.yaml

kind: Deployment
apiVersion: apps/v1
metadata:
  name: alicloud-nas-controller
  namespace: nas
spec:
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: alicloud-nas-controller
  template:
    metadata:
      labels:
        app: alicloud-nas-controller
    spec:
      tolerations:
        - operator: "Exists"
      affinity:
        nodeAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 1
              preference:
                matchExpressions:
                  - key: node-role.kubernetes.io/master
                    operator: Exists
      priorityClassName: system-node-critical
      serviceAccount: nfs-provisioner
      hostNetwork: true
      containers:
        - name: nfs-provisioner
          image: registry.cn-shanghai.aliyuncs.com/jieee/alicloud-nas-controller:v1.14.3.8
          env:
            - name: PROVISIONER_NAME
              value: alicloud/nas
          securityContext:
            privileged: true
          volumeMounts:
            - mountPath: /data
              name: nas
      volumes:
        - hostPath:
            path: /data
          name: nas

Create the StorageClass

alinas-sc.yaml

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ali-nas
mountOptions:
  - noresvport
  - vers=4.0
parameters:
  server: "abcdefg-hijk.cn-shanghai.nas.aliyuncs.com:/" # NAS mount target address
  driver: nfs
provisioner: alicloud/nas # must match PROVISIONER_NAME in the controller
reclaimPolicy: Delete

Deploy

kubectl create ns nas
kubectl apply -f alinas-rbac.yaml
kubectl apply -f alinas-controller.yaml
kubectl apply -f alinas-sc.yaml

Test

test-claim.yaml

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-pvc2
  annotations:
    volume.beta.kubernetes.io/storage-class: ali-nas
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
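
To verify, apply the claim and watch it bind (it lands in the default namespace, since the manifest sets none):

kubectl apply -f test-claim.yaml
kubectl get pvc test-pvc2   # STATUS should reach Bound within a few seconds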

Ceph RBD

First, you need a Ceph cluster; for setup, see: Setting up a Ceph Cluster on CentOS 7.

Note: ceph-common must be installed on all worker nodes, otherwise volumes cannot be mounted.

Get the Ceph admin key

ceph auth get-key client.admin
#AQBApJVf0u7bBxAAxBTgfcm/TxlGPOO0d9Ngqw==

Create a pool

ceph osd pool create kube 64
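
On Ceph Luminous and newer, a new pool should also be tagged for RBD before images are created in it. A sketch, assuming a recent Ceph release (skip on older versions):

# tag the pool for RBD use, then initialize it
ceph osd pool application enable kube rbd
rbd pool init kube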

Create the secret

# Note: the type must be kubernetes.io/rbd, otherwise PVCs cannot be created
kubectl create secret generic ceph-secret --type="kubernetes.io/rbd" --from-literal=key='AQBApJVf0u7bBxAAxBTgfcm/TxlGPOO0d9Ngqw==' -n ceph

Configure RBAC

ceph-rbac.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: rbd-provisioner
  namespace: ceph

---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rbd-provisioner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["kube-dns", "coredns"]
    verbs: ["list", "get"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]

---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rbd-provisioner
subjects:
  - kind: ServiceAccount
    name: rbd-provisioner
    namespace: ceph
roleRef:
  kind: ClusterRole
  name: rbd-provisioner
  apiGroup: rbac.authorization.k8s.io

---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: rbd-provisioner
  namespace: ceph
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]

---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: rbd-provisioner
  namespace: ceph
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: rbd-provisioner
subjects:
  - kind: ServiceAccount
    name: rbd-provisioner
    namespace: ceph

Deploy the controller

ceph-controller.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: rbd-provisioner
  namespace: ceph
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rbd-provisioner
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: rbd-provisioner
    spec:
      serviceAccountName: rbd-provisioner # use the service account the RBAC rules above are bound to
      containers:
        - name: rbd-provisioner
          image: registry.cn-shanghai.aliyuncs.com/jieee/rbd-provisioner:latest
          env:
            - name: PROVISIONER_NAME
              value: ceph.com/rbd

Create the StorageClass

ceph-sc.yaml

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ceph-rbd
provisioner: ceph.com/rbd # must match PROVISIONER_NAME in the controller
parameters:
  monitors: 10.0.30.11:6789,10.0.30.12:6789,10.0.30.13:6789 # ceph monitor addresses
  adminId: admin
  adminSecretName: ceph-secret
  adminSecretNamespace: ceph
  pool: kube
  userId: admin
  userSecretName: ceph-secret
  userSecretNamespace: ceph

Deploy

kubectl create ns ceph
kubectl apply -f ceph-rbac.yaml
kubectl apply -f ceph-controller.yaml
kubectl apply -f ceph-sc.yaml

Test

test-claim.yaml

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-pvc1
  annotations:
    volume.beta.kubernetes.io/storage-class: ceph-rbd
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
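
The result can be checked from both sides: the PVC should bind in Kubernetes, and a new image should show up in the kube pool on the Ceph side:

kubectl apply -f test-claim.yaml
kubectl get pvc test-pvc1   # should reach Bound
rbd ls kube                 # lists the dynamically created image(s)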

CephFS

First, you need a CephFS service; for setup, see: Setting up a Ceph Cluster on CentOS 7.

Note: ceph-common must be installed on all worker nodes, otherwise volumes cannot be mounted.

Get the Ceph admin key (base64-encoded)

ceph auth get-key client.admin |base64
#QVFCQXBKVmYwdTdiQnhBQXhCVGdmY20vVHhsR1BPTzBkOU5ncXc9PQ==

Create the secret

cephfs-secret.yaml

apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret
  namespace: cephfs
data:
  key: QVFCQXBKVmYwdTdiQnhBQXhCVGdmY20vVHhsR1BPTzBkOU5ncXc9PQ==
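
Equivalently, the manual base64 step can be skipped, since kubectl create secret encodes literal values itself. A sketch, run from a machine with both Ceph admin and cluster access:

kubectl create secret generic ceph-secret -n cephfs \
  --from-literal=key="$(ceph auth get-key client.admin)"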

Configure RBAC

cephfs-rbac.yaml

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: cephfs-provisioner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["kube-dns", "coredns"]
    verbs: ["list", "get"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: cephfs-provisioner
subjects:
  - kind: ServiceAccount
    name: cephfs-provisioner
    namespace: cephfs
roleRef:
  kind: ClusterRole
  name: cephfs-provisioner
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: cephfs-provisioner
  namespace: cephfs
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["create", "get", "delete"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: cephfs-provisioner
  namespace: cephfs
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: cephfs-provisioner
subjects:
  - kind: ServiceAccount
    name: cephfs-provisioner
    namespace: cephfs
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: cephfs-provisioner
  namespace: cephfs

Create the controller

cephfs-controller.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: cephfs-provisioner
  namespace: cephfs
  labels:
    app: cephfs-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: cephfs-provisioner
  template:
    metadata:
      labels:
        app: cephfs-provisioner
    spec:
      containers:
        - name: cephfs-provisioner
          image: "registry.cn-shanghai.aliyuncs.com/jieee/cephfs-provisioner:latest"
          resources:
            limits:
              cpu: 500m
              memory: 512Mi
            requests:
              cpu: 100m
              memory: 64Mi
          env:
            - name: PROVISIONER_NAME
              value: ceph.com/cephfs
            - name: PROVISIONER_SECRET_NAMESPACE
              value: cephfs
          command:
            - "/usr/local/bin/cephfs-provisioner"
          args:
            - "-id=cephfs-provisioner-1"
      serviceAccount: cephfs-provisioner

Create the StorageClass

cephfs-sc.yaml

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cephfs
provisioner: ceph.com/cephfs # must match PROVISIONER_NAME in the controller
volumeBindingMode: WaitForFirstConsumer
parameters:
  monitors: 10.0.30.11:6789,10.0.30.12:6789,10.0.30.13:6789 # ceph monitor addresses
  adminId: admin
  adminSecretName: ceph-secret
  adminSecretNamespace: "cephfs"
  claimRoot: /kube

Deploy

kubectl create ns cephfs
kubectl apply -f cephfs-rbac.yaml
kubectl apply -f cephfs-controller.yaml
kubectl apply -f cephfs-sc.yaml

Test

test-claim.yaml

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-pvc1
  annotations:
    volume.beta.kubernetes.io/storage-class: cephfs
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
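
Note that because this StorageClass sets volumeBindingMode: WaitForFirstConsumer, the claim will sit in Pending until some pod references it; that is expected, not an error:

kubectl apply -f test-claim.yaml
kubectl get pvc test-pvc1   # stays Pending until a pod mounts it
# Scheduling any pod that mounts claimName test-pvc1 (like the test pod in the
# NFS section) triggers provisioning and binding.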

Some of the configuration files used in this article are available on GITEE.
