
K8S Cluster Creation (Based on Rancher)

Deploying the K8S Environment and Related Tools
Environment used for this installation:
OS version: CentOS Linux release 7.6 x64
Docker version: 18.06.2-ce
rancher-server version: 2.3.5

Install Docker:

If yum cannot find the docker-ce package, do the following:

Remove old versions of Docker and their dependencies:

sudo yum remove docker docker-common container-selinux docker-selinux docker-engine

Update yum:

yum update

Install yum-utils, which provides yum-config-manager for managing yum repositories:

sudo yum install -y yum-utils

Add the Docker yum repository:

sudo yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

Rebuild the yum cache:

sudo yum makecache fast

Install Docker: yum -y install docker-ce-18.06.2.ce-3.el7

Enable at boot: systemctl enable docker
Start: systemctl start docker
Check status: systemctl status docker

Configure the Aliyun mirror accelerator for Docker:

Create or edit the configuration file /etc/docker/daemon.json:

[root@cluster01 ~]# vi /etc/docker/daemon.json 
{
"registry-mirrors": ["https://5ljrxno5.mirror.aliyuncs.com"]
}
[root@cluster01 ~]#

Reload the modified configuration and restart Docker:

systemctl daemon-reload 
systemctl restart docker
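
To confirm the mirror took effect, docker info prints a Registry Mirrors section; a quick check:

# the configured mirror URL should appear under "Registry Mirrors"
docker info | grep -A 1 'Registry Mirrors'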

Install NTP to keep the servers' clocks in sync:

# install and enable the NTP daemon
yum -y install ntp
systemctl enable ntpd
systemctl start ntpd
timedatectl set-ntp yes
# force an immediate sync and set the timezone to Asia/Shanghai
ntpdate -u cn.pool.ntp.org
ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
# watch the clock for a moment to confirm it is correct
watch -n 1 'date'
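
To confirm the daemon is actually syncing, ntpq lists the peers it talks to (an asterisk marks the selected source):

ntpq -p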

Deploying the Rancher Server

Command:

# docker run -d --restart=unless-stopped -p 8080:80 -p 8443:443 -v /data/rancher/ranchermaster:/var/lib/rancher rancher/rancher:latest
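
Before opening the UI, a quick way to confirm the container started (a sketch; the ancestor filter simply matches the image used above):

docker ps | grep rancher/rancher                                        # should show the container as Up
docker logs -f $(docker ps -q --filter ancestor=rancher/rancher:latest) # follow the startup logs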

Once the container is up, the UI is reachable:

Visit the UI via IP and port (https://192.168.154.11:8443) and set the admin user's password: awifi@123

After logging in, create a cluster in the web UI.

Explanation of the configuration options:

Cluster Name: the cluster's name (crystal-cluster)
Member Roles: the users allowed to access the cluster and each user's permissions on it
Labels & Annotations: labels and annotations for the cluster; configure as needed
Kubernetes Options:
->Kubernetes Version: choose the Kubernetes version
->Network Provider: choose the network driver
->Project Network Isolation: network isolation between namespaces
->Cloud Provider: choose a cloud provider; this article deploys on VMware VMs, so the default "None" is kept
Private Registry: configure a private image registry
Advanced Options: custom cluster parameters; configure as needed
Authorized Endpoint: configure the authorized cluster endpoint

All of these options can be set according to the on-page hints and your own needs; in this walkthrough everything except Cluster Name was left at its default.
Then click "Next" to reach the add-host step and choose the host roles:
(For host roles and the ports to open, see: https://rancher.com/docs/rancher/v2.x/en/installation/references/)
Role selection: each host can run multiple roles. Every cluster needs at least one node with the etcd role, one with the Control Plane role, and one with the Worker role.
After the roles are selected, the code box below generates the matching command; copy it to the node host and run it (make sure the node already has a supported Docker version installed and that Docker is running).

Run the generated command on the second node (192.168.154.12):

sudo docker run -d --privileged --restart=unless-stopped --net=host -v /etc/kubernetes:/etc/kubernetes -v /var/run:/var/run rancher/rancher-agent:v2.4.2 --server https://192.168.154.11 --token kdzzxvsqqm9chc6dl2f7wlz5jpzbnqv9xwjl2qmz4g8tqknqfbjqnq --ca-checksum c37e2a72f73e34919ce27862c8832c3e0c3fb6e4005fa7cf5346eb666a5cbd6d --etcd --controlplane --worker

Once that completes, run the following on the third node:

sudo docker run -d --privileged --restart=unless-stopped --net=host -v /etc/kubernetes:/etc/kubernetes -v /var/run:/var/run rancher/rancher-agent:v2.4.2 --server https://192.168.154.11 --token kdzzxvsqqm9chc6dl2f7wlz5jpzbnqv9xwjl2qmz4g8tqknqfbjqnq --ca-checksum c37e2a72f73e34919ce27862c8832c3e0c3fb6e4005fa7cf5346eb666a5cbd6d --worker
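
On each node, the agent container should show up as running; a quick check:

docker ps | grep rancher-agent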

At this point, the K8S cluster can be configured as needed through the Rancher web UI.

Note: when setting up multiple nodes, the firewall must be disabled on every node:

# check status
systemctl status firewalld
service iptables status
# stop
systemctl stop firewalld
service iptables stop
# disable at boot
systemctl disable firewalld
chkconfig iptables off
# disable SELinux
sed -i 's/enforcing/disabled/g' /etc/selinux/config; setenforce 0
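
To confirm SELinux is off, getenforce should report Permissive immediately (or Disabled after a reboot):

getenforce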

Installing kubectl

Download kubernetes-client-linux-amd64.tar.gz for the matching version (the version number must match the cluster's Kubernetes version):
https://github.com/kubernetes/kubernetes/tree/master/CHANGELOG
Install:

tar -zxvf kubernetes-client-linux-amd64.tar.gz
chmod +x ./kubernetes/client/bin/kubectl
sudo mv ./kubernetes/client/bin/kubectl /usr/local/bin/kubectl
sudo ln -s /usr/local/bin/kubectl /usr/bin/kubectl

Add the kubeconfig file on each node:
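A minimal sketch, assuming the kubeconfig is copied out of the Rancher UI (cluster view -> Kubeconfig File):

mkdir -p ~/.kube
vi ~/.kube/config    # paste the kubeconfig copied from the Rancher UI
kubectl get nodes    # should list the cluster's nodes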

Installing Helm

Upload helm-v2.12.3-linux-amd64.tar.gz to the server:

mkdir -p /usr/local/helm/
cd /usr/local/helm/
tar zxvf helm-v2.12.3-linux-amd64.tar.gz
cp -s /usr/local/helm/linux-amd64/helm /usr/local/bin/
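
At this point only the client is installed, so check it alone (tiller comes later):

helm version -c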

Create a tiller-rbac-config.yaml file with the following content:

# vim tiller-rbac-config.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system

Run the following command to create the tiller service account (kubectl and the Kubernetes version must match, otherwise you may hit a SchemaError):
kubectl apply -f tiller-rbac-config.yaml
Pull the tiller image (you can move the image around with docker save -o and docker load -i; since pods are scheduled to arbitrary nodes, every node in the K8S cluster must load the image, and the tiller version must match helm's): alpha-harbor.51iwifi.com/k8s-depend/gcr.io/kubernetes-helm/tiller:v2.12.3
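
A minimal sketch of the save/load round-trip mentioned above (the tarball name is arbitrary):

# on a machine that already has the image
docker save -o tiller-v2.12.3.tar alpha-harbor.51iwifi.com/k8s-depend/gcr.io/kubernetes-helm/tiller:v2.12.3
# copy the tarball to each node, then on each node:
docker load -i tiller-v2.12.3.tar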
Run the following command:

helm init --service-account tiller --tiller-image alpha-harbor.51iwifi.com/k8s-depend/gcr.io/kubernetes-helm/tiller:v2.12.3

Google's chart repo may be unreachable:

In that case, run the following command instead, pointing at a domestic mirror repo:

helm init --service-account tiller --tiller-image alpha-harbor.51iwifi.com/k8s-depend/gcr.io/kubernetes-helm/tiller:v2.12.3 --stable-repo-url http://mirror.azure.cn/kubernetes/charts/ --override spec.selector.matchLabels.'name'='tiller',spec.selector.matchLabels.'app'='helm' --output yaml | sed 's@apiVersion: extensions/v1beta1@apiVersion: apps/v1@' | kubectl apply -f -
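
Either way, you can wait for the tiller deployment to become ready before moving on:

kubectl -n kube-system rollout status deployment/tiller-deploy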

Check the tiller-deploy service:

kubectl get svc -n kube-system

Check the Helm version:

helm version

Search the charts in the repo:

helm search

Change the tiller-deploy svc to expose an external port (30101):

kubectl edit svc -n kube-system tiller-deploy
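
In the editor, change spec.type to NodePort and set the nodePort. A non-interactive alternative, assuming tiller's default service port 44134, is a patch like:

kubectl -n kube-system patch svc tiller-deploy -p '{"spec":{"type":"NodePort","ports":[{"port":44134,"nodePort":30101}]}}'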

# set up the NFS mount environment
## install the NFS services
yum install nfs-utils
yum install rpcbind
## create the shared directory
mkdir -p /nfs/data/
chmod 755 /nfs/data
## edit the config file
vim /etc/exports    # add the following lines:
/nfs/ *(async,insecure,no_root_squash,no_subtree_check,rw)
/nfs/data/ *(async,insecure,no_root_squash,no_subtree_check,rw)
exportfs -r         # apply the configuration
## start the services
systemctl start rpcbind
systemctl start nfs
systemctl enable rpcbind # enable at boot
systemctl enable nfs-server.service # enable at boot
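
To verify the exports are active:

showmount -e localhost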

Load the image onto every node:

alpha-harbor.51iwifi.com/k8s-depend/quay.io/external_storage/nfs-client-provisioner:latest
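
Since the Deployment below sets imagePullPolicy: Never, the image must already be present on whichever node the pod lands on. Assuming every node can reach the registry, pulling it on each node works as a pre-load:

docker pull alpha-harbor.51iwifi.com/k8s-depend/quay.io/external_storage/nfs-client-provisioner:latest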

kubectl create -f cluster-admin.rbac.yaml to create the RBAC permissions:

(contents of cluster-admin.rbac.yaml)
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: dcm-rbac
  name: k8s-admin
  namespace: kube-system

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: k8s-admin-crb
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: k8s-admin
  namespace: kube-system

kubectl create -f nfs-client-provisioner.yaml to create the nfs-client-provisioner:

kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs-client-provisioner
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nfs-client-provisioner
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: k8s-admin
      containers:
      - name: nfs-client-provisioner
        image: alpha-harbor.51iwifi.com/k8s-depend/quay.io/external_storage/nfs-client-provisioner:latest
        imagePullPolicy: Never
        volumeMounts:
        - name: nfs-client-root
          mountPath: /persistentvolumes
        env:
        - name: PROVISIONER_NAME
          value: fuseim.pri/ifs
        - name: NFS_SERVER
          value: 172.20.131.251
        - name: NFS_PATH
          value: /nfs/data
      volumes:
      - name: nfs-client-root
        nfs:
          server: 172.20.131.251
          path: /nfs/data
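
Once applied, the provisioner pod should reach Running (the label matches the Deployment above):

kubectl -n kube-system get pods -l app=nfs-client-provisioner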

kubectl create -f nfs-client-class.yaml to create the StorageClass:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: fuseim.pri/ifs
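
Confirm the StorageClass exists:

kubectl get storageclass    # managed-nfs-storage should be listed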

Create a PVC backed by the nfs-client-provisioner to verify that NFS is configured correctly:

kubectl create -f nfs-client-pvc.yaml

(contents of nfs-client-pvc.yaml)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-client-pvc
  namespace: kube-system
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: managed-nfs-storage
  resources:
    requests:
      storage: 100Mi
kubectl get pvc nfs-client-pvc -n kube-system      # check whether the PVC was automatically bound to a PV
kubectl delete pvc nfs-client-pvc -n kube-system   # delete the PVC once verification is done
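
As a further check on the NFS server itself: the external-storage nfs-client-provisioner creates a subdirectory per volume under the export, named from the namespace, PVC, and PV (treat the exact naming as version-dependent):

ls /nfs/data/    # a directory for the bound PVC should appear on the NFS server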