Update 'kubernetes-MD/基于Kubernetes构建ES集群.md'

This commit is contained in:
diandian 2024-04-20 18:23:01 +08:00
parent 7c99641130
commit bf403a6190
1 changed file with 313 additions and 313 deletions

<h1><center>Building an ES Cluster on a Kubernetes Cluster</center></h1>

Author: 行癫 (unauthorized reproduction will be pursued)

------

## 1. Environment Preparation

#### 1. Kubernetes Cluster Environment

|       Node        |   Address   |
| :---------------: | :---------: |
| Kubernetes-Master | 10.9.12.206 |
| Kubernetes-Node-1 | 10.9.12.205 |
| Kubernetes-Node-2 | 10.9.12.204 |
| Kubernetes-Node-3 | 10.9.12.203 |
|    DNS Server     | 10.9.12.210 |
|   Proxy Server    | 10.9.12.209 |
|    NFS Storage    | 10.9.12.250 |

#### 2. Kuboard Cluster Management

![image-20240420164922730](https://diandiange.oss-cn-beijing.aliyuncs.com/image-20240420164922730.png)
## 2. Building the ES Cluster

#### 1. Provisioning Persistent Storage

1. Deploy the NFS server
2. Create the shared directory

The shared directory is created with the following script:
```shell
[root@xingdiancloud-1 ~]# cat nfs.sh
#!/bin/bash
# Create an NFS shared directory and register it in /etc/exports
read -p "Enter the shared directory to create: " dir
if [ -d "$dir" ];then
    echo "Directory already exists, enter a new shared directory: "
    read again_dir
    mkdir -p "$again_dir"
    echo "Shared directory created"
    read -p "Enter the clients allowed to mount it: " ips
    echo "$again_dir ${ips}(rw,sync,no_root_squash)" >> /etc/exports
    xingdian=$(grep -c "$again_dir" /etc/exports)
    if [ "$xingdian" -eq 1 ];then
        echo "Share configured successfully"
        exportfs -rv >/dev/null
        exit
    else
        exit
    fi
else
    mkdir -p "$dir"
    echo "Shared directory created"
    read -p "Enter the clients allowed to mount it: " ips
    echo "$dir ${ips}(rw,sync,no_root_squash)" >> /etc/exports
    xingdian=$(grep -c "$dir" /etc/exports)
    if [ "$xingdian" -eq 1 ];then
        echo "Share configured successfully"
        exportfs -rv >/dev/null
        exit
    else
        exit
    fi
fi
```
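The script's duplicate-entry check (`grep -c` against `/etc/exports`) can be sketched in isolation with a temporary file standing in for `/etc/exports`, so it is safe to run anywhere; the directory and client values below are hypothetical:

```shell
# Sketch of the export-entry check, using a temp file instead of /etc/exports.
exports_file=$(mktemp)
dir=/data/es-data
ips=10.9.12.0/24
echo "$dir ${ips}(rw,sync,no_root_squash)" >> "$exports_file"
# grep -c counts matching lines; exactly 1 means the entry was written once
count=$(grep -c "$dir" "$exports_file")
if [ "$count" -eq 1 ]; then
    echo "share entry recorded once"
fi
rm -f "$exports_file"
```

If the same directory were appended twice, `grep -c` would return 2 and the real script would skip `exportfs -rv`.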
3. Create the StorageClass

```yaml
[root@xingdiancloud-master ~]# vim namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: logging
[root@xingdiancloud-master ~]# vim storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    k8s.kuboard.cn/storageNamespace: logging
    k8s.kuboard.cn/storageType: nfs_client_provisioner
  name: data-es
parameters:
  archiveOnDelete: 'false'
provisioner: nfs-data-es
reclaimPolicy: Retain
volumeBindingMode: Immediate
```
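The StorageClass only names a provisioner (`nfs-data-es`); an NFS client provisioner with that exact name must actually be running for dynamic provisioning to work. Kuboard normally deploys it when the storage class is created through its UI. A minimal sketch, assuming the community nfs-subdir-external-provisioner image (names and RBAC here are illustrative, not from the original):

```yaml
# Sketch only: a minimal nfs-client provisioner Deployment backing the
# StorageClass above. The image and ServiceAccount are assumptions; RBAC
# objects are omitted for brevity.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  namespace: logging
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: registry.k8s.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2
          env:
            - name: PROVISIONER_NAME
              value: nfs-data-es        # must equal the StorageClass "provisioner"
            - name: NFS_SERVER
              value: 10.9.12.250
            - name: NFS_PATH
              value: /data/es-data
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
      volumes:
        - name: nfs-client-root
          nfs:
            server: 10.9.12.250
            path: /data/es-data
```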
4. Create the PersistentVolume

```yaml
[root@xingdiancloud-master ~]# vim persistentVolume.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    pv.kubernetes.io/bound-by-controller: 'yes'
  finalizers:
    - kubernetes.io/pv-protection
  name: nfs-pv-data-es
spec:
  accessModes:
    - ReadWriteMany
  capacity:
    storage: 100Gi
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: nfs-pvc-data-es
    namespace: kube-system
  nfs:
    path: /data/es-data
    server: 10.9.12.250
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs-storageclass-provisioner
  volumeMode: Filesystem
```
Note: the StorageClass and PersistentVolume can also be created through the Kuboard UI.
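The PV pre-binds to a claim named `nfs-pvc-data-es` in `kube-system` via `claimRef`, but that PVC is not shown. A matching claim would look roughly like this sketch (field values simply mirror the PV):

```yaml
# Sketch: the PVC referenced by the PV's claimRef (not part of the original).
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc-data-es
  namespace: kube-system
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Gi
  storageClassName: nfs-storageclass-provisioner
  volumeName: nfs-pv-data-es
```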
#### 2. Label the Nodes

```shell
[root@xingdiancloud-master ~]# kubectl label nodes xingdiancloud-node-1 es=log
```

Note:

- Every node that will run ES must be given this label.
- The label is matched by the `nodeSelector` of the StatefulSet used to deploy the ES cluster next.
#### 3. Deploying the ES Cluster

Note: each ES node needs a stable, unique network identity and its own persistent storage. A Deployment cannot provide either of these (it only manages stateless workloads), so the cluster is deployed with a StatefulSet.
```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: es
  namespace: logging
spec:
  serviceName: elasticsearch
  replicas: 3
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      nodeSelector:
        es: log
      initContainers:
        - name: increase-vm-max-map
          image: busybox
          command: ["sysctl", "-w", "vm.max_map_count=262144"]
          securityContext:
            privileged: true
        - name: increase-fd-ulimit
          image: busybox
          command: ["sh", "-c", "ulimit -n 65536"]
          securityContext:
            privileged: true
      containers:
        - name: elasticsearch
          image: docker.elastic.co/elasticsearch/elasticsearch:7.6.2
          ports:
            - name: rest
              containerPort: 9200
            - name: inter
              containerPort: 9300
          resources:
            limits:
              cpu: 500m
              memory: 4000Mi
            requests:
              cpu: 500m
              memory: 3000Mi
          volumeMounts:
            - name: data
              mountPath: /usr/share/elasticsearch/data
          env:
            - name: cluster.name
              value: k8s-logs
            - name: node.name
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: cluster.initial_master_nodes
              value: "es-0,es-1,es-2"
            - name: discovery.zen.minimum_master_nodes
              value: "2"
            - name: discovery.seed_hosts
              value: "elasticsearch"
            - name: ES_JAVA_OPTS
              value: "-Xms512m -Xmx512m"
            - name: network.host
              value: "0.0.0.0"
            - name: node.max_local_storage_nodes
              value: "3"
  volumeClaimTemplates:
    - metadata:
        name: data
        labels:
          app: elasticsearch
      spec:
        accessModes: [ "ReadWriteMany" ]
        storageClassName: data-es
        resources:
          requests:
            storage: 25Gi
```
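The manifest sets `discovery.zen.minimum_master_nodes` to `"2"`. That value comes from the usual quorum formula, floor(N/2) + 1, for 3 master-eligible nodes (in ES 7.x this Zen setting is largely ignored, since `cluster.initial_master_nodes` handles bootstrapping, but the arithmetic is worth seeing):

```shell
# Quorum for master-eligible nodes: floor(N / 2) + 1
nodes=3
quorum=$(( nodes / 2 + 1 ))
echo "quorum for $nodes master-eligible nodes: $quorum"
```

With 3 masters the quorum is 2, so the cluster stays writable after losing one master-eligible node.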
#### 4. Create a Service to Expose the ES Cluster

```yaml
[root@xingdiancloud-master ~]# vim elasticsearch-svc.yaml
kind: Service
apiVersion: v1
metadata:
  name: elasticsearch
  namespace: logging
  labels:
    app: elasticsearch
spec:
  selector:
    app: elasticsearch
  type: NodePort
  ports:
    - port: 9200
      targetPort: 9200
      nodePort: 30010
      name: rest
    - port: 9300
      name: inter-node
```
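The StatefulSet's `serviceName` and `discovery.seed_hosts` both point at `elasticsearch`, and per-pod DNS records such as `es-0.elasticsearch.logging.svc` only exist when the governing service is headless (`clusterIP: None`). A common layout is a headless service for discovery plus a separate NodePort service for external access; a sketch (the `elasticsearch-headless` name is an assumption, and if used, the StatefulSet's `serviceName` would have to match it):

```yaml
# Sketch: a headless governing service for pod-to-pod discovery. The split
# into headless + NodePort is an assumption, not part of the original manifests.
kind: Service
apiVersion: v1
metadata:
  name: elasticsearch-headless   # hypothetical; serviceName must then match
  namespace: logging
  labels:
    app: elasticsearch
spec:
  clusterIP: None
  selector:
    app: elasticsearch
  ports:
    - port: 9200
      name: rest
    - port: 9300
      name: inter-node
```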
#### 5. Access Test

Note:

- The cluster is accessed with the elasticVUE browser plugin.
- The cluster status is healthy.
- All cluster nodes are up.

![image-20240420172247845](https://diandiange.oss-cn-beijing.aliyuncs.com/image-20240420172247845.png)
## 3. Proxy and DNS Configuration

#### 1. Proxy Configuration

Note:

- Deployment of the proxy itself is omitted.
- Nginx is used as the reverse proxy here.
- Access control is user based; create the user and password yourself with htpasswd.

The configuration is as follows:
```shell
[root@proxy ~]# cat /etc/nginx/conf.d/elasticsearch.conf
server {
    listen 80;
    server_name es.xingdian.com;
    location / {
        auth_basic "xingdiancloud kibana";
        auth_basic_user_file /etc/nginx/pass;
        proxy_pass http://<address>:<port>;
    }
}
```
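The `auth_basic_user_file` expects htpasswd-format entries. One way to generate an entry, assuming `openssl` is available (`htpasswd` from apache2-utils/httpd-tools works equally well); the user name and password below are placeholders:

```shell
# Generate an htpasswd-style entry with openssl. The real file referenced by
# the config above is /etc/nginx/pass; we write a local example instead.
user=xingdian
hash=$(openssl passwd -apr1 secret123)   # apr1 = Apache's MD5-based scheme
echo "$user:$hash" > ./pass.example
grep -c "^$user:" ./pass.example
```

Nginx then prompts for these credentials before proxying the request to the ES NodePort.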
#### 2. DNS Configuration

Note:

- Deployment of the DNS server is omitted.

The zone file is as follows:
```shell
[root@www ~]# cat /var/named/xingdian.com.zone
$TTL 1D
@       IN SOA  @ rname.invalid. (
                                        0       ; serial
                                        1D      ; refresh
                                        1H      ; retry
                                        1W      ; expire
                                        3H )    ; minimum
        NS      @
        A       <DNS-server-address>
es      A       <proxy-server-address>
        AAAA    ::1
```
#### 3. Access Test