Deploying a k8s 1.5.2 cluster on CentOS 7.3.1611

Note: k8s 1.5.3 and 1.4.9 were released only about 12 hours before this was written; the installation procedure should be essentially the same.

References

Kubernetes: The Definitive Guide (2nd Edition)

http://jevic.blog.51cto.com/2183736/1881455

https://my.oschina.net/u/1791060/blog/830023

http://blog.csdn.net/lic95/article/details/55015284

https://coreos.com/etcd/docs/latest/clustering.html

The documents below are a simple, systematic test of the k8s 1.5.x series, covering cluster deployment, Pod creation, domain name resolution, the dashboard, monitoring, reverse proxying, storage, logging and two-way authentication; more advanced practice is not covered. The environment in this series is deployed from the binary ("green") packages, and applies to 1.5.2, 1.5.3, 1.5.4 and later versions; just remember to use the matching download URLs on GitHub.

k8s cluster installation and deployment

http://jerrymin.blog.51cto.com/3002256/1898243

k8s cluster RC, SVC, POD deployment

http://jerrymin.blog.51cto.com/3002256/1900260

Deployment of k8s cluster components kubernetes-dashboard and kube-dns

http://jerrymin.blog.51cto.com/3002256/1900508

K8s cluster monitoring component heapster deployment

http://jerrymin.blog.51cto.com/3002256/1904460

K8s cluster reverse proxy load balancing component deployment

http://jerrymin.blog.51cto.com/3002256/1904463

k8s cluster mount volume nfs

http://jerrymin.blog.51cto.com/3002256/1906778

k8s cluster mount volume glusterfs

http://jerrymin.blog.51cto.com/3002256/1907274

ELK architecture for k8s cluster log collection

http://jerrymin.blog.51cto.com/3002256/1907282

Architecture

k8s-master: installs etcd, kubernetes-server/client

k8s-node1: installs docker, kubernetes-node/client, flannel

k8s-node2: installs docker, kubernetes-node/client, flannel

One, the versions installed by YUM are as follows

A yum install on CentOS 7.3.1611 gives:

kubernetes-1.4.0-0.1.git87d9d8d.el7

Will install kubernetes-master, node, client and related dependencies

kubernetes-master-1.4.0-0.1.git87d9d8d.el7

Will generate three binary programs kube-apiserver kube-controller-manager kube-scheduler

kubernetes-node-1.4.0-0.1.git87d9d8d.el7

Will install many dependent packages including docker-1.12.5-14.el7.centos, will install kubelet kube-proxy

kubernetes-client-1.4.0-0.1.git87d9d8d.el7

Will generate a binary program kubectl

kubernetes-unit-test-1.4.0-0.1.git87d9d8d.el7

Will install many dependent packages including etcd-3.0.15-1.el7, golang, gcc, glibc, rsync, etc.

flannel-0.5.5-2.el7

Will generate a binary program flanneld

Two, this article installs the latest versions from the binary packages for testing

GitHub addresses:

etcd: https://github.com/coreos/etcd/releases

flannel: https://github.com/coreos/flannel/releases

kubernetes: https://github.com/kubernetes/kubernetes/releases

docker: https://docs.docker.com/engine/installation/linux/centos/

k8s 1.5.2

https://dl.k8s.io/v1.5.2/kubernetes-server-linux-amd64.tar.gz

Will generate 11 binary programs: hyperkube, kubectl, kubelet, kube-scheduler, kubeadm, kube-controller-manager, kube-discovery, kube-proxy, kube-apiserver, kube-dns, kubefed

https://dl.k8s.io/v1.5.2/kubernetes-client-linux-amd64.tar.gz

Will generate two binary programs: kubectl, kubefed

etcd 3.1.0

https://github.com/coreos/etcd/releases/download/v3.1.0/etcd-v3.1.0-linux-amd64.tar.gz

docker 1.13.1

https://get.docker.com/builds/Linux/x86_64/docker-1.13.1.tgz

flannel

https://github.com/coreos/flannel/releases/download/v0.7.0/flannel-v0.7.0-linux-amd64.tar.gz

Three, deployment environment

1. Preparation

1) Minimal system install, then yum update to bring it to the latest release, CentOS 7.3.1611

2) Set the hostname and the hosts file

[root@k8s-master ~]# cat /etc/hosts

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4

::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

172.17.3.20  k8s-master

172.17.3.7   k8s-node1

172.17.3.8   k8s-node2

3) Synchronize the clock

[root@k8s-master ~]# ntpdate ntp1.aliyun.com && hwclock -w

4) Turn off SELinux and the firewall

[root@k8s-master ~]# sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux

[root@k8s-master ~]# systemctl disable firewalld; systemctl stop firewalld

Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.

Removed symlink /etc/systemd/system/basic.target.wants/firewalld.service.

5) Restart the server

2. Master node deployment

1) Deploy etcd service (currently a single node)

[root@k8s-master ~]# tar zxvf etcd-v3.1.0-linux-amd64.tar.gz -C /usr/local/

[root@k8s-master ~]# mv /usr/local/etcd-v3.1.0-linux-amd64/ /usr/local/etcd

[root@k8s-master ~]# ln -s /usr/local/etcd/etcd /usr/local/bin/etcd

[root@k8s-master ~]# ln -s /usr/local/etcd/etcdctl /usr/local/bin/etcdctl

Set the systemd service file /usr/lib/systemd/system/etcd.service

[Unit]

Description=Etcd Server

After=network.target

[Service]

WorkingDirectory=/data/etcd/

EnvironmentFile=-/etc/etcd/etcd.conf

ExecStart=/usr/local/bin/etcd

Type=notify

Restart=on-failure

LimitNOFILE=65536

[Install]

WantedBy=multi-user.target

Here WorkingDirectory is the directory where etcd stores its data; it must be created before the etcd service is started.
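For example (using the /data/etcd path from the unit file above, which also matches ETCD_DATA_DIR below):

[root@k8s-master ~]# mkdir -p /data/etcd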

etcd single point default configuration

[root@k8s-master ~]# cat /etc/etcd/etcd.conf

ETCD_NAME=k8s1

ETCD_DATA_DIR="/data/etcd"

ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"

ETCD_ADVERTISE_CLIENT_URLS="http://0.0.0.0:2379"

etcd service start

[root@k8s-master ~]# systemctl daemon-reload

[root@k8s-master ~]# systemctl enable etcd.service

[root@k8s-master ~]# systemctl start etcd.service

etcd service check

[root@k8s-master ~]# etcdctl cluster-health

member 869f0c691c5458a3 is healthy: got healthy result from http://0.0.0.0:2379

cluster is healthy

[root@k8s-master ~]# etcdctl member list

869f0c691c5458a3: name=k8s1 peerURLs=http://172.17.3.20:2380 clientURLs=http://0.0.0.0:2379 isLeader=true
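As a quick read/write sanity check (a hypothetical test key, deleted again afterwards):

[root@k8s-master ~]# etcdctl set /test/message hello

[root@k8s-master ~]# etcdctl get /test/message

[root@k8s-master ~]# etcdctl rm /test/message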

2) Deploy kube-apiserver service

Install kube-apiserver

[root@k8s-master ~]# tar zxvf kubernetes-server-linux-amd64.tar.gz -C /usr/local/

[root@k8s-master ~]# ln -s /usr/local/kubernetes/server/bin/kube-apiserver /usr/local/bin/kube-apiserver

While we are at it, create soft links for the other server binaries as well


[root@k8s-master ~]# ln -s /usr/local/kubernetes/server/bin/hyperkube /usr/local/bin/hyperkube

[root@k8s-master ~]# ln -s /usr/local/kubernetes/server/bin/kubeadm /usr/local/bin/kubeadm

[root@k8s-master ~]# ln -s /usr/local/kubernetes/server/bin/kube-controller-manager /usr/local/bin/kube-controller-manager

[root@k8s-master ~]# ln -s /usr/local/kubernetes/server/bin/kubectl /usr/local/bin/kubectl

[root@k8s-master ~]# ln -s /usr/local/kubernetes/server/bin/kube-discovery /usr/local/bin/kube-discovery

[root@k8s-master ~]# ln -s /usr/local/kubernetes/server/bin/kube-dns /usr/local/bin/kube-dns

[root@k8s-master ~]# ln -s /usr/local/kubernetes/server/bin/kubefed /usr/local/bin/kubefed

[root@k8s-master ~]# ln -s /usr/local/kubernetes/server/bin/kubelet /usr/local/bin/kubelet

[root@k8s-master ~]# ln -s /usr/local/kubernetes/server/bin/kube-proxy /usr/local/bin/kube-proxy

[root@k8s-master ~]# ln -s /usr/local/kubernetes/server/bin/kube-scheduler /usr/local/bin/kube-scheduler

Configure kubernetes system config

[root@k8s-master ~]# cat /etc/kubernetes/config

KUBE_LOGTOSTDERR="--logtostderr=false"

KUBE_LOG_DIR="--log-dir=/data/logs/kubernetes"

KUBE_LOG_LEVEL="--v=2"

KUBE_ALLOW_PRIV="--allow-privileged=false"

KUBE_MASTER="--master=http://172.17.3.20:8080"
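Since --logtostderr is set to false here, logs go to the --log-dir directory, which should exist before the services are started (for example, using the path from the config above):

[root@k8s-master ~]# mkdir -p /data/logs/kubernetes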

Set the systemd service file /usr/lib/systemd/system/kube-apiserver.service

[Unit]

Description=Kubernetes API Server

Documentation=https://github.com/GoogleCloudPlatform/kubernetes

After=network.target

After=etcd.service

[Service]

EnvironmentFile=-/etc/kubernetes/config

EnvironmentFile=-/etc/kubernetes/apiserver

ExecStart=/usr/local/bin/kube-apiserver \

$KUBE_LOGTOSTDERR \

$KUBE_LOG_LEVEL \

$KUBE_ETCD_SERVERS \

$KUBE_API_ADDRESS \

$KUBE_API_PORT \

$KUBELET_PORT \

$KUBE_ALLOW_PRIV \

$KUBE_SERVICE_ADDRESSES \

$KUBE_ADMISSION_CONTROL \

$KUBE_API_ARGS

Restart=on-failure

Type=notify

LimitNOFILE=65536

[Install]

WantedBy=multi-user.target

Configure kube-apiserver startup parameters

[root@k8s-master ~]# cat /etc/kubernetes/apiserver

KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"

KUBE_ETCD_SERVERS="--etcd-servers=http://127.0.0.1:2379"

KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"

KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"

KUBE_API_ARGS=" "

Start the kube-apiserver service

[root@k8s-master ~]# systemctl daemon-reload

[root@k8s-master ~]# systemctl enable kube-apiserver.service

Created symlink from /etc/systemd/system/multi-user.target.wants/kube-apiserver.service to /usr/lib/systemd/system/kube-apiserver.service.

[root@k8s-master ~]# systemctl start kube-apiserver.service

Verify the service

http://172.17.3.20:8080/
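For a quick command-line check, the API server should also respond on its insecure port, for example:

[root@k8s-master ~]# curl http://172.17.3.20:8080/version

[root@k8s-master ~]# curl http://172.17.3.20:8080/healthz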

3) Deploy kube-controller-manager service

Set the systemd service file /usr/lib/systemd/system/kube-controller-manager.service

[Unit]

Description=Kubernetes Controller Manager

Documentation=https://github.com/GoogleCloudPlatform/kubernetes

After=kube-apiserver.service

Requires=kube-apiserver.service

[Service]

EnvironmentFile=-/etc/kubernetes/config

EnvironmentFile=-/etc/kubernetes/controller-manager

ExecStart=/usr/local/bin/kube-controller-manager \

$KUBE_LOGTOSTDERR \

$KUBE_LOG_LEVEL \

$KUBE_LOG_DIR \

$KUBE_MASTER \

$KUBE_CONTROLLER_MANAGER_ARGS

Restart=on-failure

LimitNOFILE=65536

[Install]

WantedBy=multi-user.target

Configure kube-controller-manager startup parameters

[root@k8s-master ~]# cat /etc/kubernetes/controller-manager

KUBE_CONTROLLER_MANAGER_ARGS=""

Start the kube-controller-manager service

[root@k8s-master ~]# systemctl daemon-reload

[root@k8s-master ~]# systemctl enable kube-controller-manager

Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /usr/lib/systemd/system/kube-controller-manager.service.

[root@k8s-master ~]# systemctl start kube-controller-manager

4) Deploy kube-scheduler service

Set the systemd service file /usr/lib/systemd/system/kube-scheduler.service

[Unit]

Description=Kubernetes Scheduler Plugin

Documentation=https://github.com/GoogleCloudPlatform/kubernetes

After=kube-apiserver.service

Requires=kube-apiserver.service

[Service]

EnvironmentFile=-/etc/kubernetes/config

EnvironmentFile=-/etc/kubernetes/scheduler

ExecStart=/usr/local/bin/kube-scheduler \

$KUBE_LOGTOSTDERR \

$KUBE_LOG_LEVEL \

$KUBE_LOG_DIR \

$KUBE_MASTER \

$KUBE_SCHEDULER_ARGS

Restart=on-failure

LimitNOFILE=65536

[Install]

WantedBy=multi-user.target

Configure kube-scheduler startup parameters

[root@k8s-master ~]# cat /etc/kubernetes/scheduler

KUBE_SCHEDULER_ARGS=""

Start kube-scheduler service

[root@k8s-master ~]# systemctl daemon-reload

[root@k8s-master ~]# systemctl enable kube-scheduler

Created symlink from /etc/systemd/system/multi-user.target.wants/kube-scheduler.service to /usr/lib/systemd/system/kube-scheduler.service.

[root@k8s-master ~]# systemctl start kube-scheduler
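At this point all three master components are running. One quick way to confirm this (run on the master, where kubectl talks to the local API server by default) is:

[root@k8s-master ~]# kubectl get componentstatuses

scheduler, controller-manager and etcd-0 should all report Healthy.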

3. Node deployment

1) Install docker (or yum install docker)

[root@k8s-node1 ~]# tar zxvf docker-1.13.1.tgz -C /usr/local

Here docker is simply installed and started up front, which makes the later tests easier

[root@k8s-node1 ~]# systemctl start docker.service

2) Install kubernetes client

Install kubelet and kube-proxy

[root@k8s-node1 ~]# tar zxvf kubernetes-client-linux-amd64.tar.gz -C /usr/local/

[root@k8s-node1 ~]# ln -s /usr/local/kubernetes/client/bin/kubectl /usr/local/bin/kubectl

[root@k8s-node1 ~]# ln -s /usr/local/kubernetes/client/bin/kubefed /usr/local/bin/kubefed

The client package does not include kubelet or kube-proxy by default; copy them over from the server package (for example as below) and then link them as well
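One possible way to copy them (assuming ssh access to the master, where the server tarball was unpacked as shown earlier):

[root@k8s-node1 ~]# scp k8s-master:/usr/local/kubernetes/server/bin/kubelet /usr/local/kubernetes/client/bin/

[root@k8s-node1 ~]# scp k8s-master:/usr/local/kubernetes/server/bin/kube-proxy /usr/local/kubernetes/client/bin/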

[root@k8s-node1 ~]# ln -s /usr/local/kubernetes/client/bin/kube-proxy /usr/local/bin/kube-proxy

[root@k8s-node1 ~]# ln -s /usr/local/kubernetes/client/bin/kubelet /usr/local/bin/kubelet

3) Deploy kubelet service

Configure kubernetes system config

[root@k8s-node1 ~]# cat /etc/kubernetes/config

KUBE_LOGTOSTDERR="--logtostderr=false"

KUBE_LOG_DIR="--log-dir=/data/logs/kubernetes"

KUBE_LOG_LEVEL="--v=2"

KUBE_ALLOW_PRIV="--allow-privileged=false"

KUBE_MASTER="--master=http://172.17.3.20:8080"

Set the systemd service file /usr/lib/systemd/system/kubelet.service

[Unit]

Description=Kubernetes Kubelet Server

Documentation=https://github.com/GoogleCloudPlatform/kubernetes

After=docker.service

Requires=docker.service

[Service]

WorkingDirectory=/data/kubelet

EnvironmentFile=-/etc/kubernetes/config

EnvironmentFile=-/etc/kubernetes/kubelet

ExecStart=/usr/local/bin/kubelet \

$KUBE_LOGTOSTDERR \

$KUBE_LOG_LEVEL \

$KUBE_LOG_DIR \

$KUBELET_API_SERVER \

$KUBELET_ADDRESS \

$KUBELET_PORT \

$KUBELET_HOSTNAME \

$KUBE_ALLOW_PRIV \

$KUBELET_POD_INFRA_CONTAINER \

$KUBELET_ARGS

Restart=on-failure

[Install]

WantedBy=multi-user.target
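As with etcd, the WorkingDirectory above and the log directory from /etc/kubernetes/config need to exist before the service is started, for example:

[root@k8s-node1 ~]# mkdir -p /data/kubelet /data/logs/kubernetes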

Configure kubelet startup parameters

[root@k8s-node1 ~]# cat /etc/kubernetes/kubelet

KUBELET_ADDRESS="--address=0.0.0.0"

KUBELET_PORT="--port=10250"

KUBELET_HOSTNAME="--hostname-override=k8s-node1"

KUBELET_API_SERVER="--api-servers=http://172.17.3.20:8080"

KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"

KUBELET_ARGS=""

Start kubelet service

[root@k8s-node1 ~]# systemctl daemon-reload

[root@k8s-node1 ~]# systemctl enable kubelet.service

Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.

[root@k8s-node1 ~]# systemctl start kubelet.service

4) Deploy kube-proxy service

Set the systemd service file /usr/lib/systemd/system/kube-proxy.service

[Unit]

Description=Kubernetes Kube-Proxy Server

Documentation=https://github.com/GoogleCloudPlatform/kubernetes

After=network.target

[Service]

EnvironmentFile=-/etc/kubernetes/config

EnvironmentFile=-/etc/kubernetes/proxy

ExecStart=/usr/local/bin/kube-proxy \

$KUBE_LOGTOSTDERR \

$KUBE_LOG_LEVEL \

$KUBE_LOG_DIR \

$KUBE_MASTER \

$KUBE_PROXY_ARGS

Restart=on-failure

LimitNOFILE=65536

[Install]

WantedBy=multi-user.target

Configure kube-proxy startup parameters

[root@k8s-node1 ~]# cat /etc/kubernetes/proxy

KUBE_PROXY_ARGS=""

Start the kube-proxy service

[root@k8s-node1 ~]# systemctl daemon-reload

[root@k8s-node1 ~]# systemctl enable kube-proxy.service

Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.

[root@k8s-node1 ~]# systemctl start kube-proxy.service

Verify that the node is up

[root@k8s-node1 ~]# kubectl get nodes

NAME        STATUS    AGE

k8s-node1   Ready     9m

4. Configure the network

1) Configure etcd

[root@k8s-master ~]# etcdctl set /k8s/network/config '{ "Network": "10.1.0.0/16" }'

{ "Network": "10.1.0.0/16" }

[root@k8s-master ~]# etcdctl get /k8s/network/config

{ "Network": "10.1.0.0/16" }

2) Install flannel
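The target directory has to exist before extracting, since tar -C does not create it:

[root@k8s-node1 ~]# mkdir -p /usr/local/flannel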

[root@k8s-node1 ~]# tar zxvf flannel-v0.7.0-linux-amd64.tar.gz -C /usr/local/flannel

[root@k8s-node1 ~]# ln -s /usr/local/flannel/flanneld /usr/local/bin/flanneld

[root@k8s-node1 ~]# ln -s /usr/local/flannel/mk-docker-opts.sh /usr/local/bin/mk-docker-opts.sh

3) Configure flannel (this is the most fiddly part; the config file and start script below follow the ones generated by a yum install)

Set the systemd service file /usr/lib/systemd/system/flanneld.service

[Unit]

Description=Flanneld overlay address etcd agent

After=network.target

After=network-online.target

Wants=network-online.target

After=etcd.service

Before=docker.service

[Service]

Type=notify

EnvironmentFile=/etc/sysconfig/flanneld

EnvironmentFile=-/etc/sysconfig/docker-network

ExecStart=/usr/local/bin/flanneld-start $FLANNEL_OPTIONS

ExecStartPost=/usr/local/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker

Restart=on-failure

[Install]

WantedBy=multi-user.target

RequiredBy=docker.service

Where flanneld-start is a small wrapper (modeled on the one shipped by the yum package) that passes the etcd endpoint and prefix from the config file through to flanneld:

[root@k8s-node1 ~]# cat /usr/local/bin/flanneld-start

#!/bin/sh

exec /usr/local/bin/flanneld \

-etcd-endpoints=${FLANNEL_ETCD_ENDPOINTS} \

-etcd-prefix=${FLANNEL_ETCD_PREFIX} \

"$@"

Edit the flannel config file and set the etcd information

[root@k8s-node1 ~]# cat /etc/sysconfig/flanneld

FLANNEL_ETCD_ENDPOINTS="http://172.17.3.20:2379"

FLANNEL_ETCD_PREFIX="/k8s/network"

4) Start flannel

Note that docker must be stopped before flannel is started, so that docker0 can later be reconfigured onto flannel's subnet

[root@k8s-node1 ~]# systemctl daemon-reload

[root@k8s-node1 ~]# systemctl enable flanneld.service

[root@k8s-node1 ~]# systemctl stop docker.service

[root@k8s-node1 ~]# systemctl start flanneld.service

After flanneld starts, it carves out a subnet for this node from the network configured in etcd. That subnet is meant to be used by docker, so the remaining work is essentially getting a few important variables through to the docker daemon.

Note: to make these variables take effect before docker starts, source /run/flannel/docker and /run/flannel/subnet.env

[root@k8s-node1 ~]# cat /run/flannel/docker

DOCKER_OPT_BIP="--bip=10.1.89.1/24"

DOCKER_OPT_IPMASQ="--ip-masq=true"

DOCKER_OPT_MTU="--mtu=1472"

DOCKER_NETWORK_OPTIONS=" --bip=10.1.89.1/24 --ip-masq=true --mtu=1472"

[root@k8s-node1 ~]# cat /run/flannel/subnet.env

FLANNEL_NETWORK=10.1.0.0/16

FLANNEL_SUBNET=10.1.89.1/24

FLANNEL_MTU=1472

FLANNEL_IPMASQ=false

Make sure docker is started with --bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU}, so that docker0 ends up inside flannel0's subnet. These startup options come from the line ExecStartPost=/usr/local/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker in the flanneld unit above.
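One way to wire this in, sketched here on the assumption that docker runs as a systemd service whose ExecStart line already references $DOCKER_NETWORK_OPTIONS (the yum-packaged docker unit on CentOS does; if yours does not, append the variable to its ExecStart as well), is a drop-in that loads the file written by mk-docker-opts.sh:

[root@k8s-node1 ~]# mkdir -p /etc/systemd/system/docker.service.d

[root@k8s-node1 ~]# cat /etc/systemd/system/docker.service.d/flannel.conf

[Service]

EnvironmentFile=-/run/flannel/docker

[root@k8s-node1 ~]# systemctl daemon-reload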

Finally start docker

[root@k8s-node1 ~]# systemctl start docker.service

5) Confirm the result

Once everything is up, confirm that the IP address of the docker0 interface falls inside flannel0's subnet

After the network comes up, node1 and node2 gain a number of new route entries, and the firewall is switched back on automatically. Although we disabled it earlier, it now contains rules that let the docker0 networks of the nodes reach each other directly, so each node can reach the containers on the other nodes via physical NIC -> flannel0 -> docker0
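The new routes can be inspected with, for example:

[root@k8s-node1 ~]# ip route

On node1 this should show, among others, a route for the whole flannel network (10.1.0.0/16) via flannel0 and one for the local subnet (10.1.89.0/24) via docker0.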

[root@k8s-node1 ~]# ip addr

6: flannel0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1472 qdisc pfifo_fast state UNKNOWN qlen 500

link/none

inet 10.1.89.0/16 scope global flannel0

valid_lft forever preferred_lft forever

7: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN

link/ether 02:42:f1:e4:7c:a3 brd ff:ff:ff:ff:ff:ff

inet 10.1.89.1/24 scope global docker0

valid_lft forever preferred_lft forever

[root@k8s-node2 ~]# ip addr

6: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN

link/ether 02:42:33:a8:38:21 brd ff:ff:ff:ff:ff:ff

inet 10.1.8.1/24 scope global docker0

valid_lft forever preferred_lft forever

7: flannel0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1472 qdisc pfifo_fast state UNKNOWN qlen 500

link/none

inet 10.1.8.0/16 scope global flannel0

valid_lft forever preferred_lft forever

Pinging node2's docker0 from node1 should succeed

[root@k8s-node1 ~]# ping 10.1.8.1

PING 10.1.8.1 (10.1.8.1) 56(84) bytes of data.

64 bytes from 10.1.8.1: icmp_seq=1 ttl=62 time=0.498 ms

64 bytes from 10.1.8.1: icmp_seq=2 ttl=62 time=0.463 ms
