CentOS7 deploys k8s cluster

Environment introduction and preparation##

The operating system is CentOS 7.2 (64-bit); the details are as follows:

[root@k8s-master ~]# uname -a
Linux k8s-master 3.10.0-327.el7.x86_64 #1 SMP Thu Nov 19 22:10:57 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
[root@k8s-master ~]# cat /etc/redhat-release
CentOS Linux release 7.2.1511 (Core)

Host Information###

This article prepares three machines as the k8s deployment environment. The details are as follows:

Role                    Hostname      IP
master, etcd, registry  k8s-master    10.211.55.6
node1                   k8s-node-1    10.211.55.7
node2                   k8s-node-2    10.211.55.8

Set the hostnames of the three machines. Execute on the master:

[root@localhost ~]# hostnamectl --static set-hostname k8s-master

Execute on node1:

[root@localhost ~]# hostnamectl --static set-hostname k8s-node-1

Execute on node2:

[root@localhost ~]# hostnamectl --static set-hostname k8s-node-2

To set up the hosts file on all three machines, execute the following command:

echo '10.211.55.6 k8s-master
10.211.55.6 etcd
10.211.55.6 registry
10.211.55.7 k8s-node-1
10.211.55.8 k8s-node-2' >> /etc/hosts
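If the setup is re-run, the echo above appends duplicate lines. A minimal idempotent sketch: HOSTS_FILE is an assumption introduced here so the script can be dry-run safely; set it to /etc/hosts on the real machines.

```shell
# Append each cluster entry only if the hostname is not already present.
# HOSTS_FILE defaults to a temporary file for a safe dry run; set
# HOSTS_FILE=/etc/hosts when running on the actual machines.
HOSTS_FILE="${HOSTS_FILE:-$(mktemp)}"
while read -r ip name; do
  grep -q " $name\$" "$HOSTS_FILE" || echo "$ip $name" >> "$HOSTS_FILE"
done <<'EOF'
10.211.55.6 k8s-master
10.211.55.6 etcd
10.211.55.6 registry
10.211.55.7 k8s-node-1
10.211.55.8 k8s-node-2
EOF
```

Running it a second time adds nothing, because each hostname already matches the grep.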

Turn off the firewall on the three machines###

systemctl disable firewalld.service
systemctl stop firewalld.service

Deploy etcd

Kubernetes depends on etcd to run, so etcd must be deployed first. This article installs it with yum:

# yum install -y etcd

The default configuration file of etcd installed by yum is in /etc/etcd/etcd.conf. Edit the configuration file and change the following information:

ETCD_NAME=master
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379,http://0.0.0.0:4001"
ETCD_ADVERTISE_CLIENT_URLS="http://etcd:2379,http://etcd:4001"
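The three edits above can also be scripted with sed. A sketch under assumptions: ETCD_CONF is a stand-in path (point it at /etc/etcd/etcd.conf on the real host), and the heredoc writes an abridged stub of the stock defaults so the sketch can be dry-run.

```shell
# ETCD_CONF stands in for /etc/etcd/etcd.conf so the sketch can be
# dry-run on a copy; the heredoc writes a stub of the stock defaults.
ETCD_CONF="${ETCD_CONF:-$(mktemp)}"
cat > "$ETCD_CONF" <<'EOF'
ETCD_NAME=default
ETCD_LISTEN_CLIENT_URLS="http://localhost:2379"
ETCD_ADVERTISE_CLIENT_URLS="http://localhost:2379"
EOF
# Replace the three settings in place, whatever their current values.
sed -i \
  -e 's|^ETCD_NAME=.*|ETCD_NAME=master|' \
  -e 's|^ETCD_LISTEN_CLIENT_URLS=.*|ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379,http://0.0.0.0:4001"|' \
  -e 's|^ETCD_ADVERTISE_CLIENT_URLS=.*|ETCD_ADVERTISE_CLIENT_URLS="http://etcd:2379,http://etcd:4001"|' \
  "$ETCD_CONF"
```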

Start and verify status

# systemctl start etcd
# systemctl enable etcd
# etcdctl set testdir/testkey0 0
0
# etcdctl get testdir/testkey0
0
# etcdctl -C http://etcd:4001 cluster-health
member 8e9e05c52164694d is healthy: got healthy result from http://0.0.0.0:2379
cluster is healthy
# etcdctl -C http://etcd:2379 cluster-health
member 8e9e05c52164694d is healthy: got healthy result from http://0.0.0.0:2379
cluster is healthy

Extension: for etcd cluster deployment, see http://www.cnblogs.com/zhenyuyaodidiao/p/6237019.html

Deploy master

Install Docker

[root@k8s-master ~]# yum install -y docker

Configure the Docker configuration file####

Allow Docker to pull images from the local registry by adding the line OPTIONS='--insecure-registry registry:5000':

[root@k8s-master ~]# vim /etc/sysconfig/docker

# /etc/sysconfig/docker

# Modify these options if you want to change the way the docker daemon runs
OPTIONS='--selinux-enabled --log-driver=journald --signature-verification=false'
if [ -z "${DOCKER_CERT_PATH}" ]; then
    DOCKER_CERT_PATH=/etc/docker
fi

OPTIONS='--insecure-registry registry:5000'
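Note that since the file is sourced as shell, a second OPTIONS= line silently replaces the first, dropping the original flags. A sketch that appends to the existing OPTIONS instead: DOCKER_SYSCONFIG is a stand-in introduced here for /etc/sysconfig/docker, stubbed to a temp file so this can be dry-run.

```shell
# DOCKER_SYSCONFIG stands in for /etc/sysconfig/docker; the heredoc
# writes a stub of the stock OPTIONS line for a safe dry run.
DOCKER_SYSCONFIG="${DOCKER_SYSCONFIG:-$(mktemp)}"
cat > "$DOCKER_SYSCONFIG" <<'EOF'
OPTIONS='--selinux-enabled --log-driver=journald --signature-verification=false'
EOF
# Append the registry flag to the existing OPTIONS instead of redefining
# it, so the original flags survive when the file is sourced.
grep -q 'insecure-registry' "$DOCKER_SYSCONFIG" || \
  echo 'OPTIONS="$OPTIONS --insecure-registry registry:5000"' >> "$DOCKER_SYSCONFIG"
# Source the file the way the init scripts do and show the merged result.
. "$DOCKER_SYSCONFIG"
echo "$OPTIONS"
```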

Set up and use Alibaba Cloud's docker accelerator####

cp -n /lib/systemd/system/docker.service /etc/systemd/system/docker.service
sed -i "s|ExecStart=/usr/bin/dockerd-current|ExecStart=/usr/bin/dockerd-current --registry-mirror=<your accelerator address>|g" /etc/systemd/system/docker.service
systemctl daemon-reload
systemctl restart docker.service

Set boot-up and start service

# systemctl enable docker.service
# systemctl restart docker.service

Install kubernetes

[root@k8s-master ~]# yum install -y kubernetes

Build and run the registry

docker pull registry:2

# Map the registry's data volume to a local directory for easy management and backup of registry data

docker run -d -p 5000:5000 --name registry -v /data/registry:/var/lib/registry registry:2

Configure and start kubernetes

The following components need to run on the kubernetes master: kube-apiserver, kube-controller-manager, and kube-scheduler.

Change the corresponding entries in the configurations below:

Modify /etc/kubernetes/apiserver

KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
KUBE_API_PORT="--port=8080"
KUBE_ETCD_SERVERS="--etcd-servers=http://etcd:2379"
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"
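These apiserver edits can likewise be scripted. A sketch under assumptions: APISERVER_CONF is a stand-in for /etc/kubernetes/apiserver, and the heredoc writes an abridged stub of the file shipped by the kubernetes package (in which KUBE_API_PORT is commented out, hence the append).

```shell
# APISERVER_CONF stands in for /etc/kubernetes/apiserver; the heredoc
# writes an abridged stub of the stock file for a safe dry run.
APISERVER_CONF="${APISERVER_CONF:-$(mktemp)}"
cat > "$APISERVER_CONF" <<'EOF'
KUBE_API_ADDRESS="--insecure-bind-address=127.0.0.1"
KUBE_ETCD_SERVERS="--etcd-servers=http://127.0.0.1:2379"
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"
EOF
# Rewrite the existing settings in place.
sed -i \
  -e 's|^KUBE_API_ADDRESS=.*|KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"|' \
  -e 's|^KUBE_ETCD_SERVERS=.*|KUBE_ETCD_SERVERS="--etcd-servers=http://etcd:2379"|' \
  -e 's|^KUBE_ADMISSION_CONTROL=.*|KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"|' \
  "$APISERVER_CONF"
# KUBE_API_PORT is commented out in the stock file; append it if missing.
grep -q '^KUBE_API_PORT=' "$APISERVER_CONF" || \
  echo 'KUBE_API_PORT="--port=8080"' >> "$APISERVER_CONF"
```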

Modify /etc/kubernetes/config

KUBE_MASTER="--master=http://k8s-master:8080"

Start the service and set the boot auto-start

# systemctl enable kube-apiserver.service
# systemctl start kube-apiserver.service
# systemctl enable kube-controller-manager.service
# systemctl start kube-controller-manager.service
# systemctl enable kube-scheduler.service
# systemctl start kube-scheduler.service

Deploy node

Install docker

See the Docker installation steps for the master.

Install kubernetes

See the kubernetes installation steps for the master.

Configure and start kubernetes

The following components need to run on the kubernetes nodes: kubelet and kube-proxy.

Change the corresponding entries in the configurations below:

Modify /etc/kubernetes/config

KUBE_MASTER="--master=http://k8s-master:8080"

Modify /etc/kubernetes/kubelet

KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_HOSTNAME="--hostname-override=k8s-node-1"   # on the second node, use k8s-node-2
KUBELET_API_SERVER="--api-servers=http://k8s-master:8080"
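Since only the hostname-override differs between the two nodes, the kubelet file can be generated per node. A sketch under assumptions: NODE_NAME is exported as k8s-node-1 or k8s-node-2 before running, and KUBELET_CONF is a stand-in for /etc/kubernetes/kubelet.

```shell
# NODE_NAME selects which node this file is for; defaults to k8s-node-1.
NODE_NAME="${NODE_NAME:-k8s-node-1}"
# KUBELET_CONF stands in for /etc/kubernetes/kubelet; it defaults to a
# temp file so the sketch can be dry-run.
KUBELET_CONF="${KUBELET_CONF:-$(mktemp)}"
# Unquoted heredoc so ${NODE_NAME} is expanded into the file.
cat > "$KUBELET_CONF" <<EOF
KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_HOSTNAME="--hostname-override=${NODE_NAME}"
KUBELET_API_SERVER="--api-servers=http://k8s-master:8080"
EOF
```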

Start the service and set the boot auto-start

systemctl enable kubelet.service

systemctl start kubelet.service

systemctl enable kube-proxy.service

systemctl start kube-proxy.service

View status

On the master, view the nodes in the cluster and their status:

# kubectl -s http://k8s-master:8080 get node
NAME         STATUS    AGE
k8s-node-1   Ready     3m
k8s-node-2   Ready     16s
# kubectl get nodes
NAME         STATUS    AGE
k8s-node-1   Ready     3m
k8s-node-2   Ready     43s

So far, a kubernetes cluster has been built.

Create the overlay network: Flannel

Install Flannel

Execute the following command on both the master and the nodes to install it:

# yum install -y flannel

Configure Flannel

Edit /etc/sysconfig/flanneld on both the master and the nodes, and modify the following configuration:

# etcd url location.  Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS="http://etcd:2379"

Configure the flannel key in etcd###

Flannel reads its configuration from etcd, which keeps multiple Flannel instances consistent, so the following key must be set in etcd:

# etcdctl mk /atomic.io/network/config '{ "Network": "10.0.0.0/16" }'
{"Network":"10.0.0.0/16"}

Start up###

After starting Flannel, restart docker and the kubernetes services in turn. Execute on the master:

systemctl enable flanneld.service 
systemctl start flanneld.service 
service docker restart
systemctl restart kube-apiserver.service
systemctl restart kube-controller-manager.service
systemctl restart kube-scheduler.service

Execute on the nodes:

systemctl enable flanneld.service

systemctl start flanneld.service

service docker restart

systemctl restart kubelet.service

systemctl restart kube-proxy.service

Flannel Network###

Flannel is considered the simplest network for k8s; the Flannel article in the references below can help you understand how the Flannel network works.

Test##

# docker pull nginx                     # Pull an nginx image from the public registry
# docker tag nginx registry:5000/nginx  # Tag the image for the local registry
# docker push registry:5000/nginx       # Push it to the local registry
# docker rmi registry:5000/nginx        # Delete the local copy of the image

cat << EOF >nginx.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: registry:5000/nginx
        ports:
        - containerPort: 80
        resources:
          requests:
            cpu: 400m
EOF
# kubectl create -f nginx.yaml    # Create the nginx deployment

cat << EOF >nginx-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
  labels:
    app: nginx-svc
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30088
EOF
# kubectl create -f nginx-svc.yaml   # Create the nginx-svc service
# kubectl describe service nginx-svc
Name:              nginx-svc
Namespace:         default
Labels:            app=nginx-svc
Selector:          app=nginx
Type:              NodePort
IP:                10.254.53.185
Port:              <unset>  80/TCP
NodePort:          <unset>  30088/TCP
Endpoints:         10.0.19.2:80,10.0.4.2:80
Session Affinity:  None
No events.
# curl http://k8s-node-1:30088/      # Test the nginx service through the NodePort

Two problems were encountered during the test:

  1. The pods stayed in the ContainerCreating state. Installing the rhsm-related packages, as suggested in a referenced article, solved it.
  2. The spec.selector.app in nginx-svc.yaml was inconsistent with spec.template.metadata.labels.app in nginx.yaml, which made the service unreachable through the NodePort.
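Problem 2 can be caught before applying anything by comparing the Deployment's pod label with the Service's selector. A sketch: label_of is a hypothetical helper that prints the value of the first "app:" line following a given key, and it assumes the nginx.yaml and nginx-svc.yaml files from the test section above exist in the current directory.

```shell
# label_of <key> <file>: print the value of the first "app:" line that
# appears after the line whose first field is "<key>:".
label_of() {
  awk -v key="$1:" '$1 == key { hit = 1 } hit && $1 == "app:" { print $2; exit }' "$2"
}
# Compare the pod template label with the service selector.
if [ -f nginx.yaml ] && [ -f nginx-svc.yaml ]; then
  dep=$(label_of template nginx.yaml)
  svc=$(label_of selector nginx-svc.yaml)
  [ "$dep" = "$svc" ] && echo "selector matches pod label ($dep)" \
                      || echo "MISMATCH: pod label=$dep selector=$svc"
fi
```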

References##

  1. http://qinghua.github.io/kubernetes-deployment/
  2. http://wdxtub.com/2017/06/05/k8s-note/
  3. https://jimmysong.io/kubernetes-handbook/guide/accessing-kubernetes-pods-from-outside-of-the-cluster.html
  4. http://tonybai.com/2017/01/17/understanding-flannel-network-for-kubernetes/
  5. http://www.cnblogs.com/puroc/p/6297851.html
  6. https://kubernetes.io/docs/tasks/run-application/run-stateless-application-deployment/
