The operating system is CentOS 7.2 64-bit; the details are as follows:
[root@k8s-master ~]# uname -a
Linux k8s-master 3.10.0-327.el7.x86_64 #1 SMP Thu Nov 19 22:10:57 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
[root@k8s-master ~]# cat /etc/redhat-release
CentOS Linux release 7.2.1511 (Core)
This article uses three machines to deploy the k8s runtime environment. The details are as follows:
| Node and role | Hostname | IP |
|---|---|---|
| master, etcd, registry | k8s-master | 10.211.55.6 |
| node1 | k8s-node-1 | 10.211.55.7 |
| node2 | k8s-node-2 | 10.211.55.8 |
Set the host names of the three machines: Execute on master:
[root@localhost ~]# hostnamectl --static set-hostname k8s-master
Execute on node1:
[root@localhost ~]# hostnamectl --static set-hostname k8s-node-1
Execute on node2:
[root@localhost ~]# hostnamectl --static set-hostname k8s-node-2
To set up hosts on the three machines, execute the following commands:
echo '10.211.55.6 k8s-master
10.211.55.6 etcd
10.211.55.6 registry
10.211.55.7 k8s-node-1
10.211.55.8 k8s-node-2' >> /etc/hosts
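To confirm the new entries resolve, run a quick check on any of the machines:
# getent hosts etcd k8s-node-1 k8s-node-2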
Disable the firewall on all three machines:
systemctl disable firewalld.service
systemctl stop firewalld.service
K8s depends on etcd to run, so etcd needs to be deployed first. This article installs it with yum:
# yum install -y etcd
The default configuration file of etcd installed by yum is /etc/etcd/etcd.conf. Edit it and change the following items:
ETCD_NAME=master
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379,http://0.0.0.0:4001"
ETCD_ADVERTISE_CLIENT_URLS="http://etcd:2379,http://etcd:4001"
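As a minimal sketch, the same edits can be applied non-interactively with sed, assuming the default keys are present (uncommented) in /etc/etcd/etcd.conf:
sed -i 's|^ETCD_NAME=.*|ETCD_NAME=master|' /etc/etcd/etcd.conf
sed -i 's|^ETCD_LISTEN_CLIENT_URLS=.*|ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379,http://0.0.0.0:4001"|' /etc/etcd/etcd.conf
sed -i 's|^ETCD_ADVERTISE_CLIENT_URLS=.*|ETCD_ADVERTISE_CLIENT_URLS="http://etcd:2379,http://etcd:4001"|' /etc/etcd/etcd.conf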
Start and verify status
# systemctl start etcd
# systemctl enable etcd
# etcdctl set testdir/testkey0 0
0
# etcdctl get testdir/testkey0
0
# etcdctl -C http://etcd:4001 cluster-health
member 8e9e05c52164694d is healthy: got healthy result from http://0.0.0.0:2379
cluster is healthy
# etcdctl -C http://etcd:2379 cluster-health
member 8e9e05c52164694d is healthy: got healthy result from http://0.0.0.0:2379
cluster is healthy
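The test key can be cleaned up afterwards with the same etcd v2 syntax used above:
# etcdctl rm testdir/testkey0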
Extension: for etcd cluster deployment, see http://www.cnblogs.com/zhenyuyaodidiao/p/6237019.html
Install Docker on the master:
[root@k8s-master ~]# yum install -y docker
Configure Docker to allow pulling images from the private registry. Add the following line: OPTIONS='--insecure-registry registry:5000'
[root@k8s-master ~]# vim /etc/sysconfig/docker

# /etc/sysconfig/docker
# Modify these options if you want to change the way the docker daemon runs
OPTIONS='--selinux-enabled --log-driver=journald --signature-verification=false'
if [ -z "${DOCKER_CERT_PATH}" ]; then
    DOCKER_CERT_PATH=/etc/docker
fi
OPTIONS='--insecure-registry registry:5000'
Optionally, set a registry mirror (image accelerator) for faster pulls:
cp -n /lib/systemd/system/docker.service /etc/systemd/system/docker.service
sed -i "s|ExecStart=/usr/bin/dockerd-current|ExecStart=/usr/bin/dockerd-current --registry-mirror=<your accelerator address>|g" /etc/systemd/system/docker.service
systemctl daemon-reload
systemctl restart docker.service
# systemctl enable docker.service
# systemctl restart docker.service
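To confirm the daemon came back up with the new options (the fields printed by docker info vary by Docker version):
# systemctl status docker.service
# docker info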
Install Kubernetes on the master:
[root@k8s-master ~]# yum install -y kubernetes
Deploy a private registry on the master:
docker pull registry:2
# Map the registry's data volume to a local directory for easy management and backup of registry data
docker run -d -p 5000:5000 --name registry -v /data/registry:/var/lib/registry registry:2
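Once the container is up, the registry's v2 API should answer; a freshly created registry returns an empty repository list:
# curl http://registry:5000/v2/_catalog
{"repositories":[]}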
The following components need to run on the Kubernetes master: kube-apiserver, kube-controller-manager, and kube-scheduler.
Correspondingly, change the following options in these configuration files:
/etc/kubernetes/apiserver
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
KUBE_API_PORT="--port=8080"
KUBE_ETCD_SERVERS="--etcd-servers=http://etcd:2379"
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"
/etc/kubernetes/config
KUBE_MASTER="--master=http://k8s-master:8080"
# systemctl enable kube-apiserver.service
# systemctl start kube-apiserver.service
# systemctl enable kube-controller-manager.service
# systemctl start kube-controller-manager.service
# systemctl enable kube-scheduler.service
# systemctl start kube-scheduler.service
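A quick sanity check that the master components are healthy (assuming the API server answers on port 8080 as configured above):
# kubectl -s http://k8s-master:8080 get componentstatuses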
Install Docker on each node: see the master's Docker installation steps above.
Install Kubernetes on each node: see the master's Kubernetes installation steps above.
The following components need to run on each Kubernetes node: kubelet and kube-proxy.
Correspondingly, you need to change the following configuration information:
/etc/kubernetes/config
KUBE_MASTER="--master=http://k8s-master:8080"
/etc/kubernetes/kubelet
KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_HOSTNAME="--hostname-override=k8s-node-1" (on the second node, use k8s-node-2)
KUBELET_API_SERVER="--api-servers=http://k8s-master:8080"
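After modifying the configuration, enable and start the node services; these are the same services restarted later in the Flannel section:
# systemctl enable kubelet.service
# systemctl start kubelet.service
# systemctl enable kube-proxy.service
# systemctl start kube-proxy.service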
Check the nodes and their status in the cluster from the master:
# kubectl -s http://k8s-master:8080 get node
NAME STATUS AGE
k8s-node-1 Ready 3m
k8s-node-2 Ready 16s
# kubectl get nodes
NAME STATUS AGE
k8s-node-1 Ready 3m
k8s-node-2 Ready 43s
At this point, a Kubernetes cluster has been built.
Execute the following command on the master and both nodes to install Flannel:
# yum install -y flannel
Edit /etc/sysconfig/flanneld on the master and both nodes, modifying the following configuration:
# etcd url location. Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS="http://etcd:2379"
Flannel stores its configuration in etcd to keep the configuration consistent across multiple Flannel instances, so the following key must be created in etcd:
# etcdctl mk /atomic.io/network/config '{ "Network": "10.0.0.0/16" }'
{"Network":"10.0.0.0/16"}
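Reading the key back confirms it was stored:
# etcdctl get /atomic.io/network/config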
After starting Flannel, you need to restart docker and the kubernetes services in turn. Execute on the master:
systemctl enable flanneld.service
systemctl start flanneld.service
service docker restart
systemctl restart kube-apiserver.service
systemctl restart kube-controller-manager.service
systemctl restart kube-scheduler.service
Execute on node:
systemctl enable flanneld.service
systemctl start flanneld.service
service docker restart
systemctl restart kubelet.service
systemctl restart kube-proxy.service
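After the restarts, each machine should have a flannel interface (flannel0 with the default UDP backend) holding a subnet from 10.0.0.0/16, and docker0 should sit inside that subnet. A quick check:
# cat /run/flannel/subnet.env
# ip addr show flannel0
# ip addr show docker0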
Flannel is considered the simplest network option in k8s; an article explaining how the Flannel network works can help you understand it.
# docker pull nginx                     # Pull an nginx image from the public registry
# docker tag nginx registry:5000/nginx  # Tag the image for the local registry
# docker push registry:5000/nginx       # Push it to the local registry
# docker rmi registry:5000/nginx        # Remove the local tag
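The pushed image can be verified through the registry API; after the push above it should list the latest tag:
# curl http://registry:5000/v2/nginx/tags/list
{"name":"nginx","tags":["latest"]}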
cat << EOF > nginx.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: registry:5000/nginx
        ports:
        - containerPort: 80
        resources:
          requests:
            cpu: 400m
EOF
# kubectl create -f nginx.yaml # Create the nginx deployment
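Before creating the service, check that the two replicas are running and that each pod received an IP from the Flannel range:
# kubectl get pods -o wide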
cat << EOF > nginx-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
  labels:
    app: nginx-svc
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30088
EOF
# kubectl create -f nginx-svc.yaml # Create the nginx-svc service
# kubectl describe service nginx-svc
Name:              nginx-svc
Namespace:         default
Labels:            app=nginx-svc
Selector:          app=nginx
Type:              NodePort
IP:                10.254.53.185
Port:              <unset> 80/TCP
NodePort:          <unset> 30088/TCP
Endpoints:         10.0.19.2:80,10.0.4.2:80
Session Affinity:  None
No events.
# curl http://k8s-node-1:30088/ #Test nginx service through nodePort
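The NodePort is exposed on every node, which a short loop can confirm (both requests should return HTTP 200):
# for n in k8s-node-1 k8s-node-2; do curl -s -o /dev/null -w "$n: %{http_code}\n" http://$n:30088/; done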
Two problems were encountered during the test:
1. The spec.selector.app in nginx-svc.yaml was inconsistent with the spec.template.metadata.labels.app in nginx.yaml, which made the service unreachable through the NodePort.
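A quick way to spot this kind of selector/label mismatch: a service whose selector matches no pods has no endpoints, so an empty ENDPOINTS column here points straight at the problem:
# kubectl get endpoints nginx-svc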