Binary installation of a k8s cluster (1): opening

This article opens a series on manually installing a k8s cluster from binaries. The goal is to better understand and learn k8s, so each component is installed by hand, step by step, for practice and understanding. For production, if your hosts are in your own data center, have professional infrastructure architects and a devops team build a production-grade cluster. If you are building a production environment on a public cloud (AWS/GCP/Azure/Alibaba Cloud/Tencent Cloud), read the relevant documentation carefully and open a support ticket promptly for anything unclear.

There are many ways to install a k8s cluster: you can create one with the kubeadm tool, or install it component by component. In a public cloud environment you can also create one from the console UI or the command line. As mentioned above, the goal here is learning, so we install each component manually from binaries. However a k8s cluster is created, the following items need to be considered.

  1. Container runtime: today the container is basically docker, but docker is not the only option. There are many other implementations, such as podman (reportedly pre-installed on centos-8/redhat-8) and Pivotal's commercial containers (used in its Pivotal Cloud Foundry PaaS). The general principles of container implementation are not elaborated here; interested readers can study linux namespaces, cgroups, and union filesystems to get a feel for how containers work.

  2. K8s base components: the storage component etcd; the master components api-server, controller-manager, and kube-scheduler; the worker components kubelet and kube-proxy; and the client tool kubectl.

  3. Network communication between containers: container networking basically falls into two modes, underlay and overlay. In underlay mode there is no extra encapsulation in the communication path; the container's host usually acts as a router and forwards the packets. Common implementations are flannel's host-gw mode and calico's bgp mode.

Overlay mode adds extra encapsulation, for example flannel's vxlan mode (building a layer-2 network on top of a layer-3 network, i.e. wrapping Ethernet frames inside udp packets) and calico's ipip mode (wrapping ip packets inside ip packets again). There is also flannel's udp mode, which wraps the ip packet in a udp packet; since it uses a tun device, every packet crosses between user mode and kernel mode, so it is inefficient and rarely used in practice, though it is fine as a starting point for study.

Besides the flannel and calico schemes, there are also weave, ovs, and others. K8s deliberately does not define the network implementation itself: network requirements vary widely, different scenarios have different complexity, and it would be hard to fix a single answer for all of them. Leaving the network layer open to different communities seems like the better choice. First, having multiple k8s network solutions rather than a monopoly is good for the technology and the community; second, it lets users pick the option that fits their actual situation.

Readers interested in this area can dig into linux bridges, veth pairs, route tables, arp tables, iptables, nat, fdb, tun/tap devices, ipip tunnels, vxlan, the bgp protocol, l2-miss, l3-miss, and other basics; the k8s network solutions are built on top of them. I won't go into the details here; expanded, this could be a series of its own.
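To make the overlay overhead concrete, here is a small sketch of the per-packet cost of vxlan encapsulation mentioned above, assuming an IPv4 outer header. The numbers are standard header sizes, not anything specific to this cluster:

```shell
# VXLAN encapsulation overhead per packet (bytes), assuming an IPv4 outer header:
# outer Ethernet (14) + outer IPv4 (20) + outer UDP (8) + VXLAN header (8)
overhead=$((14 + 20 + 8 + 8))
echo "VXLAN overhead: ${overhead} bytes"

# With a physical MTU of 1500, the overlay interface MTU must be reduced accordingly,
# which is why flannel's vxlan interface typically shows an MTU of 1450.
echo "overlay MTU: $((1500 - overhead))"
```

This is also why you will often see `flannel.1` with MTU 1450 on a host whose physical NIC has MTU 1500.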

  4. Image repository: the repo used to store images. Harbor (open-sourced by vmware) and nexus are the common choices; harbor is dedicated to image repos, while nexus supports image repos among other formats. For a self-built private repo you must also plan for volume capacity and housekeeping (you can't just leave it alone). For open-source storage you can mount volumes over nfs in the short term and move to a ceph cluster in the long term; if budget allows, a nas is also an option.

  5. Container dns: communication between containers naturally uses fqdns (containers are created and destroyed dynamically, so IP addresses will certainly change, and IPs are hard to remember and inconvenient anyway). With fqdns comes the need for a dns service. Across k8s versions there have been different dns implementations: the earliest sky-dns, then kube-dns, and most recently coredns.

  6. Visual dashboard: used to present the resources in the k8s cluster as a UI console and to provide some basic operations on them. This is generally implemented with kube-dashboard.

  7. External access to services: a service deployed into the cluster must be callable from outside it (a bit obvious; if it could only be called from inside, what would be the point?). Generally there are the node-port mode and the ingress mode; in public cloud environments, vendors also provide their own load-balancer modes. In node-port mode you define the type and port in the service. For the load-balancer mode, check the docs of the different public cloud vendors. For ingress, nginx-ingress, traefik ingress, and haproxy ingress are the most commonly used.

  8. Image release management: once you have an image it has to be released and deployed to the cluster. The most primitive commands work, but that is laborious. The helm tool makes this much more convenient; helm consists of a client, the tiller server, and a charts repo (which stores k8s application packages).

  9. Persistent storage: applications in k8s are generally stateless, but storing data is unavoidable. k8s provides a persistent volume mechanism; underneath it we can use nfs, ceph, nas, and other storage.

  10. Monitoring and alerting: any application needs monitoring and alerting, and k8s has plenty of options, for example prometheus + grafana, or telegraf + influxdb + grafana. This is also a very big topic that I won't expand here; it too could fill a series.

  11. Log collection: broadly there are two collection patterns, sidecar and daemonset. The sidecar pattern deploys a log-collection agent in every pod, while the daemonset pattern deploys the agent as a daemonset in the cluster, so no per-pod agent container is needed. There are many log-collection stacks: filebeat/fluentd/logstash as the agent (logstash is heavier and generally not used), es as the storage, and kibana as the viewer (the ELK/EFK stacks). If budget allows you can also choose splunk, which of course provides more than just log collection and search.
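Several of the items above boil down to small YAML manifests. As one example for the node-port access mode, here is a minimal Service sketch; the names, labels, and ports are placeholders, not values from this series:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: demo-svc        # placeholder name
spec:
  type: NodePort
  selector:
    app: demo           # placeholder pod label
  ports:
    - port: 80          # port exposed inside the cluster
      targetPort: 8080  # container port the traffic is forwarded to
      nodePort: 30080   # port opened on every node (default range 30000-32767)
```

With this applied, the service is reachable from outside the cluster at `<any-node-ip>:30080`.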

The components we used for this installation are as follows:

d): helmpush (version 0.7.1, used to push the k8s application package to the private charts repo chartmuseum)

Environment introduction

  1. OS: Oracle VirtualBox, centos-7

  2. Master: 1 vm, 172.20.11.41
    (Of course, there is no master HA here; interested readers can take a look at nginx/haproxy + keepalived.)

  3. Worker: 2 VMs
    172.20.11.42 / 172.20.11.43

  4. Master installation:

  5. Worker installation:

  6. Basic service installation in the cluster:
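The note in the environment section mentions nginx/haproxy + keepalived for master HA. As an illustration only (this series uses a single master), a minimal haproxy sketch that load-balances the apiserver, assuming the standard apiserver port 6443 and the master IP from the environment above; a second master would simply be another `server` line:

```
defaults
    mode tcp                 # the apiserver speaks TLS, so balance at TCP level
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend kube-apiserver
    bind *:8443              # VIP port clients would point their kubeconfig at
    default_backend apiservers

backend apiservers
    server master1 172.20.11.41:6443 check
```

keepalived would then float a virtual IP between the haproxy nodes themselves.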

I will stop here for now; in the next article we will start with generating the ssl certificates.
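As a small preview of that certificate work, here is a minimal self-signed CA sketch using plain openssl (the series may well use cfssl instead; the file names and CN here are illustrative only):

```shell
# Generate a CA private key (illustrative file names).
openssl genrsa -out ca-key.pem 2048

# Issue a self-signed CA certificate from it, valid for 10 years.
openssl req -x509 -new -nodes -key ca-key.pem \
    -subj "/CN=kubernetes-ca" -days 3650 -out ca.pem

# Inspect the resulting certificate subject.
openssl x509 -in ca.pem -noout -subject
```

Component certificates (apiserver, kubelet, etcd, ...) would then be signed by this CA.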
