**1. Why k8s v1.16.0?**
I first tried the latest version, v1.16.2, but the installation never completed: at the kubeadm init step it reported many errors such as "node xxx not found". I reinstalled CentOS 7 several times and still could not fix it; it cost me a day and I almost gave up. The installation tutorials I found online were basically all for v1.16.0, but I didn't want to believe v1.16.2 was the pit, so at first I had no plan to downgrade. With no other option I finally tried v1.16.0, and it installed successfully. I am recording it here so that those who come later don't step into the same pit.
The installation steps in this article follow below.
One important prerequisite: note the IPs your master and node use to communicate with each other. For example, my master's IP is 192.168.99.104 and my node's IP is 192.168.99.105. Make sure the master and node can ping each other on these two IPs; the master's IP (192.168.99.104) will be needed when configuring k8s below.
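A quick connectivity check, using the sample IPs above (substitute your own):
# On the master, confirm the node is reachable
ping -c 3 192.168.99.105
# On the node, confirm the master is reachable
ping -c 3 192.168.99.104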
My environment: two CentOS 7 virtual machines, a master (192.168.99.104) and a node (192.168.99.105), running docker-ce 18.09.9 and Kubernetes v1.16.0.
**2. Install docker-ce 18.09.9 (all machines)**
Every machine that will run k8s needs docker installed. The commands are as follows:
# Tools required to install docker
yum install -y yum-utils device-mapper-persistent-data lvm2
# Configure Alibaba Cloud's docker source
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# Specify to install this version of docker-ce
yum install -y docker-ce-18.09.9-3.el7
# Start docker
systemctl enable docker && systemctl start docker
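A quick sanity check that the expected version is installed and the daemon is running:
# Should report 18.09.9
docker --version
systemctl status docker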
**3. Set k8s environment preparation conditions (all machines)**
The machines that will run k8s need at least 2 CPUs and 2 GB of memory; that is easy to set in the virtual machine configuration. Then execute the following script to do some preparatory work. Every machine that will run k8s needs this step.
# Turn off the firewall
systemctl disable firewalld
systemctl stop firewalld
# Close selinux
# Temporarily disable selinux
setenforce 0
# Permanently disable by modifying /etc/sysconfig/selinux and /etc/selinux/config
sed -i 's/SELINUX=permissive/SELINUX=disabled/' /etc/sysconfig/selinux
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
# Disable swap partition
swapoff -a
# Permanently disable: comment out the swap line in /etc/fstab
sed -i 's/.*swap.*/#&/' /etc/fstab
# Modify kernel parameters
cat <<EOF >/etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
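To confirm the changes took effect, a few optional checks:
# SELinux should report Permissive now (Disabled after a reboot)
getenforce
# The Swap line should show 0 used and 0 total
free -m
# Both bridge settings should print 1
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables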
**4. Install the k8s master (management node)**
If you haven't installed docker yet, go back to step 2 of this article and install docker-ce 18.09.9 (all machines). If you haven't prepared the environment, go back to step 3 and set the k8s environment preparation conditions (all machines). Once both steps check out, continue with the steps below.
Since the official k8s package source is hosted by Google and cannot be reached from within China, the Alibaba Cloud yum source is used here.
# Configure the Alibaba Cloud yum source for k8s
cat <<EOF >/etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
# Install kubeadm, kubectl, kubelet
yum install -y kubectl-1.16.0-0 kubeadm-1.16.0-0 kubelet-1.16.0-0
# Start kubelet service
systemctl enable kubelet && systemctl start kubelet
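A quick check that the right versions were installed:
# All three should report v1.16.0
kubeadm version
kubelet --version
kubectl version --client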
The following command downloads the docker images k8s needs. Because the official registry cannot be reached from within China, it uses the domestic Aliyun mirror (registry.aliyuncs.com/google_containers). **Another very important point: the --apiserver-advertise-address here must be the IP on which the master and node can ping each other; mine is 192.168.99.104. I lost a whole evening to this at first, so please change the IP to your own before executing.** While this command runs it will appear stuck at "[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'" for about two minutes; please be patient.
# Download the docker images the management node needs; afterwards you can view them with docker images
# It waits here for about two minutes, apparently stuck at "[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'"
kubeadm init --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.16.0 --apiserver-advertise-address 192.168.99.104 --pod-network-cidr=10.244.0.0/16 --token-ttl 0
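If you would rather not wait inside kubeadm init, the images can be pulled in advance with the command the preflight message mentions, using the same mirror and version:
# Pre-pull the control-plane images from the Aliyun mirror
kubeadm config images pull --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.16.0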
When the installation above completes, you will be prompted to run the following commands; copy, paste, and execute them.
# After the above installation is complete, k8s will prompt you to enter the following command to execute
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
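With the kubeconfig in place, kubectl should now be able to reach the cluster:
# Should print the API server address configured above
kubectl cluster-info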
When kubeadm init succeeds it also prints the command for a node to join the cluster. It will be executed on the node later, so save it; if you lose it, you can regenerate it with the following command.
kubeadm token create --print-join-command
That completes the master node installation. You can check it with kubectl get nodes; the master will show NotReady at this point, which is fine for now.
**5. Install the k8s node (worker node)**
If you haven't installed docker yet, go back to step 2 of this article and install docker-ce 18.09.9 (all machines). If you haven't prepared the environment, go back to step 3 and set the k8s environment preparation conditions (all machines). Once both steps check out, continue with the steps below.
# Configure the Alibaba Cloud yum source for k8s
cat <<EOF >/etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
# Install kubeadm, kubelet
yum install -y kubeadm-1.16.0-0 kubelet-1.16.0-0
# Start kubelet service
systemctl enable kubelet && systemctl start kubelet
The join command is different for everyone. You can log in to the master node and run kubeadm token create --print-join-command to get it, then execute it on the node as follows.
# Join the cluster; if you don't know the join command, log in to the master and get it with kubeadm token create --print-join-command
kubeadm join 192.168.99.104:6443 --token ncfrid.7ap0xiseuf97gikl \
    --discovery-token-ca-cert-hash sha256:47783e9851a1a517647f1986225f104e81dbfd8fb256ae55ef6d68ce9334c6a2
After joining successfully, you can view the joined nodes on the master with the kubectl get nodes command.
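For example, on the master (the new node will also show NotReady until the network plugin is installed in the next step):
# -o wide also shows each node's internal IP
kubectl get nodes -o wide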
After the steps above, the machines are set up, but the nodes are still in NotReady state; flannel still needs to be installed on the master machine.
Normally you would fetch the manifest with wget from https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml, but that address is not reachable from within China, so to keep this part of the article from getting too long I have copied the content into the Appendix (step 8) at the end of the article. In that yml the image address that is unreachable in China (quay.io) has already been changed to one that can be reached (quay-mirror.qiniu.com). Create a new kube-flannel.yml file on the master, paste the content in, and apply it:
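If you do manage to download the upstream file some other way, the same substitution can be made with one line of sed (a sketch, using the mirror host from this article):
# Point the flannel images at the domestic mirror instead of quay.io
sed -i 's#quay.io#quay-mirror.qiniu.com#g' kube-flannel.yml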
kubectl apply -f kube-flannel.yml
At this point the k8s cluster is set up; once the nodes show Ready, you are done.
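You can watch the flannel pods come up; the nodes switch to Ready once they are Running:
# Flannel runs as a DaemonSet in the kube-system namespace
kubectl get pods -n kube-system -o wide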
**8. Appendix: the content of kube-flannel.yml**
This is the content of the kube-flannel.yml file, with every address that is unreachable in China (quay.io) changed to one that can be reached (quay-mirror.qiniu.com). Create a new kube-flannel.yml file and paste this content into it.
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
    - configMap
    - secret
    - emptyDir
    - hostPath
  allowedHostPaths:
    - pathPrefix: "/etc/cni/net.d"
    - pathPrefix: "/etc/kube-flannel"
    - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
    - min: 0
      max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
rules:
  - apiGroups: ['extensions']
    resources: ['podsecuritypolicies']
    verbs: ['use']
    resourceNames: ['psp.flannel.unprivileged']
  - apiGroups:
      - ""
    resources:
      - pods
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes/status
    verbs:
      - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
  - kind: ServiceAccount
    name: flannel
    namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {"name":"cbr0","cniVersion":"0.3.1","plugins":[{"type":"flannel","delegate":{"hairpinMode":true,"isDefaultGateway":true}},{"type":"portmap","capabilities":{"portMappings":true}}]}
  net-conf.json: |
    {"Network":"10.244.0.0/16","Backend":{"Type":"vxlan"}}
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-amd64
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: beta.kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: beta.kubernetes.io/arch
                    operator: In
                    values:
                      - amd64
      hostNetwork: true
      tolerations:
        - operator: Exists
          effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
        - name: install-cni
          image: quay-mirror.qiniu.com/coreos/flannel:v0.11.0-amd64
          command:
            - cp
          args:
            - -f
            - /etc/kube-flannel/cni-conf.json
            - /etc/cni/net.d/10-flannel.conflist
          volumeMounts:
            - name: cni
              mountPath: /etc/cni/net.d
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
      containers:
        - name: kube-flannel
          image: quay-mirror.qiniu.com/coreos/flannel:v0.11.0-amd64
          command:
            - /opt/bin/flanneld
          args:
            - --ip-masq
            - --kube-subnet-mgr
          resources:
            requests:
              cpu: "100m"
              memory: "50Mi"
            limits:
              cpu: "100m"
              memory: "50Mi"
          securityContext:
            privileged: false
            capabilities:
              add: ["NET_ADMIN"]
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          volumeMounts:
            - name: run
              mountPath: /run/flannel
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-arm64
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: beta.kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: beta.kubernetes.io/arch
                    operator: In
                    values:
                      - arm64
      hostNetwork: true
      tolerations:
        - operator: Exists
          effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
        - name: install-cni
          image: quay-mirror.qiniu.com/coreos/flannel:v0.11.0-arm64
          command:
            - cp
          args:
            - -f
            - /etc/kube-flannel/cni-conf.json
            - /etc/cni/net.d/10-flannel.conflist
          volumeMounts:
            - name: cni
              mountPath: /etc/cni/net.d
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
      containers:
        - name: kube-flannel
          image: quay-mirror.qiniu.com/coreos/flannel:v0.11.0-arm64
          command:
            - /opt/bin/flanneld
          args:
            - --ip-masq
            - --kube-subnet-mgr
          resources:
            requests:
              cpu: "100m"
              memory: "50Mi"
            limits:
              cpu: "100m"
              memory: "50Mi"
          securityContext:
            privileged: false
            capabilities:
              add: ["NET_ADMIN"]
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          volumeMounts:
            - name: run
              mountPath: /run/flannel
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-arm
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: beta.kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: beta.kubernetes.io/arch
                    operator: In
                    values:
                      - arm
      hostNetwork: true
      tolerations:
        - operator: Exists
          effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
        - name: install-cni
          image: quay-mirror.qiniu.com/coreos/flannel:v0.11.0-arm
          command:
            - cp
          args:
            - -f
            - /etc/kube-flannel/cni-conf.json
            - /etc/cni/net.d/10-flannel.conflist
          volumeMounts:
            - name: cni
              mountPath: /etc/cni/net.d
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
      containers:
        - name: kube-flannel
          image: quay-mirror.qiniu.com/coreos/flannel:v0.11.0-arm
          command:
            - /opt/bin/flanneld
          args:
            - --ip-masq
            - --kube-subnet-mgr
          resources:
            requests:
              cpu: "100m"
              memory: "50Mi"
            limits:
              cpu: "100m"
              memory: "50Mi"
          securityContext:
            privileged: false
            capabilities:
              add: ["NET_ADMIN"]
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          volumeMounts:
            - name: run
              mountPath: /run/flannel
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-ppc64le
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: beta.kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: beta.kubernetes.io/arch
                    operator: In
                    values:
                      - ppc64le
      hostNetwork: true
      tolerations:
        - operator: Exists
          effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
        - name: install-cni
          image: quay-mirror.qiniu.com/coreos/flannel:v0.11.0-ppc64le
          command:
            - cp
          args:
            - -f
            - /etc/kube-flannel/cni-conf.json
            - /etc/cni/net.d/10-flannel.conflist
          volumeMounts:
            - name: cni
              mountPath: /etc/cni/net.d
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
      containers:
        - name: kube-flannel
          image: quay-mirror.qiniu.com/coreos/flannel:v0.11.0-ppc64le
          command:
            - /opt/bin/flanneld
          args:
            - --ip-masq
            - --kube-subnet-mgr
          resources:
            requests:
              cpu: "100m"
              memory: "50Mi"
            limits:
              cpu: "100m"
              memory: "50Mi"
          securityContext:
            privileged: false
            capabilities:
              add: ["NET_ADMIN"]
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          volumeMounts:
            - name: run
              mountPath: /run/flannel
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-s390x
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: beta.kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: beta.kubernetes.io/arch
                    operator: In
                    values:
                      - s390x
      hostNetwork: true
      tolerations:
        - operator: Exists
          effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
        - name: install-cni
          image: quay-mirror.qiniu.com/coreos/flannel:v0.11.0-s390x
          command:
            - cp
          args:
            - -f
            - /etc/kube-flannel/cni-conf.json
            - /etc/cni/net.d/10-flannel.conflist
          volumeMounts:
            - name: cni
              mountPath: /etc/cni/net.d
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
      containers:
        - name: kube-flannel
          image: quay-mirror.qiniu.com/coreos/flannel:v0.11.0-s390x
          command:
            - /opt/bin/flanneld
          args:
            - --ip-masq
            - --kube-subnet-mgr
          resources:
            requests:
              cpu: "100m"
              memory: "50Mi"
            limits:
              cpu: "100m"
              memory: "50Mi"
          securityContext:
            privileged: false
            capabilities:
              add: ["NET_ADMIN"]
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          volumeMounts:
            - name: run
              mountPath: /run/flannel
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg