Tungsten Fabric Knowledge Base: Supplement on OpenStack, K8s, CentOS installation issues

Author: Tatsuya Naganawa Translator: TF compilation group

Multi-kube-master deployment

3 x Tungsten Fabric controller nodes: m3.xlarge (4 vcpu) -> c3.4xlarge (16 vcpu) (schema-transformer needs CPU resources for ACL calculation, so the instance size was increased)
100 x kube-masters, 800 x workers: m3.medium

The tf-controller installation and first-containers.yaml are the same as in the link below.

The AMI is also the same (ami-3185744e), but the kernel is updated via yum -y update kernel (the updated instance is converted to an image, which is then used to launch the other instances).
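As a reference, baking the kernel-updated instance into an AMI can be done with the AWS CLI; this sketch is my addition, and the instance ID and image name are placeholders:

# Hypothetical sketch: create an AMI from the kernel-updated instance
aws ec2 create-image --instance-id i-0123456789abcdef0 --name centos7-kernel-updated --reboot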

/tmp/aaa.pem is the key pair specified for the EC2 instances.

The cni.yaml file is attached.

(Type the following commands on one of the Tungsten Fabric controller nodes)
yum -y install epel-release
yum -y install parallel

aws ec2 describe-instances --query 'Reservations[*].Instances[*].PrivateIpAddress' --output text | tr '\t' '\n' > /tmp/all.txt
head -n 100 /tmp/all.txt > masters.txt
tail -n 800 /tmp/all.txt > workers.txt

ulimit -n 4096
cat /tmp/all.txt | parallel -j1000 ssh -i /tmp/aaa.pem -o StrictHostKeyChecking=no centos@{} id
cat /tmp/all.txt | parallel -j1000 ssh -i /tmp/aaa.pem centos@{} sudo sysctl -w net.bridge.bridge-nf-call-iptables=1
 -
cat -n masters.txt | parallel -j1000 -a - --colsep '\t' ssh -i /tmp/aaa.pem centos@{2} sudo kubeadm init --token aaaaaa.aaaabbbbccccdddd --ignore-preflight-errors=NumCPU --pod-network-cidr=10.32.{1}.0/24 --service-cidr=10.96.{1}.0/24 --service-dns-domain=cluster{1}.local
-
vi assign-kube-master.py
computenodes = 8
with open('masters.txt') as aaa:
    with open('workers.txt') as bbb:
        for masternode in aaa.read().rstrip().split('\n'):
            for i in range(computenodes):
                tmp = bbb.readline().rstrip()
                print("{}\t{}".format(masternode, tmp))
 -
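The script pairs each kube-master with eight workers. Presumably its output is saved as the join.txt consumed by the kubeadm join step below (the exact redirection was elided from the text):

# Assumption: join.txt feeds the kubeadm join step below
python assign-kube-master.py > join.txt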
cat masters.txt | parallel -j1000 ssh -i /tmp/aaa.pem centos@{} sudo cp /etc/kubernetes/admin.conf /tmp/admin.conf
cat masters.txt | parallel -j1000 ssh -i /tmp/aaa.pem centos@{} sudo chmod 644 /tmp/admin.conf
cat -n masters.txt | parallel -j1000 -a - --colsep '\t' scp -i /tmp/aaa.pem centos@{2}:/tmp/admin.conf kubeconfig-{1}
 -
cat -n masters.txt | parallel -j1000 -a - --colsep '\t' kubectl --kubeconfig=kubeconfig-{1} get node
-
cat -n join.txt | parallel -j1000 -a - --colsep '\t' ssh -i /tmp/aaa.pem centos@{3} sudo kubeadm join {2}:6443 --token aaaaaa.aaaabbbbccccdddd --discovery-token-unsafe-skip-ca-verification
 - (modify controller-ip in cni-tungsten-fabric.yaml)
cat -n masters.txt | parallel -j1000 -a - --colsep '\t' cp cni-tungsten-fabric.yaml cni-{1}.yaml
cat -n masters.txt | parallel -j1000 -a - --colsep '\t' sed -i -e "s/k8s2/k8s{1}/" -e "s/10.32.2/10.32.{1}/" -e "s/10.64.2/10.64.{1}/" -e "s/10.96.2/10.96.{1}/" -e "s/172.31.x.x/{2}/" cni-{1}.yaml
-
cat -n masters.txt | parallel -j1000 -a - --colsep '\t' kubectl --kubeconfig=kubeconfig-{1} apply -f cni-{1}.yaml
-
sed -i 's!kubectl!kubectl --kubeconfig=/etc/kubernetes/admin.conf!' set-label.sh
cat masters.txt | parallel -j1000 scp -i /tmp/aaa.pem set-label.sh centos@{}:/tmp
cat masters.txt | parallel -j1000 ssh -i /tmp/aaa.pem centos@{} sudo bash /tmp/set-label.sh
-
cat -n masters.txt | parallel -j1000 -a - --colsep '\t' kubectl --kubeconfig=kubeconfig-{1} create -f first-containers.yaml
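As a quick sanity check (my addition, not part of the original procedure), the generated kubeconfigs can be looped over to count Ready nodes in each of the 100 clusters:

# Count Ready nodes behind each kube-master (NotReady is not matched by -w)
for i in $(seq 1 100); do
  echo -n "cluster${i}: "
  kubectl --kubeconfig=kubeconfig-${i} get node --no-headers | grep -cw Ready
done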

Nested installation of Kubernetes on OpenStack

You can try a nested installation of Kubernetes on an all-in-one OpenStack node.

After the node is installed by ansible-deployer, you also need to manually create a link-local service for the vRouter TCP/9091 connection.

This configuration creates a DNAT/SNAT rule, for example from src 10.0.1.3:xxxx, dst-ip 10.1.1.11:9091 to src <compute's vhost0 ip>:xxxx, dst-ip 127.0.0.1:9091, so the CNI inside the OpenStack VM can communicate directly with the vrouter-agent on the compute node and fetch port/IP information for its containers.
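A minimal sketch of creating that link-local service, assuming the provision_linklocal.py utility from the contrail config utils is available (the controller IP is a placeholder; names and IPs follow the example above):

# Assumption: provision_linklocal.py is reachable in the config-api container;
# service IP 10.1.1.11:9091 is forwarded to 127.0.0.1:9091 on the compute node
python provision_linklocal.py \
  --api_server_ip <tf-controller-ip> \
  --api_server_port 8082 \
  --linklocal_service_name k8s-cni-to-agent \
  --linklocal_service_ip 10.1.1.11 \
  --linklocal_service_port 9091 \
  --ipfabric_service_ip 127.0.0.1 \
  --ipfabric_service_port 9091 \
  --oper add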

On this node, two CentOS7 (or Ubuntu bionic) VMs are created, and the kubernetes cluster is installed on them using the same procedure (see the link below).

Of course, the yaml file needs to be the nested-installation variant:

./resolve-manifest.sh contrail-nested-kubernetes.yaml > cni-tungsten-fabric.yaml

KUBEMANAGER_NESTED_MODE: "{{ KUBEMANAGER_NESTED_MODE }}"  ## this needs to be "1"
KUBERNESTES_NESTED_VROUTER_VIP: {{ KUBERNESTES_NESTED_VROUTER_VIP }}  ## this needs to be the same IP as the one defined in the link-local service (such as 10.1.1.11)
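For example (a hypothetical sketch; resolve-manifest.sh normally substitutes these values), the two parameters could be filled in by hand like this:

# Hypothetical sketch: set the two nested-mode parameters manually
sed -i -e 's/KUBEMANAGER_NESTED_MODE:.*/KUBEMANAGER_NESTED_MODE: "1"/' \
       -e 's/KUBERNESTES_NESTED_VROUTER_VIP:.*/KUBERNESTES_NESTED_VROUTER_VIP: 10.1.1.11/' \
       cni-tungsten-fabric.yaml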

If coredns receives an IP address, the nested installation is working.
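One way to check this (an illustrative command; kubeadm deployments label the coredns pods with k8s-app=kube-dns):

# Check that the coredns pods got pod IPs from the vRouter
kubectl get pod -n kube-system -l k8s-app=kube-dns -o wide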

vRouter ml2 plugin

I tried the ML2 feature of the vRouter Neutron plugin.

Three CentOS7.5 instances (4 cpu, 16 GB memory, 30 GB disk, ami: ami-3185744e) are used on AWS.

Attached are steps based on this document.

openstack-controller: 172.31.15.248
tungsten-fabric-controller (vRouter): 172.31.10.212
nova-compute (ovs): 172.31.0.231

(Type the commands on the tungsten-fabric-controller node, as the centos user (not root))

sudo yum -y remove PyYAML python-requests
sudo yum -y install git patch
sudo easy_install pip
sudo pip install PyYAML requests ansible==2.8.8
ssh-keygen
 - add id_rsa.pub to authorized_keys on all three nodes (centos user, not root)
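For example (a sketch using the node IPs above; ssh-copy-id assumes password authentication is possible, otherwise append the key to ~/.ssh/authorized_keys manually):

# Distribute the public key to all three nodes
for ip in 172.31.15.248 172.31.10.212 172.31.0.231; do
  ssh-copy-id -i ~/.ssh/id_rsa.pub centos@${ip}
done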

git clone https://opendev.org/x/networking-opencontrail.git
cd networking-opencontrail
patch -p1 < ml2-vrouter.diff 

cd playbooks
cp -i hosts.example hosts
cp -i group_vars/all.yml.example group_vars/all.yml
 (ssh to all the nodes once, to update known_hosts)

ansible-playbook main.yml -i hosts

 - devstack logs are located at /opt/stack/logs/stack.sh.log
 - The openstack process logs are written to /var/log/messages
 - 'systemctl list-unit-files | grep devstack' shows the systemctl entries for the openstack processes (openstack controller node)
 If devstack fails due to a mariadb login error, type these commands to fix it (for the last two lines, the ip and fqdn of the openstack controller need to be modified).
 The commands are typed as the "centos" user (not the root user).
 mysqladmin -u root password admin
 mysql -uroot -padmin -h127.0.0.1 -e 'GRANT ALL PRIVILEGES ON *.* TO '\''root'\''@'\''%'\'' identified by '\''admin'\'';'
 mysql -uroot -padmin -h127.0.0.1 -e 'GRANT ALL PRIVILEGES ON *.* TO '\''root'\''@'\''172.31.15.248'\'' identified by '\''admin'\'';'
 mysql -uroot -padmin -h127.0.0.1 -e 'GRANT ALL PRIVILEGES ON *.* TO '\''root'\''@'\''ip-172-31-15-248.ap-northeast-1.compute.internal'\'' identified by '\''admin'\'';'
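To confirm the grants are in place (an illustrative check, not part of the original steps):

# Verify the root grants exist for %, the IP, and the FQDN
mysql -uroot -padmin -h127.0.0.1 -e 'SELECT user, host FROM mysql.user;'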

The hosts file, group_vars/all.yml, and the patch are attached below. (Some changes are just bug fixes, but some change the default behavior.)

[centos@ip-172-31-10-212 playbooks]$ cat hosts
controller ansible_host=172.31.15.248 ansible_user=centos

# This host should be one of the compute host group.
# The playbook for separately deploying Tungsten Fabric compute nodes is not ready yet.
contrail_controller ansible_host=172.31.10.212 ansible_user=centos local_ip=172.31.10.212

[contrail]
contrail_controller

[openvswitch]
other_compute ansible_host=172.31.0.231 local_ip=172.31.0.231 ansible_user=centos

[compute:children]
contrail
openvswitch
[centos@ip-172-31-10-212 playbooks]$ cat group_vars/all.yml
---
# IP address for OpenContrail (e.g. 192.168.0.2)
contrail_ip: 172.31.10.212

# Gateway address for OpenContrail (e.g. 192.168.0.1)
contrail_gateway:

# Interface name for OpenContrail (e.g. eth0)
contrail_interface:

# IP address for the OpenStack VM (e.g. 192.168.0.3)
openstack_ip: 172.31.15.248

# OpenStack branch used on the VM.
openstack_branch: stable/queens

# Optionally, a different plugin version can be used (default: same as the OpenStack branch)
networking_plugin_version: master

# Tungsten Fabric docker image tag for contrail-ansible-deployer
contrail_version: master-latest

# If true, the networking_bgpvpn plugin is installed with the Tungsten Fabric driver
install_networking_bgpvpn_plugin: false

# If true, integration with Device Manager will be started, and the vRouter
# encapsulation priority will be set to 'VXLAN,MPLSoUDP,MPLSoGRE'
dm_integration_enabled: false

# Optional path to a file with the DM integration topology. When set and DM
# integration is enabled, the topology.yaml file will be copied to this location
dm_topology_file:

# If true, the password of the instance's current ansible user will be set to the instance_password value
change_password: false
# instance_password: uberpass1

# If set, this data overwrites the docker daemon /etc config file
# docker_config:
[centos@ip-172-31-10-212 playbooks]$

[centos@ip-172-31-10-212 networking-opencontrail]$ cat ml2-vrouter.diff
diff --git a/playbooks/roles/contrail_node/tasks/main.yml b/playbooks/roles/contrail_node/tasks/main.yml
index ee29b05..272ee47 100644
--- a/playbooks/roles/contrail_node/tasks/main.yml
+++ b/playbooks/roles/contrail_node/tasks/main.yml
@@ -7,7 +7,6 @@
       - epel-release
       - gcc
       - git
-      - ansible-2.4.*
       - yum-utils
       - libffi-devel
     state: present
@@ -61,20 +60,20 @@
     chdir: ~/contrail-ansible-deployer/
     executable: /bin/bash
 
-- name: Generate ssh key for provisioning other nodes
-  openssh_keypair:
-    path: ~/.ssh/id_rsa
-    state: present
-  register: contrail_deployer_ssh_key
-
-- name: Propagate generated key
-  authorized_key:
-    user: "{{ ansible_user }}"
-    state: present
-    key: "{{ contrail_deployer_ssh_key.public_key }}"
-  delegate_to: "{{ item }}"
-  with_items: "{{ groups.contrail }}"
-  when: contrail_deployer_ssh_key.public_key
+#- name: Generate ssh key for provisioning other nodes
+#  openssh_keypair:
+#    path: ~/.ssh/id_rsa
+#    state: present
+#  register: contrail_deployer_ssh_key
+#
+#- name: Propagate generated key
+#  authorized_key:
+#    user: "{{ ansible_user }}"
+#    state: present
+#    key: "{{ contrail_deployer_ssh_key.public_key }}"
+#  delegate_to: "{{ item }}"
+#  with_items: "{{ groups.contrail }}"
+#  when: contrail_deployer_ssh_key.public_key

 - name: Provision Node before deploy contrail
   shell: |
@@ -105,4 +104,4 @@
     sleep: 5
     host: "{{ contrail_ip }}"
     port: 8082
-    timeout: 300
\ No newline at end of file
+    timeout: 300
diff --git a/playbooks/roles/contrail_node/templates/instances.yaml.j2 b/playbooks/roles/contrail_node/templates/instances.yaml.j2
index e3617fd..81ea101 100644
--- a/playbooks/roles/contrail_node/templates/instances.yaml.j2
+++ b/playbooks/roles/contrail_node/templates/instances.yaml.j2
@@ -14,6 +14,7 @@ instances:
       config_database:
       config:
       control:
+      analytics:
       webui:
 {% if "contrail_controller" in groups["contrail"] %}
       vrouter:
diff --git a/playbooks/roles/docker/tasks/main.yml b/playbooks/roles/docker/tasks/main.yml
index 8d7971b..5ed9352 100644
--- a/playbooks/roles/docker/tasks/main.yml
+++ b/playbooks/roles/docker/tasks/main.yml
@@ -6,7 +6,6 @@
       - epel-release
       - gcc
       - git
-      - ansible-2.4.*
       - yum-utils
       - libffi-devel
     state: present
@@ -62,4 +61,4 @@
       - docker-py==1.10.6
       - docker-compose==1.9.0
     state: present
-    extra_args: --user
\ No newline at end of file
+    extra_args: --user
diff --git a/playbooks/roles/node/tasks/main.yml b/playbooks/roles/node/tasks/main.yml
index 0fb1751..d9ab111 100644
--- a/playbooks/roles/node/tasks/main.yml
+++ b/playbooks/roles/node/tasks/main.yml
@@ -1,13 +1,21 @@
 ---
-- name: Update kernel
+- name: Install required utilities
   become: yes
   yum:
-    name: kernel
-    state: latest
-  register: update_kernel
+    name:
+      - python3-devel
+      - libibverbs  ## needed by openstack controller node
+    state: present
 
-- name: Reboot the machine
-  become: yes
-  reboot:
-  when: update_kernel.changed
-  register: reboot_machine
+#- name: Update kernel
+#  become: yes
+#  yum:
+#    name: kernel
+#    state: latest
+#  register: update_kernel
+#
+#- name: Reboot the machine
+#  become: yes
+#  reboot:
+#  when: update_kernel.changed
+#  register: reboot_machine
diff --git a/playbooks/roles/restack_node/tasks/main.yml b/playbooks/roles/restack_node/tasks/main.yml
index a11e06e..f66d2ee 100644
--- a/playbooks/roles/restack_node/tasks/main.yml
+++ b/playbooks/roles/restack_node/tasks/main.yml
@@ -9,7 +9,7 @@
   become: yes
   pip:
     name:
-      - setuptools
+      - setuptools==43.0.0
       - requests
     state: forcereinstall

[centos@ip-172-31-10-212 networking-opencontrail]$

It takes about 50 minutes to complete the installation.

Although /home/centos/devstack/openrc can be used to log in as the "demo" user, checking the provider network type (empty for vRouter, "vxlan" for ovs) requires admin access, so an adminrc needs to be created manually.

[centos@ip-172-31-15-248 ~]$ cat adminrc
export OS_PROJECT_DOMAIN_ID=default
export OS_REGION_NAME=RegionOne
export OS_USER_DOMAIN_ID=default
export OS_PROJECT_NAME=admin
export OS_IDENTITY_API_VERSION=3
export OS_PASSWORD=admin
export OS_AUTH_TYPE=password
export OS_AUTH_URL=http://172.31.15.248/identity  ## this needs to be modified
export OS_USERNAME=admin
export OS_TENANT_NAME=admin
export OS_VOLUME_API_VERSION=2
[centos@ip-172-31-15-248 ~]$

openstack network create testvn
openstack subnet create --subnet-range 192.168.100.0/24 --network testvn subnet1
openstack network create --provider-network-type vxlan testvn-ovs
openstack subnet create --subnet-range 192.168.110.0/24 --network testvn-ovs subnet1-ovs

 - Two virtual networks are created
[centos@ip-172-31-15-248 ~]$ openstack network list
+--------------------------------------+------------+--------------------------------------+
| ID                                   | Name       | Subnets                              |
+--------------------------------------+------------+--------------------------------------+
| d4e08516-71fc-401b-94fb-f52271c28dc9 | testvn-ovs | 991417ab-7da5-44ed-b686-8a14abbe46bb |
| e872b73e-100e-4ab0-9c53-770e129227e8 | testvn     | 27d828eb-ada4-4113-a6f8-df7dde2c43a4 |
+--------------------------------------+------------+--------------------------------------+
[centos@ip-172-31-15-248 ~]$

 - testvn's provider:network_type is empty

[centos@ip-172-31-15-248 ~]$ openstack network show testvn
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | UP                                   |
| availability_zone_hints   |                                      |
| availability_zones        | nova                                 |
| created_at                | 2020-01-18T16:14:42Z                 |
| description               |                                      |
| dns_domain                | None                                 |
| id                        | e872b73e-100e-4ab0-9c53-770e129227e8 |
| ipv4_address_scope        | None                                 |
| ipv6_address_scope        | None                                 |
| is_default                | None                                 |
| is_vlan_transparent       | None                                 |
| mtu                       | 1500                                 |
| name                      | testvn                               |
| port_security_enabled     | True                                 |
| project_id                | 84a573dbfadb4a198ec988e36c4f66f6     |
| provider:network_type     | local                                |
| provider:physical_network | None                                 |
| provider:segmentation_id  | None                                 |
| qos_policy_id             | None                                 |
| revision_number           | 3                                    |
| router:external           | Internal                             |
| segments                  | None                                 |
| shared                    | False                                |
| status                    | ACTIVE                               |
| subnets                   | 27d828eb-ada4-4113-a6f8-df7dde2c43a4 |
| tags                      |                                      |
| updated_at                | 2020-01-18T16:14:44Z                 |
+---------------------------+--------------------------------------+
[centos@ip-172-31-15-248 ~]$

 - It is created in Tungsten Fabric's database
(venv) [root@ip-172-31-10-212 ~]# contrail-api-cli --host 172.31.10.212 ls -l virtual-network
virtual-network/e872b73e-100e-4ab0-9c53-770e129227e8  default-domain:admin:testvn
virtual-network/5a88a460-b049-4114-a3ef-d7939853cb13  default-domain:default-project:dci-network
virtual-network/f61d52b0-6577-42e0-a61f-7f1834a2f45e  default-domain:default-project:__link_local__
virtual-network/46b5d74a-24d3-47dd-bc82-c18f6bc706d7  default-domain:default-project:default-virtual-network
virtual-network/52925e2d-8c5d-4573-9317-2c346fb9edf0  default-domain:default-project:ip-fabric
virtual-network/2b0469cf-921f-4369-93a7-2d73350c82e7  default-domain:default-project:_internal_vn_ipv6_link_local
(venv) [root@ip-172-31-10-212 ~]#

 - On the other hand, testvn-ovs's provider:network_type is vxlan, and the segmentation ID and MTU are specified automatically

[centos@ip-172-31-15-248 ~]$ openstack network show testvn-ovs
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | UP                                   |
| availability_zone_hints   |                                      |
| availability_zones        | nova                                 |
| created_at                | 2020-01-18T16:14:47Z                 |
| description               |                                      |
| dns_domain                | None                                 |
| id                        | d4e08516-71fc-401b-94fb-f52271c28dc9 |
| ipv4_address_scope        | None                                 |
| ipv6_address_scope        | None                                 |
| is_default                | None                                 |
| is_vlan_transparent       | None                                 |
| mtu                       | 1450                                 |
| name                      | testvn-ovs                           |
| port_security_enabled     | True                                 |
| project_id                | 84a573dbfadb4a198ec988e36c4f66f6     |
| provider:network_type     | vxlan                                |
| provider:physical_network | None                                 |
| provider:segmentation_id  | 50                                   |
| qos_policy_id             | None                                 |
| revision_number           | 3                                    |
| router:external           | Internal                             |
| segments                  | None                                 |
| shared                    | False                                |
| status                    | ACTIVE                               |
| subnets                   | 991417ab-7da5-44ed-b686-8a14abbe46bb |
| tags                      |                                      |
| updated_at                | 2020-01-18T16:14:49Z                 |
+---------------------------+--------------------------------------+
[centos@ip-172-31-15-248 ~]$
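To compare just the network types of the two networks (an illustrative command, not from the original):

# Print only provider:network_type for each network
openstack network show testvn -c 'provider:network_type' -f value
openstack network show testvn-ovs -c 'provider:network_type' -f value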

CentOS 8 installation process

centos8.2
ansible-deployer is used

only python3 is used (no python2) - requires ansible 2.8.x

1 x tf-controller and kube-master, 1 x vRouter

(all nodes)
yum install python3 chrony
alternatives --set python /usr/bin/python3

(vRouter nodes)
yum install network-scripts
 - This is necessary because vRouter does not currently support NetworkManager (see the sketch after these commands)

(ansible node)
sudo yum -y install git
sudo pip3 install PyYAML requests ansible\
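The sketch referenced above: switching a vRouter node from NetworkManager to the legacy network service. This is my assumption of how the note would be applied; network.service is provided by the network-scripts package:

# Assumption: disable NetworkManager and enable the legacy network service
systemctl disable --now NetworkManager
systemctl enable network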
 - Two cirros pods are deployed to check connectivity:
cirros-deployment-86885fbf85-tjkwn   1/1     Running   0          13s   10.47.255.249   ip-172-31-2-120.ap-northeast-1.compute.internal
[root@ip-172-31-7-20 ~]# 
[root@ip-172-31-7-20 ~]# 
[root@ip-172-31-7-20 ~]# kubectl exec -it cirros-deployment-86885fbf85-7z78k sh
/ # ip -o a
1: lo    inet 127.0.0.1/8 scope host lo\       valid_lft forever preferred_lft forever
17: eth0    inet 10.47.255.250/12 scope global eth0\       valid_lft forever preferred_lft forever
/ # ping 10.47.255.249
PING 10.47.255.249 (10.47.255.249): 56 data bytes
64 bytes from 10.47.255.249: seq=0 ttl=63 time=0.657 ms
64 bytes from 10.47.255.249: seq=1 ttl=63 time=0.073 ms
^C
--- 10.47.255.249 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.073/0.365/0.657 ms
/ # 

 - For chrony to work normally after vRouter is installed, it may be necessary to restart chronyd

[root@ip-172-31-4-206 ~]# chronyc -n sources
210 Number of sources = 5
MS Name/IP address         Stratum Poll Reach LastRx Last sample               
===============================================================================
^? 169.254.169.123               3   4     0   906  -8687ns[  -12us] +/-  428us
^? 129.250.35.250                2   7     0  1002   +429us[ +428us] +/-   73ms
^? 167.179.96.146                2   7     0   937   +665us[ +662us] +/- 2859us
^? 194.0.5.123                   2   6     0  1129   +477us[ +473us] +/-   44ms
^? 103.202.216.35                3   6     0   933  +9662ns[+6618ns] +/-  145ms
[root@ip-172-31-4-206 ~]# 
[root@ip-172-31-4-206 ~]# 
[root@ip-172-31-4-206 ~]# service chronyd status
Redirecting to /bin/systemctl status chronyd.service
· chronyd.service - NTP client/server
   Loaded: loaded (/usr/lib/systemd/system/chronyd.service; enabled; vendor preset: enabled)
   Active: active (running) since Sun 2020-06-28 16:00:34 UTC; 33min ago
     Docs: man:chronyd(8)
           man:chrony.conf(5)
 Main PID: 727 (chronyd)
    Tasks: 1 (limit: 49683)
   Memory: 2.1M
   CGroup: /system.slice/chronyd.service
           └─727 /usr/sbin/chronyd

Jun 28 16:00:33 localhost.localdomain chronyd[727]: Using right/UTC timezone to obtain leap second data
Jun 28 16:00:34 localhost.localdomain systemd[1]: Started NTP client/server.
Jun 28 16:00:42 localhost.localdomain chronyd[727]: Selected source 169.254.169.123
Jun 28 16:00:42 localhost.localdomain chronyd[727]: System clock TAI offset set to 37 seconds
Jun 28 16:19:33 ip-172-31-4-206.ap-northeast-1.compute.internal chronyd[727]: Source 167.179.96.146 offline
Jun 28 16:19:33 ip-172-31-4-206.ap-northeast-1.compute.internal chronyd[727]: Source 103.202.216.35 offline
Jun 28 16:19:33 ip-172-31-4-206.ap-northeast-1.compute.internal chronyd[727]: Source 129.250.35.250 offline
Jun 28 16:19:33 ip-172-31-4-206.ap-northeast-1.compute.internal chronyd[727]: Source 194.0.5.123 offline
Jun 28 16:19:33 ip-172-31-4-206.ap-northeast-1.compute.internal chronyd[727]: Source 169.254.169.123 offline
Jun 28 16:19:33 ip-172-31-4-206.ap-northeast-1.compute.internal chronyd[727]: Can't synchronise: no selectable sources
[root@ip-172-31-4-206 ~]# service chronyd restart
Redirecting to /bin/systemctl restart chronyd.service
[root@ip-172-31-4-206 ~]# 
[root@ip-172-31-4-206 ~]# 
[root@ip-172-31-4-206 ~]# service chronyd status
Redirecting to /bin/systemctl status chronyd.service
· chronyd.service - NTP client/server
   Loaded: loaded (/usr/lib/systemd/system/chronyd.service; enabled; vendor preset: enabled)
   Active: active (running) since Sun 2020-06-28 16:34:41 UTC; 2s ago
     Docs: man:chronyd(8)
           man:chrony.conf(5)
  Process: 25252 ExecStartPost=/usr/libexec/chrony-helper update-daemon (code=exited, status=0/SUCCESS)
  Process: 25247 ExecStart=/usr/sbin/chronyd $OPTIONS (code=exited, status=0/SUCCESS)
 Main PID: 25250 (chronyd)
    Tasks: 1 (limit: 49683)
   Memory: 1.0M
   CGroup: /system.slice/chronyd.service
           └─25250 /usr/sbin/chronyd

Jun 28 16:34:41 ip-172-31-4-206.ap-northeast-1.compute.internal systemd[1]: Starting NTP client/server...
Jun 28 16:34:41 ip-172-31-4-206.ap-northeast-1.compute.internal chronyd[25250]: chronyd version 3.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND>
Jun 28 16:34:41 ip-172-31-4-206.ap-northeast-1.compute.internal chronyd[25250]: Frequency 35.298 +/- 0.039 ppm read from /var/lib/chrony/drift
Jun 28 16:34:41 ip-172-31-4-206.ap-northeast-1.compute.internal chronyd[25250]: Using right/UTC timezone to obtain leap second data
Jun 28 16:34:41 ip-172-31-4-206.ap-northeast-1.compute.internal systemd[1]: Started NTP client/server.
[root@ip-172-31-4-206 ~]# chronyc -n sources
210 Number of sources = 5
MS Name/IP address         Stratum Poll Reach LastRx Last sample               
===============================================================================
^* 169.254.169.123               3   4    17     4  -2369ns[  -27us] +/-  451us
^- 94.154.96.7                   2   6    17     5    +30ms[  +30ms] +/-  148ms
^- 185.51.192.34                 2   6    17     3  -2951us[-2951us] +/-  150ms
^- 188.125.64.6                  2   6    17     3  +9526us[+9526us] +/-  143ms
^- 216.218.254.202               1   6    17     5    +15ms[  +15ms] +/-   72ms
[root@ip-172-31-4-206 ~]# 

[root@ip-172-31-4-206 ~]# contrail-status
Pod      Service      Original Name           Original Version  State    Id            Status         
   rsyslogd                             nightly-master    running  5fc76e57c156  Up 16 minutes  
vrouter  agent        contrail-vrouter-agent  nightly-master    running  bce023d8e6e0  Up 5 minutes   
vrouter  nodemgr      contrail-nodemgr        nightly-master    running  9439a304cbcf  Up 5 minutes   
vrouter  provisioner  contrail-provisioner    nightly-master    running  1531b1403e49  Up 5 minutes   

WARNING: container with original name '' have Pod or Service empty. Pod:''/ Service:'rsyslogd'. Please pass NODE_TYPE with pod name to container's env

vrouter kernel module is PRESENT
== Contrail vrouter ==
nodemgr: active
agent: active

[root@ip-172-31-4-206 ~]#

Original link:
https://github.com/tnaganawa/tungstenfabric-docs/blob/master/TungstenFabricKnowledgeBase.md
