OpenStack Grizzly three-node deployment on Ubuntu 13.04: experiment notes
Special note:
This document draws on the official documentation (http://docs.openstack.org/), the github guide (https://github.com/mseknibilel/OpenStack-Grizzly-Install-Guide/blob/OVS_MultiNode/OpenStack_Grizzly_Install_Guide.rst) and longgeek's configuration notes (http://longgeek.com/2013/03/31/openstack-grizzly-multinode-deployment-in-ubuntu-12-04/), and I also consulted many experts in the openstack group. Thanks to every one of them!
Required equipment:
One physical machine with 8 GB of RAM running Windows 2003 SP2, VMware Workstation 9, and an Ubuntu 13.04 (64-bit) ISO image
Network settings:
Control node: eth0 (10.10.10.51), eth1 (172.16.10.200)
Network node: eth0 (10.10.10.52), eth1 (10.20.20.52), eth2 (172.16.10.201)
Compute node: eth0 (10.10.10.55), eth1 (10.20.20.55)
External network: 172.16.10.0/24 (used to reach OpenStack from the outside / the Internet)
Management network: 10.10.10.0/24 (communication between the three nodes, e.g. keystone authentication and the rabbitmq message queue)
Business network: 10.20.20.0/24 (virtual machine data traffic between the network node and the compute nodes, e.g. dhcp, l2, l3)
Topology:
Note: Because this is a virtual machine test, each VM gets 2 GB of memory. My external network uses the bridged segment, while the management and business networks use vmnet2 and vmnet3 respectively. Also, since the compute node has no external address it cannot download packages; you can add a NAT NIC and delete it once the installation is finished. There are other options, such as the one in the official documentation: set the compute node's gateway to the network node's IP and let the network node NAT the compute node's traffic. None of this affects the result of the experiment.
installation steps:
2. Control node
2.1 Prepare ubuntu
Add grizzly source
apt-get install -y ubuntu-cloud-keyring
echo deb http://ubuntu-cloud.archive.canonical.com/ubuntu precise-updates/grizzly main >> /etc/apt/sources.list.d/grizzly.list
Update system
apt-get update -y
apt-get upgrade -y
apt-get dist-upgrade -y
2.2 Network Configuration
Edit /etc/network/interfaces:
auto eth0
iface eth0 inet static
address 10.10.10.51
netmask 255.255.255.0
auto eth1
iface eth1 inet static
address 172.16.10.200
netmask 255.255.255.0
gateway 172.16.10.254
dns-nameservers 172.16.10.5
Restart network service
service networking restart
2.3 Install MySQL
Install MySQL:
apt-get install -y mysql-server python-mysqldb
Configure MySQL to accept all incoming requests
sed -i 's/127.0.0.1/0.0.0.0/g' /etc/mysql/my.cnf
service mysql restart
Create database
mysql -u root -p
CREATE DATABASE keystone;
GRANT ALL ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'keystone';
CREATE DATABASE glance;
GRANT ALL ON glance.* TO 'glance'@'%' IDENTIFIED BY 'glance';
CREATE DATABASE quantum;
GRANT ALL ON quantum.* TO 'quantum'@'%' IDENTIFIED BY 'quantum';
CREATE DATABASE nova;
GRANT ALL ON nova.* TO 'nova'@'%' IDENTIFIED BY 'nova';
CREATE DATABASE cinder;
GRANT ALL ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'cinder';
quit;
2.4 RabbitMQ
Install RabbitMQ:
apt-get install -y rabbitmq-server
Install NTP service:
apt-get install -y ntp
2.5. Others
Install other services:
apt-get install -y vlan bridge-utils
Enable IP_Forwarding:
sed -i 's/#net.ipv4.ip_forward=1/net.ipv4.ip_forward=1/' /etc/sysctl.conf
sysctl net.ipv4.ip_forward=1
2.6 Keystone
Install keystone
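The install command itself is not shown in the original; the Ubuntu package is simply called keystone:
apt-get install -y keystone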
Modify the /etc/keystone/keystone.conf database configuration
connection = mysql://keystone:[email protected]/keystone
Restart the keystone server and synchronize the database
service keystone restart
keystone-manage db_sync
Use a script to populate the database (you can find one on the Internet); adjust the IP addresses and passwords to your own environment. The script creates the tenants, users, services, endpoints and listening ports.
The script content is as follows:
root@control:~# cat keystone_endpoints_basic.sh
#!/bin/sh
HOST_IP=10.10.10.51
EXT_HOST_IP=172.16.10.200
MYSQL_USER=keystone
MYSQL_DATABASE=keystone
MYSQL_HOST=$HOST_IP
MYSQL_PASSWORD=keystone
KEYSTONE_REGION=RegionOne
export SERVICE_TOKEN=ADMIN
export SERVICE_ENDPOINT="http://${HOST_IP}:35357/v2.0"
while getopts "u:D:p:m:K:R:E:T:vh" opt; do
case $opt in
u)
MYSQL_USER=$OPTARG
;;
D)
MYSQL_DATABASE=$OPTARG
;;
p)
MYSQL_PASSWORD=$OPTARG
;;
m)
MYSQL_HOST=$OPTARG
;;
K)
MASTER=$OPTARG
;;
R)
KEYSTONE_REGION=$OPTARG
;;
E)
export SERVICE_ENDPOINT=$OPTARG
;;
T)
export SERVICE_TOKEN=$OPTARG
;;
v)
set -x
;;
h)
cat <<EOF
Usage: $0 [-m mysql_hostname] [-u mysql_username] [-D mysql_database] [-p mysql_password]
[-K keystone_master] [-R keystone_region] [-E keystone_endpoint_url]
[-T keystone_token]
Add -v for verbose mode, -h to display thismessage.
EOF
exit 0
;;
?)
echo "Unknown option -$OPTARG" >&2
exit 1
;;
:)
echo "Option -$OPTARG requires an argument" >&2
exit 1
;;
esac
done
if [ -z "$KEYSTONE_REGION" ]; then
echo "Keystone region not set. Please set with -R option or set KEYSTONE_REGION variable." >&2
missing_args="true"
fi
if [ -z "$SERVICE_TOKEN" ]; then
echo "Keystone service token not set. Please set with -T option or set SERVICE_TOKEN variable." >&2
missing_args="true"
fi
if [ -z "$SERVICE_ENDPOINT" ]; then
echo "Keystone service endpoint not set. Please set with -E option or set SERVICE_ENDPOINT variable." >&2
missing_args="true"
fi
if [ -z "$MYSQL_PASSWORD" ]; then
echo "MySQL password not set. Please set with -p option or set MYSQL_PASSWORD variable." >&2
missing_args="true"
fi
if [ -n "$missing_args" ]; then
exit 1
fi
keystone service-create --name nova --type compute --description 'OpenStack Compute Service'
keystone service-create --name cinder --type volume --description 'OpenStack Volume Service'
keystone service-create --name glance --type image --description 'OpenStack Image Service'
keystone service-create --name keystone --type identity --description 'OpenStack Identity'
keystone service-create --name ec2 --type ec2 --description 'OpenStack EC2 service'
keystone service-create --name quantum --type network --description 'OpenStack Networking service'
create_endpoint () {
case $1 in
compute)
keystone endpoint-create --region $KEYSTONE_REGION --service-id $2 --publicurl 'http://'"$EXT_HOST_IP"':8774/v2/$(tenant_id)s' --adminurl 'http://'"$HOST_IP"':8774/v2/$(tenant_id)s' --internalurl 'http://'"$HOST_IP"':8774/v2/$(tenant_id)s'
;;
volume)
keystone endpoint-create --region $KEYSTONE_REGION --service-id $2 --publicurl 'http://'"$EXT_HOST_IP"':8776/v1/$(tenant_id)s' --adminurl 'http://'"$HOST_IP"':8776/v1/$(tenant_id)s' --internalurl 'http://'"$HOST_IP"':8776/v1/$(tenant_id)s'
;;
image)
keystone endpoint-create --region $KEYSTONE_REGION --service-id $2 --publicurl 'http://'"$EXT_HOST_IP"':9292/' --adminurl 'http://'"$HOST_IP"':9292/' --internalurl 'http://'"$HOST_IP"':9292/'
;;
identity)
keystone endpoint-create --region $KEYSTONE_REGION --service-id $2 --publicurl 'http://'"$EXT_HOST_IP"':5000/v2.0' --adminurl 'http://'"$HOST_IP"':35357/v2.0' --internalurl 'http://'"$HOST_IP"':5000/v2.0'
;;
ec2)
keystone endpoint-create --region $KEYSTONE_REGION --service-id $2 --publicurl 'http://'"$EXT_HOST_IP"':8773/services/Cloud' --adminurl 'http://'"$HOST_IP"':8773/services/Admin' --internalurl 'http://'"$HOST_IP"':8773/services/Cloud'
;;
network)
keystone endpoint-create --region $KEYSTONE_REGION --service-id $2 --publicurl 'http://'"$EXT_HOST_IP"':9696/' --adminurl 'http://'"$HOST_IP"':9696/' --internalurl 'http://'"$HOST_IP"':9696/'
;;
esac
}
for i in compute volume image object-store identity ec2 network; do
id=`mysql -h "$MYSQL_HOST" -u "$MYSQL_USER" -p"$MYSQL_PASSWORD" "$MYSQL_DATABASE" -ss -e "SELECT id FROM service WHERE type='"$i"';"` || exit 1
create_endpoint $i $id
done
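To run it, a minimal sketch (the file name matches the cat above):
chmod +x keystone_endpoints_basic.sh
./keystone_endpoints_basic.sh
Note that this script only creates the services and endpoints; the tenants and users shown in keystone user-list below come from a companion script (keystone_basic.sh in the github guide), which is not reproduced here.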
Set the environment variables; otherwise every keystone command-line query needs a long list of extra parameters.
root@control:~# cat creds
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=123456
export OS_AUTH_URL="http://172.16.10.200:5000/v2.0/"
root@control:~# source creds
Check keystone results
root@control:~# keystone user-list
+----------------------------------+---------+---------+--------------------+
| id | name | enabled | email |
+----------------------------------+---------+---------+--------------------+
| 546b18d85b9a4bf8b548bd08e8ecfe87 | admin | True | [email protected] |
| a0dbcb1c75814ab285ea0ddc4a156dd6 | cinder | True | [email protected] |
| 1a860d4cd8244bb3bc19e9cfe8259e60 | demo | True | [email protected] |
| 08725b7243854901bb0835be1e3a8c5e | glance | True | [email protected] |
| 1dcb939697e04229ae14abe02fce6d6f | nova | True | [email protected] |
| 5e447437acc148d88e386989d62da44d | quantum | True | [email protected] |
+----------------------------------+---------+---------+--------------------+
root@control:~# keystone endpoint-list
+----------------------------------+-----------+--------------------------------------------+------------------------------------------+------------------------------------------+----------------------------------+
| id | region | publicurl | internalurl | adminurl | service_id |
+----------------------------------+-----------+--------------------------------------------+------------------------------------------+------------------------------------------+----------------------------------+
| 12eac4b2ed91404f93f2235cbaa446f3 | RegionOne | http://172.16.10.200:9292/ | http://10.10.10.51:9292/ | http://10.10.10.51:9292/ | 1372321775df4b6c9d894d299412acc5 |
| 37c6d6ce5e954f449cac46194ea077d0 | RegionOne | http://172.16.10.200:8776/v1/
| 3e9f2ce578e248b5945a099f69141312 | RegionOne | http://172.16.10.200:8773/services/Cloud | http://10.10.10.51:8773/services/Cloud | http://10.10.10.51:8773/services/Admin | 46be108df1084f5e9a2702ecfd517aa3 |
| 5240aec5803b4d7094b66dfa4ecd6c55 | RegionOne | http://172.16.10.200:9696/ | http://10.10.10.51:9696/ | http://10.10.10.51:9696/ | 6e4dacf1c7984a7baa216aeec2e5831d |
| 8128afab6f034d03820704fe8d7fc817 | RegionOne | http://172.16.10.200:5000/v2.0 | http://10.10.10.51:5000/v2.0 | http://10.10.10.51:35357/v2.0 | a68ed6a40998477abc0990bd56dcbd86 |
| 960af7efa1b84a778fb6c40a6015a497 | RegionOne | http://172.16.10.200:8774/v2/
+----------------------------------+-----------+--------------------------------------------+------------------------------------------+------------------------------------------+----------------------------------+
2.7 Glance
Install glance
apt-get install -y glance
Update /etc/glance/glance-api-paste.ini
[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
delay_auth_decision = true
auth_host = 10.10.10.51
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = glance
admin_password = 123456
Update /etc/glance/glance-registry-paste.ini
[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
auth_host = 10.10.10.51
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = glance
admin_password = 123456
Update /etc/glance/glance-api.conf
sql_connection = mysql://glance:[email protected]/glance
[paste_deploy]
flavor = keystone
Update /etc/glance/glance-registry.conf
sql_connection = mysql://glance:[email protected]/glance
[paste_deploy]
flavor = keystone
Restart glance-api and glance-registry services
service glance-api restart; service glance-registry restart
Initialize the glance database
glance-manage db_sync
Download an image and upload it to glance
wget https://launchpad.net/cirros/trunk/0.3.0/+download/cirros-0.3.0-x86_64-disk.img
glance image-create --name="CirrOS 0.3.1" --disk-format=qcow2 \
--container-format=bare --is-public=true < cirros-0.3.1-i386-disk.img
root@control:~# file cirros-0.3.1-i386-disk.img
cirros-0.3.1-i386-disk.img: QEMU QCOW Image (v2), 41126400 bytes
View the images
root@control:~# glance image-list
+--------------------------------------+--------------+-------------+------------------+------------+--------+
| ID | Name | Disk Format | Container Format |Size | Status |
+--------------------------------------+--------------+-------------+------------------+------------+--------+
| fe4210d1-783c-4b7b-9cfd-10f02f7d3c20 | cirros 0.3.1 | qcow2 | bare | 12251136 | active |
| 918dd333-2e9d-4ad2-bcce-9c6be9aec81b | debian | vmdk | bare | 464421376 | active |
| 4dd939cc-54ce-4af0-a170-3d6b778e651f | ubuntu-13.04 | qcow2 | bare | 233504768 | active |
| 43c2bb24-2c4f-4b53-a2da-6ac5fa525dbd | win2003sp2 | qcow2 | bare | 1822621696 | active |
+--------------------------------------+--------------+-------------+------------------+------------+--------+
Note: you can download images from the Internet or make them yourself, or import them from other virtualization platforms, for example a vSphere OVF template. The attachments cover making a Windows image file and importing a VMware OVF template.
2.8. Quantum
Install quantum-server
apt-get install -y quantum-server
Edit the OVS plugin configuration file /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini
[DATABASE]
sql_connection = mysql://quantum:[email protected]/quantum
[OVS]
tenant_network_type = gre
tunnel_id_ranges = 1:1000
enable_tunneling = True
[SECURITYGROUP]
firewall_driver = quantum.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
Edit /etc/quantum/api-paste.ini
[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
auth_host = 10.10.10.51
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = quantum
admin_password = 123456
Update /etc/quantum/quantum.conf:
[keystone_authtoken]
auth_host = 10.10.10.51
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = quantum
admin_password = 123456
signing_dir = /var/lib/quantum/keystone-signing
Restart quantum service
service quantum-server restart
2.9. Nova
Install nova related packages
apt-get install -y nova-api nova-cert novnc nova-consoleauth nova-scheduler nova-novncproxy nova-doc nova-conductor
Modify /etc/nova/api-paste.ini
[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
auth_host = 10.10.10.51
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = nova
admin_password = 123456
signing_dirname = /tmp/keystone-signing-nova
auth_version = v2.0
Modify /etc/nova/nova.conf
root@control:~# cat /etc/nova/nova.conf
[DEFAULT]
logdir=/var/log/nova
state_path=/var/lib/nova
lock_path=/run/lock/nova
verbose=True
api_paste_config=/etc/nova/api-paste.ini
compute_scheduler_driver=nova.scheduler.simple.SimpleScheduler
rabbit_host=10.10.10.51
nova_url=http://10.10.10.51:8774/v1.1/
sql_connection=mysql://nova:[email protected]/nova
root_helper=sudo nova-rootwrap /etc/nova/rootwrap.conf
use_deprecated_auth=false
auth_strategy=keystone
glance_api_servers=10.10.10.51:9292
image_service=nova.image.glance.GlanceImageService
novnc_enabled=true
novncproxy_base_url=http://172.16.10.200:6080/vnc_auto.html
novncproxy_port=6080
vncserver_proxyclient_address=10.10.10.51
vncserver_listen=0.0.0.0
network_api_class=nova.network.quantumv2.api.API
quantum_url=http://10.10.10.51:9696
quantum_auth_strategy=keystone
quantum_admin_tenant_name=service
quantum_admin_username=quantum
quantum_admin_password=123456
quantum_admin_auth_url=http://10.10.10.51:35357/v2.0
libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver
linuxnet_interface_driver=nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver=nova.virt.firewall.NoopFirewallDriver
security_group_api=quantum
#firewall_driver=nova.virt.libvirt.firewall.IptablesFirewallDriver
service_quantum_metadata_proxy = True
quantum_metadata_proxy_shared_secret = helloOpenStack
compute_driver=libvirt.LibvirtDriver
volume_api_class=nova.volume.cinder.API
volume_driver=nova.volume.driver.ISCSIDriver
enabled_apis=ec2,osapi_compute,metadata
osapi_volume_listen_port=5900
volume_group = cinder-volumes
volume_name_template = volume-%s
iscsi_helper=tgtadm
iscsi_ip_address=10.10.10.51
Initialize the nova database
nova-manage db sync
Restart Nova related services
cd /etc/init.d/; for i in $( ls nova-* );do sudo service $i restart; done
Check the startup of Nova related services
root@control:~# nova-manage service list
Binary Host Zone Status State Updated_At
nova-cert control internal enabled :-) 2013-10-28 09:56:13
nova-conductor control internal enabled :-) 2013-10-28 09:56:11
nova-consoleauth control internal enabled :-) 2013-10-28 09:56:11
nova-scheduler control internal enabled :-) 2013-10-28 09:56:13
nova-console control internal enabled :-) 2013-10-28 09:56:11
2.10. Cinder
Install cinder related packages
apt-get install -y cinder-api cinder-scheduler cinder-volume iscsitarget open-iscsi iscsitarget-dkms
Configure iscsi service
sed -i 's/false/true/g' /etc/default/iscsitarget
Restart service
service iscsitarget start
service open-iscsi start
Configure /etc/cinder/api-paste.ini
[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
service_protocol = http
service_host = 10.10.10.51
service_port = 5000
auth_host = 10.10.10.51
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = cinder
admin_password = 123456
signing_dir = /var/lib/cinder
Edit /etc/cinder/cinder.conf
root@control:~# cat /etc/cinder/cinder.conf
[DEFAULT]
rootwrap_config = /etc/cinder/rootwrap.conf
api_paste_config = /etc/cinder/api-paste.ini
iscsi_helper = tgtadm
volume_name_template = volume-%s
volume_group = cinder-volumes
verbose = True
auth_strategy = keystone
state_path = /var/lib/cinder
lock_path = /var/lock/cinder
volumes_dir = /var/lib/cinder/volumes
sql_connection = mysql://cinder:[email protected]/cinder
rabbit_host = 10.10.10.51
rabbit_password = guest
iscsi_ip_prefix = 10.10.10
rpc_backend = cinder.openstack.common.rpc.impl_kombu
iscsi_ip_address = 10.10.10.51
osapi_volume_extension = cinder.api.contrib.standard_extensions
Initialize the cinder database
cinder-manage db sync
Create a volume group named cinder-volumes. The virtual machine was created with two hard disks by default; the second disk is used for this volume group.
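A minimal sketch of creating the volume group, assuming the second disk shows up as /dev/sdb (adjust the device name to your environment):
pvcreate /dev/sdb
vgcreate cinder-volumes /dev/sdb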
Restart the cinder service
cd /etc/init.d/; for i in $( ls cinder-* );do sudo service $i restart; done
Confirm that the cinder services are running
cd /etc/init.d/; for i in $( ls cinder-* );do sudo service $i status; done
2.11. Horizon
Install horizon
apt-get install -y openstack-dashboard memcached
If necessary, you can delete the ubuntu theme
dpkg --purge openstack-dashboard-ubuntu-theme
Restart Apache and memcached
service apache2 restart; service memcached restart
Log in to OpenStack Dashboard
Open http://172.16.10.200/horizon; the login user name is admin and the password is 123456.
3. Network node
3.1 Preparation
Installation source
apt-get install -y ubuntu-cloud-keyring
echo deb http://ubuntu-cloud.archive.canonical.com/ubuntu precise-updates/grizzly main >> /etc/apt/sources.list.d/grizzly.list
Update system
apt-get update -y
apt-get upgrade -y
apt-get dist-upgrade -y
Install ntp service
apt-get install -y ntp
Configure ntp to synchronize control node time
sed -i 's/server 0.ubuntu.pool.ntp.org/#server 0.ubuntu.pool.ntp.org/g' /etc/ntp.conf
sed -i 's/server 1.ubuntu.pool.ntp.org/#server 1.ubuntu.pool.ntp.org/g' /etc/ntp.conf
sed -i 's/server 2.ubuntu.pool.ntp.org/#server 2.ubuntu.pool.ntp.org/g' /etc/ntp.conf
sed -i 's/server 3.ubuntu.pool.ntp.org/#server 3.ubuntu.pool.ntp.org/g' /etc/ntp.conf
sed -i 's/server ntp.ubuntu.com/server 10.10.10.51/g' /etc/ntp.conf
Restart ntp service
service ntp restart
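To verify that this node now syncs time from the control node, you can check the NTP peer list (ntpq ships with the ntp package):
ntpq -p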
Install other software
apt-get install -y vlan bridge-utils
Turn on ip forwarding
sed -i 's/#net.ipv4.ip_forward=1/net.ipv4.ip_forward=1/' /etc/sysctl.conf
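The sed only edits /etc/sysctl.conf; to apply the setting immediately without a reboot, run the same command used on the control node:
sysctl net.ipv4.ip_forward=1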
3.2 Network
Initial configuration of the three NICs (/etc/network/interfaces):
auto eth0
iface eth0 inet static
address 10.10.10.52
netmask 255.255.255.0
auto eth1
iface eth1 inet static
address 10.20.20.52
netmask 255.255.255.0
auto eth2
iface eth2 inet static
address 172.16.10.201
netmask 255.255.255.0
3.4. OpenVSwitch
Unlike the github guide, I did not split this into two separate steps. The guide splits it because its node has no Internet access at this point and the quantum packages have to be installed before the second part; here I give eth2 / br-ex an IP so the node keeps Internet access, and both parts can be done together.
Install openVSwitch. Note that all three packages must be installed; besides OVS itself, OpenStack also uses the system's brcompat module.
apt-get install openvswitch-switch openvswitch-brcompat openvswitch-datapath-dkms
Set ovs-brcompatd to start:
sed -i 's/# BRCOMPAT=no/BRCOMPAT=yes/g' /etc/default/openvswitch-switch
Start openvswitch-switch:
root@network:~# service openvswitch-switch restart
Killing ovs-brcompatd (1327)
Killing ovs-vswitchd (1195)
Killing ovsdb-server (1185)
Starting ovsdb-server
Configuring Open vSwitch system IDs
Starting ovs-vswitchd
2013-10-29T02:45:50Z|00001|brcompatd|WARN|Bridge compatibility is deprecated and may be removed no earlier than February 2013
Wait until ovs-brcompatd, ovs-vswitchd and ovsdb-server are all running, then check that the brcompat module is loaded (lsmod | grep brcompat):
brcompat 13512 0
openvswitch 84038 7 brcompat
If brcompat still cannot be started, execute the following command:
/etc/init.d/openvswitch-switch force-reload-kmod
If it still cannot be started, reboot the server. On Ubuntu 13.04 (64-bit), installing the three packages above is normally enough for everything to start successfully, without any additional steps.
Create a bridge
ovs-vsctl add-br br-int # br-int is used for vm integration
ovs-vsctl add-br br-ex # br-ex is used to access vm from the Internet
ovs-vsctl add-port br-ex eth2 # br-ex bridge to eth2
After the operations above, the eth2 NIC by itself no longer carries traffic; its IP configuration has to be moved onto br-ex in the interfaces file.
The final NIC configuration:
root@network:~# cat /etc/network/interfaces
auto lo
iface lo inet loopback
auto eth0
iface eth0 inet static
address 10.10.10.52
netmask 255.255.255.0
auto eth1
iface eth1 inet static
address 10.20.20.52
netmask 255.255.255.0
auto eth2
iface eth2 inet manual
up ifconfig $IFACE 0.0.0.0 up
down ifconfig $IFACE down
auto br-ex
iface br-ex inet static
address 172.16.10.201
netmask 255.255.255.0
gateway 172.16.10.254
dns-nameservers 8.8.8.8
Then restart the server or the networking service, and make sure both the external and internal networks are reachable before proceeding to the next step.
View bridged networks
ovs-vsctl list-br
ovs-vsctl show
3.5. Quantum
Install the Quantum openvswitch agent (layer 2 switching), l3 agent (layer 3 routing) and dhcp agent
apt-get -y install quantum-plugin-openvswitch-agent quantum-dhcp-agent quantum-l3-agent quantum-metadata-agent
Edit /etc/quantum/api-paste.ini
[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
auth_host = 10.10.10.51
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = quantum
admin_password = 123456
Edit OVS plugin configuration /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini
[DATABASE]
sql_connection = mysql://quantum:[email protected]/quantum
[OVS]
tenant_network_type = gre
tunnel_id_ranges = 1:1000
integration_bridge = br-int
tunnel_bridge = br-tun
local_ip = 10.20.20.52
enable_tunneling = True
[SECURITYGROUP]
firewall_driver = quantum.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
Set /etc/quantum/quantum.conf
root@network:~# cat /etc/quantum/quantum.conf | grep -v ^# | grep -v ^$
[DEFAULT]
lock_path = $state_path/lock
bind_host = 0.0.0.0
bind_port = 9696
core_plugin = quantum.plugins.openvswitch.ovs_quantum_plugin.OVSQuantumPluginV2
api_paste_config = /etc/quantum/api-paste.ini
control_exchange = quantum
fake_rabbit = False
rabbit_host = 10.10.10.51
rabbit_password = guest
rabbit_port = 5672
rabbit_userid = guest
notification_driver = quantum.openstack.common.notifier.rpc_notifier
default_notification_level = INFO
notification_topics = notifications
[QUOTAS]
[DEFAULT_SERVICETYPE]
[AGENT]
root_helper = sudo quantum-rootwrap /etc/quantum/rootwrap.conf
[keystone_authtoken]
auth_host = 10.10.10.51
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = quantum
admin_password = 123456
signing_dir = /var/lib/quantum/keystone-signing
Update /etc/quantum/metadata_agent.ini (communication with control node)
root@network:~# cat /etc/quantum/metadata_agent.ini | grep -v ^# | grep -v ^$
[DEFAULT]
auth_url = http://10.10.10.51:35357/v2.0
auth_region = RegionOne
admin_tenant_name = service
admin_user = quantum
admin_password = 123456
nova_metadata_ip = 10.10.10.51
nova_metadata_port = 8775
metadata_proxy_shared_secret = helloOpenStack
/etc/quantum/l3_agent.ini and /etc/quantum/dhcp_agent.ini configuration files are not changed
Set sudo permissions
root@network:~# cat /etc/sudoers.d/quantum_sudoers
quantum ALL=NOPASSWD: ALL
Restart all quantum services
cd /etc/init.d/; for i in $( ls quantum-*); do sudo service $i restart; done
Check the status of all the services, and check the logs under /var/log/quantum
cd /etc/init.d/; for i in $( ls quantum-*); do sudo service $i status; done
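For example, a quick way to scan the quantum logs for problems (the default log directory on Ubuntu is /var/log/quantum):
grep -i error /var/log/quantum/*.log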
4. Compute node
4.1 Prepare the node
Installation source
root@c03:/var/log/nova# cat /etc/apt/sources.list.d/cloud-archive.list
deb http://ubuntu-cloud.archive.canonical.com/ubuntu precise-updates/grizzly main
root@c03:/var/log/nova# cat /etc/apt/sources.list.d/grizzly.list
deb http://archive.gplhost.com/debian grizzly main
deb http://archive.gplhost.com/debian grizzly-backports main
Update system
apt-get update -y
apt-get upgrade -y
apt-get dist-upgrade -y
Install ntp service
apt-get install -y ntp
Configure ntp to synchronize control node time
sed -i 's/server 0.ubuntu.pool.ntp.org/#server 0.ubuntu.pool.ntp.org/g' /etc/ntp.conf
sed -i 's/server 1.ubuntu.pool.ntp.org/#server 1.ubuntu.pool.ntp.org/g' /etc/ntp.conf
sed -i 's/server 2.ubuntu.pool.ntp.org/#server 2.ubuntu.pool.ntp.org/g' /etc/ntp.conf
sed -i 's/server 3.ubuntu.pool.ntp.org/#server 3.ubuntu.pool.ntp.org/g' /etc/ntp.conf
sed -i 's/server ntp.ubuntu.com/server 10.10.10.51/g' /etc/ntp.conf
Restart ntp service
service ntp restart
Install other software
apt-get install -y vlan bridge-utils
Turn on ip forwarding
sed -i 's/#net.ipv4.ip_forward=1/net.ipv4.ip_forward=1/' /etc/sysctl.conf
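As on the other nodes, apply the change immediately with:
sysctl net.ipv4.ip_forward=1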
4.2. Network Configuration
Here eth2 is only used to download the software packages and can be removed after use.
root@c03:~# cat /etc/network/interfaces
auto lo
iface lo inet loopback
auto eth2
iface eth2 inet dhcp
auto eth0
iface eth0 inet static
address 10.10.10.55
netmask 255.255.255.0
auto eth1
iface eth1 inet static
address 10.20.20.55
netmask 255.255.255.0
4.3 Install nova computing package
Note:
nova-compute-kvm requires that your CPU supports hardware-assisted virtualization (HVM) such as Intel VT-x or AMD-V. If your CPU does not support this, or if you are already running in a virtualized environment, you can instead use the nova-compute-qemu package. This package provides software-based virtualization.
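The install command itself is missing from the original text; based on the note above (and the libvirt_type=qemu setting shown later in nova-compute.conf), it would be something like:
apt-get install -y nova-compute-kvm
(or nova-compute-qemu when running inside a virtual machine, as in this test setup)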
Modify the authtoken verification section in /etc/nova/api-paste.ini
[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
auth_host = 10.10.10.51
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = nova
admin_password = 123456
signing_dir = /tmp/keystone-signing-nova
auth_version = v2.0
Modify /etc/nova/nova.conf
root@c03:~# cat /etc/nova/nova.conf
[DEFAULT]
dhcpbridge_flagfile=/etc/nova/nova.conf
dhcpbridge=/usr/bin/nova-dhcpbridge
logdir=/var/log/nova
state_path=/var/lib/nova
lock_path=/var/lock/nova
force_dhcp_release=True
iscsi_helper=tgtadm
iscsi_ip_address=10.10.10.51
libvirt_use_virtio_for_bridges=True
connection_type=libvirt
root_helper=sudo nova-rootwrap /etc/nova/rootwrap.conf
verbose=True
ec2_private_dns_show_ip=True
api_paste_config=/etc/nova/api-paste.ini
volumes_path=/var/lib/nova/volumes
enabled_apis=ec2,osapi_compute,metadata
verbose=True
my_ip=10.10.10.55
rabbit_host = 10.10.10.51
rabbit_password = guest
auth_strategy=keystone
ec2_host=10.10.10.51
ec2_url=http://10.10.10.51:8773/services/Cloud
libvirt_use_virtio_for_bridges=True
network_api_class=nova.network.quantumv2.api.API
quantum_url=http://10.10.10.51:9696
quantum_auth_strategy=keystone
quantum_admin_tenant_name=service
quantum_admin_username=quantum
quantum_admin_password=123456
quantum_admin_auth_url=http://10.10.10.51:35357/v2.0
firewall_driver=nova.virt.firewall.NoopFirewallDriver
security_group_api=quantum
compute_driver=libvirt.LibvirtDriver
connection_type=libvirt
volume_api_class=nova.volume.cinder.API
volume_driver=nova.volume.driver.ISCSIDriver
enabled_apis=ec2,osapi_compute,metadata
osapi_volume_listen_port=5900
cinder_catalog_info=volume:cinder:internalURL
iscsi_helper=tgtadm
volume_name_template = volume-%s
volume_group = cinder-volumes
glance_api_servers=10.10.10.51:9292
image_service=nova.image.glance.GlanceImageService
novnc_enabled=true
novncproxy_base_url=http://172.16.10.200:6080/vnc_auto.html
novncproxy_port=6080
vncserver_proxyclient_address=10.10.10.55
vncserver_listen=0.0.0.0
Check the libvirt type
root@c03:~# cat /etc/nova/nova-compute.conf
[DEFAULT]
libvirt_type=qemu
compute_driver=libvirt.LibvirtDriver
Delete libvirt's default virtual bridge (optional; leaving it in place does no harm):
virsh net-destroy default
virsh net-undefine default
Start nova-compute service
service nova-compute restart
Check the nova services; the :-) smiley means the service is up:
root@control:~# nova-manage service list|grep -v c01 |grep -v c02
Binary Host Zone Status State Updated_At
nova-cert control internal enabled :-) 2013-10-29 04:29:43
nova-conductor control internal enabled :-) 2013-10-29 04:29:38
nova-consoleauth control internal enabled :-) 2013-10-29 04:29:42
nova-scheduler control internal enabled :-) 2013-10-29 04:29:43
nova-compute c03 nova enabled :-) 2013-10-29 04:29:36
nova-console control internal enabled :-) 2013-10-29 04:29:43
4.4. OpenVSwitch
Install openVSwitch. Note that all three packages must be installed; besides OVS itself, OpenStack also uses the system's brcompat module.
apt-get install openvswitch-switch openvswitch-brcompat openvswitch-datapath-dkms
Set ovs-brcompatd to start:
sed -i 's/# BRCOMPAT=no/BRCOMPAT=yes/g' /etc/default/openvswitch-switch
Start openvswitch-switch:
Killing ovs-brcompatd (1327)
Killing ovs-vswitchd (1195)
Killing ovsdb-server (1185)
Starting ovsdb-server
Configuring Open vSwitch system IDs
Starting ovs-vswitchd
2013-10-29T02:45:50Z|00001|brcompatd|WARN|Bridge compatibility is deprecated and may be removed no earlier than February 2013
Wait until ovs-brcompatd, ovs-vswitchd and ovsdb-server are all running, then check that the brcompat module is loaded (lsmod | grep brcompat):
brcompat 13512 0
openvswitch 84038 7 brcompat
If brcompat still cannot be started, execute the following command:
/etc/init.d/openvswitch-switch force-reload-kmod
If it still cannot be started, reboot the server. On Ubuntu 13.04 (64-bit), installing the three packages above is normally enough for everything to start successfully, without any additional steps.
Create a br-int bridge
ovs-vsctl add-br br-int
4.5. Quantum
Install Quantum openvswitch agent:
apt-get install quantum-plugin-openvswitch-agent
Edit the OVS plug-in configuration file /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini:
root@c03:~# cat /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini | grep -v ^# | grep -v ^$
[DATABASE]
sql_connection = mysql://quantum:[email protected]/quantum
reconnect_interval = 2
[OVS]
tenant_network_type = gre
tunnel_id_ranges = 1:1000
local_ip = 10.20.20.55
enable_tunneling = True
[AGENT]
polling_interval = 2
[SECURITYGROUP]
firewall_driver = quantum.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
Edit /etc/quantum/quantum.conf:
root@c03:~# cat /etc/quantum/quantum.conf | grep -v ^# | grep -v ^$
[DEFAULT]
verbose = True
lock_path = $state_path/lock
bind_host = 0.0.0.0
bind_port = 9696
core_plugin = quantum.plugins.openvswitch.ovs_quantum_plugin.OVSQuantumPluginV2
api_paste_config = /etc/quantum/api-paste.ini
control_exchange = quantum
fake_rabbit = False
rabbit_host = 10.10.10.51
rabbit_password = guest
notification_driver = quantum.openstack.common.notifier.rpc_notifier
default_notification_level = INFO
notification_topics = notifications
[QUOTAS]
[DEFAULT_SERVICETYPE]
[AGENT]
root_helper = sudo quantum-rootwrap /etc/quantum/rootwrap.conf
[keystone_authtoken]
auth_host = 10.10.10.51
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = quantum
admin_password = 123456
signing_dir = /var/lib/quantum/keystone-signing
Start the service:
service quantum-plugin-openvswitch-agent restart
Install other software:
For an instance to successfully attach a volume created by cinder, a few helper packages need to be installed on the compute node. This is mentioned in the troubleshooting section of the official documentation and is easily missed (with a minimal system install, mysql-client is not installed either):
libsysfs2_2.1.0+repack-2_amd64.deb multipath-tools_0.4.9-3ubuntu7_amd64.deb sg3-utils_1.33-1build1_amd64.deb
mysql-client-core-5.5_5.5.32-0ubuntu0.13.04.1_amd64.deb sysfsutils_2.1.0+repack-2_amd64.deb
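If the compute node still has Internet access (for example through the temporary NAT NIC), the same packages can be installed with apt-get instead of downloading the .deb files by hand; the package names below are taken from the list above:
apt-get install -y sysfsutils sg3-utils multipath-tools mysql-client-core-5.5
Otherwise, install the downloaded files with dpkg -i *.deb.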
5. Start creating VMs
You can follow the official documentation directly for these two steps, but it is all done from the command line, which looks clever but is not as simple and clear as logging in to the dashboard console.
5.1 Create quantum network
The basic steps are summarized as follows (a command-line sketch follows the list):
Create tenants (each tenant can have its own networks and virtual machines, but all tenants share one external network)
Create the external network
Create the external subnet (the floating IP range is set here; floating IPs are mapped to the virtual machines' internal IPs so that they can be reached from outside)
Create the internal network
Create the internal subnet (its addresses are assigned to the virtual machine instances)
Create a router
Set a gateway (the external network) on the router and attach the internal subnet, so that the external and internal networks are connected through the router.
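A minimal command-line sketch of these steps; the names ext_net, int_net and router_demo are examples, the floating IP pool is an assumption, and the IDs in angle brackets come from the output of the previous commands:
keystone tenant-create --name demo
quantum net-create ext_net --router:external=True
quantum subnet-create --allocation-pool start=172.16.10.100,end=172.16.10.150 --gateway 172.16.10.254 ext_net 172.16.10.0/24 --enable_dhcp=False
quantum net-create --tenant-id <tenant_id> int_net
quantum subnet-create --tenant-id <tenant_id> int_net 192.168.1.0/24
quantum router-create --tenant-id <tenant_id> router_demo
quantum router-gateway-set <router_id> <ext_net_id>
quantum router-interface-add <router_id> <int_subnet_id>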
Note that there is a command to check whether the network environment is intact:
root@control:~# quantum agent-list
+--------------------------------------+--------------------+---------+-------+----------------+
| id | agent_type | host | alive | admin_state_up |
+--------------------------------------+--------------------+---------+-------+----------------+
| 03d00de0-d78e-47ae-8b64-10971e140b45 | Open vSwitch agent | network | :-) | True |
| a7c840b3-de73-4ee4-8e1f-acb4ff9b2046 | L3 agent | network | :-) | True |
| b6b66ba7-e733-42a4-bdf0-796787d48955 | DHCP agent | network | :-) | True |
| c3ce6c66-8a9a-4786-8d92-5325df54e0f0 | Open vSwitch agent | c03 | :-) | True |
+--------------------------------------+--------------------+---------+-------+----------------+
The resulting network topology is as follows:
Admin tenant
The admin tenant has enough privileges to see all of the network topologies, but it cannot see the virtual machines inside the other tenants.
Demo tenant
This tenant has two subnets, 192.168.1.0/24 and 192.168.2.0/24.
The two subnets can reach each other through the router that connects them to the external network, as the topology diagram shows; hosts on the same internal subnet can reach each other directly without any routing.
5.2 Virtual machines
Launch an instance: give it a name, select an image, select a network, and start it.
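The same launch can also be done from the command line; a hypothetical example (the IDs come from glance image-list and quantum net-list, and test_vm is just an example name):
nova boot --flavor 1 --image <image_id> --nic net-id=<int_net_id> test_vm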
For VNC access, note that the images downloaded from the official site only allow key-based ssh login; sshd_config is set to refuse user name / password login. You can either download the key and log in with ssh, or change the sshd configuration through the VNC console and build a new image from it.
After installing the extra packages mentioned above, the volume can finally be attached.
The debian image here was imported from an OVF template generated by ESXi 5, the windows2003 image was made by hand with a KVM virtual machine, and the others were downloaded from the Internet.
The image formats supported for import are as follows:
Attachment: making a Windows image
Making a Windows 2003 image for OpenStack
Preparation
Download virtio-win-1.1.16.vfd
virtio-win-0.1-65.iso
windows_sp2.iso
Start
qemu-img create -f raw windows2003.img 8G
sudo qemu-kvm -m 512 -no-reboot -boot order=d -drive file=windows2003.img,if=virtio,boot=off -drive file=WIN2003_SP2.iso,media=cdrom,boot=on -fda virtio-win-1.1.16.vfd -boot order=d,menu=on -usbdevice tablet -nographic -vnc :1
Then quickly connect with vncviewer 127.0.0.1:5901 and press F12 to bring up the boot menu; otherwise it automatically tries to boot from the hard disk. If that happens, kill the kvm process, start it again and press F12 quickly.
Boot from the CD by default.
Press F8 to accept, partition and format the disk, then let the installer reboot and continue.
sudo qemu-kvm -m 512 -no-reboot -boot order=d -drive file=windows2003.img,if=virtio,boot=off -drive file=WIN2003_SP2.iso,media=cdrom,boot=on -fda virtio-win-1.1.16.vfd -boot order=d,menu=on -usbdevice tablet -nographic -vnc :1
Press F12 and choose to boot from the hard disk.
Continue the installation after the reboot.
After the installation is complete, shut down the virtual machine.
Boot the image again and load the virtio driver:
sudo qemu-kvm -m 512 -drive file=windows2003.img -cdrom virtio-win-0.1-65.iso -net nic,model=virtio -net user -boot order=c -usbdevice tablet -nographic -vnc :1
After installing the virtio driver, shut down and install management tools
sudo qemu-kvm -m 512 -drive file=windows2003r.img -cdrom WIN2003_SP2.iso -net nic,model=virtio -net user -boot order=c -usbdevice tablet -nographic -vnc :1
Upload
glance add name="win2003X64" is_public=true container_format=ovf disk_format=raw < windows2003.img