OpenStack Icehouse Deployment and Installation
OpenStack:
IaaS cloud stack, a "CloudOS"
Private cloud (built and operated by a company for its own use)
Public cloud (rented from a cloud provider)
Hybrid cloud (a mix of rented and self-built)
OpenStack components:
Dashboard: Horizon, the web GUI;
Compute: Nova, manages the entire lifecycle of VMs; chiefly responsible for creating, scheduling, and starting virtual machine instances;
Networking: Neutron, formerly called Quantum (nova-network before it became a separate project); provides network connectivity and services on demand via an API, uses a plugin design that supports network frameworks from many network service providers, and supports Open vSwitch;
Object Storage: Swift, stores and retrieves unstructured data objects through a RESTful interface; a highly fault-tolerant, replicated, scale-out architecture, i.e., distributed storage;
Block Storage: Cinder, originally part of Nova as nova-volume; provides persistent block storage for virtual machines;
Identity Service: Keystone, provides authentication and authorization for all OpenStack services, plus a catalog of their API endpoints;
Image Service: Glance, stores and retrieves disk image files;
Telemetry: Ceilometer, implements monitoring and metering services;
Orchestration: Heat, coordinates multiple components;
Database service: Trove, implements DBaaS;
Data processing service: Sahara, manages Hadoop inside OpenStack;
OpenStack capabilities:
VMs on demand
provisioning
snapshotting
Volumes
Networks
Multi-tenancy
quotas for different users
user can be associated with multiple tenants
Object storage for VM images and arbitrary files
Cloud Computing
Cloud Service Model
Cloud Deployment Model
OpenStack basic components
OpenStack software environment
OpenStack Projects:
OpenStack Compute (code-name Nova)
core project since the Austin release
OpenStack Networking (code-name Neutron)
core project since the Folsom release
OpenStack Object Storage (code-name Swift)
core project since the Austin release
OpenStack Block Storage (code-name Cinder)
core project since the Folsom release
OpenStack Identity (code-name Keystone)
core project since the Essex release
OpenStack Image Service (code-name Glance)
core project since the Bexar release
OpenStack Dashboard (code-name Horizon)
core project since the Essex release
OpenStack Metering (code-name Ceilometer)
core project since the Havana release
OpenStack Orchestration (code-name Heat)
core project since the Havana release
OpenStack conceptual architecture (Havana)
OpenStack Logical Architecture
OpenStack conceptual arch
Request Flow for Provisioning an Instance
Two-node architecture with legacy networking (nova-network)
Three-node architecture with OpenStack Networking (neutron)
OpenStack installation and configuration:
Keystone:
Identity has two main functions:
User management: authentication and authorization
There are two authentication methods:
token;
account and password;
Service catalog: a registry of all available services, including their API endpoint paths;
Core terms:
User: a user can be associated with multiple tenants
Tenant: a tenant corresponds to a project or an organization
Role: the role a user holds within a tenant
Token: used for authentication, authorization, and so on
Service: a registered OpenStack service
Endpoint: the access entry point of a service
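To make these terms concrete, here is a minimal sketch of authenticating against the Keystone v2.0 API with curl; the host name, tenant, and credentials follow this lab's conventions and are assumptions at this point:
# curl -s http://controller:5000/v2.0/tokens -H "Content-Type: application/json" \
  -d '{"auth": {"tenantName": "admin", "passwordCredentials": {"username": "admin", "password": "admin"}}}'
The JSON response carries access.token.id (the token) and access.serviceCatalog, which lists every registered service together with its endpoints.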
EPEL 6 Icehouse package repository: https://repos.fedorapeople.org/repos/openstack/EOL/openstack-icehouse/epel-6/
Management tool: keystone-manage
Client program: keystone
OpenStack services and clients
Logical Architecture - keystone
Backend
Token Driver:kvs/memcached/sql
Catalog Driver: kvs/sql/template
Identity Driver: kvs/sql/ldap/pam
Policy Driver:rules
Auth Token Usage
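A sketch of token usage: once a client holds a token, it presents it on every request in the X-Auth-Token header and the service validates it against Keystone (TOKEN is a placeholder for the token id obtained above):
# curl -s -H "X-Auth-Token: $TOKEN" http://controller:5000/v2.0/tenants
This returns the tenants the token's user can access; every other OpenStack API (Glance, Nova, Neutron) consumes the same header.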
Image Service:
Code name: Glance; registers, discovers, and retrieves disk image files in OpenStack;
Where are VM image files stored?
On an ordinary filesystem, an object storage system (Swift), S3 storage, or an HTTP server;
Logical Architecture - glance
Glance access mechanism
Glance components:
glance-api
the API service interface of Glance; accepts Image Service API requests to view, download, and store image files;
glance-registry
stores, processes, and retrieves image metadata, such as image size and type;
database
stores image file metadata;
image file storage repository
supports multiple image storage backends, including ordinary filesystems, object storage, RADOS block devices, HTTP, and Amazon S3;
Glance Architecture
Glance storage options
Disk image files:
(1) Build your own: Oz (KVM), VMBuilder (KVM, Xen), VeeWee (KVM), imagefactory
(2) Obtain prebuilt templates: CirrOS, Ubuntu, Fedora, openSUSE, or the Rackspace cloud image builder
Disk images used in OpenStack must meet the following requirements:
(1) allow OpenStack to retrieve their metadata and modify their data;
(2) support resizing of the image;
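Requirement (2) can be exercised directly with qemu-img; a quick sketch using the CirrOS image that appears later in this lab (the file name is assumed to be present):
# qemu-img info cirros-no_cloud-0.3.0-x86_64-disk.img
# qemu-img resize cirros-no_cloud-0.3.0-x86_64-disk.img +1G    #grow the virtual size by 1 GB
Growing a qcow2 image is safe; shrinking requires the guest filesystem to be shrunk first.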
Compute Service
Code name: Nova
Install qpid:
# yum install qpid-cpp-server
Edit the configuration file /etc/qpidd.conf and set "auth=no"
# service qpidd start
Logical Architecture - nova-compute
Roles in the Compute service:
management roles
hypervisor
Note: when configuring a compute node, the following extra parameters must be set in the [DEFAULT] section (see the snippet below):
vif_plugging_timeout=10
vif_plugging_is_fatal=false
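A sketch of where these land in the compute node's /etc/nova/nova.conf; with vif_plugging_is_fatal=false, nova-compute will still boot the instance even if Neutron's VIF-plugged notification does not arrive within vif_plugging_timeout seconds:
[root@compute1 ~]# vim /etc/nova/nova.conf
[DEFAULT]
vif_plugging_timeout=10
vif_plugging_is_fatal=false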
Network Service:
Code name: Neutron; formerly called Quantum;
Physical network architecture in OpenStack:
Management network:
Data network:
External network:
API network:
Openstack Networking Architecture
Logical Architecture - neutron
Flat Network
Network bridge
'dnsmasq' DHCP server
Bridge as default gw
Limitations:
Single L2 domain and ARP space, no tenant isolation
Single IP pool
Flat Network Deployment
Flat Network Manager
Vlan Network
Network bridge
Vlan interface over physical interface
Limitations:
Scaling limited to 4k VLAN tags
VLANs are configured manually on the switch by the admin
May have issues with overlapping MACs
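A sketch of the manual plumbing this implies on a Linux host: create a VLAN subinterface on the physical NIC and attach it to a per-network bridge (the interface names and VLAN ID are placeholders; assumes bridge-utils is installed):
# ip link add link eth1 name eth1.100 type vlan id 100
# ip link set eth1.100 up
# brctl addbr br100 && brctl addif br100 eth1.100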
Vlan Network Deployment
Neutron
It provides "network connectivity as a service" between interface devices (e.g., vNICs) managed by other OpenStack services (e.g., Nova)
The service works by allowing users to create their own networks and then attach interfaces to them
Neutron has a pluggable architecture to support many popular networking vendors and technologies
neutron-server accepts API requests and routes them to the correct Neutron plugin
Plugins and agents perform the actual actions, such as plugging/unplugging ports, creating networks and subnets, and IP addressing
It also has a message queue to route info between neutron-server and various agents
It has a neutron database to store networking state for particular plugins
Dashboard:
Python Django
Web framework
Workflow for launching a VM instance
Notes:
1. In Neutron's configuration files, replace auth_uri with identity_uri in the [keystone_authtoken] section;
2. The group owner of each configuration file must be changed to the user the corresponding service runs as; otherwise the service will fail to start (see the sketch below);
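A sketch of the ownership fix from note 2, using Neutron's ML2 file as the example (the same pattern applies to each service's files; this mirrors the fix applied on compute1 later in this walkthrough):
# chown root:neutron /etc/neutron/neutron.conf /etc/neutron/plugins/ml2/ml2_conf.ini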
Block Storage Service
Code name: Cinder
Components:
cinder-api
cinder-volume
cinder-scheduler
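Cinder is not actually deployed in the lab below; for completeness, a minimal sketch of the usual controller-side start, following the same naming and password conventions as the other services here (all values are assumptions):
[root@controller ~]# yum -y install openstack-cinder
[root@controller ~]# mysql -e "CREATE DATABASE cinder; GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'cinder';"
[root@controller ~]# su -s /bin/sh -c "cinder-manage db sync" cinder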
Deployment tools:
Fuel (from Mirantis)
devstack
Lab environment:
Controller:
OS: CentOS 6.6
Kernel: 2.6.32-754.12.1.el6.x86_64
NIC 1: vm0, 192.168.10.6
NIC 2: vm8, 192.168.243.138
Compute1:
OS: CentOS 6.6
Kernel: 2.6.32-754.12.1.el6.x86_64
NIC 1: vm0, 192.168.10.7
NIC 2: vm1
Network node:
OS: CentOS 6.6
Kernel: 2.6.32-754.12.1.el6.x86_64
NIC 1: vm0, 192.168.10.8
NIC 2: vm1
NIC 3: vm8
Storage node:
OS: CentOS 6.6
Kernel: 2.6.32-754.12.1.el6.x86_64
NIC 1: vm0, 192.168.10.9
Controller:
Compute1:
[root@compute1 ~]# hostname compute1.smoke.com
[root@compute1 ~]# vim /etc/hosts
192.168.10.6 controller.smoke.com controller
192.168.10.7 compute1.smoke.com compute1
192.168.10.8 network.smoke.com network
192.168.10.9 stor1.smoke.com stor1
[root@compute1 ~]# ifconfig
eth0      Link encap:Ethernet  HWaddr 00:0C:29:90:D0:92
          inet addr:192.168.10.7  Bcast:192.168.10.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:fe90:d092/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:2593 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1187 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:259629 (253.5 KiB)  TX bytes:120828 (117.9 KiB)
eth1      Link encap:Ethernet  HWaddr 00:0C:29:90:D0:9C
          inet6 addr: fe80::20c:29ff:fe90:d09c/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:1458 errors:0 dropped:0 overruns:0 frame:0
          TX packets:27 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:216549 (211.4 KiB)  TX bytes:2110 (2.0 KiB)
lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:2 errors:0 dropped:0 overruns:0 frame:0
          TX packets:2 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:168 (168.0 b)  TX bytes:168 (168.0 b)
[root@compute1 ~]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
192.168.10.0    0.0.0.0         255.255.255.0   U     0      0        0 eth0
169.254.0.0     0.0.0.0         255.255.0.0     U     1002   0        0 eth0
169.254.0.0     0.0.0.0         255.255.0.0     U     1003   0        0 eth1
0.0.0.0         192.168.10.6    0.0.0.0         UG    0      0        0 eth0
[root@compute1 ~]# crontab -l
#time sync by haojiang at 2019-01-10
*/5 * * * * /usr/sbin/ntpdate ntp1.aliyun.com &> /dev/null
Controller:
[root@controller ~]# iptables -t nat -A POSTROUTING -s 192.168.10.0/24 -j SNAT --to-source 192.168.243.138
[root@controller ~]# service iptables save
[root@controller ~]# vim /etc/sysctl.conf
net.ipv4.ip_forward = 1
[root@controller ~]# sysctl -p
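The other nodes use the controller as their default gateway, so it must forward and source-NAT their traffic. A quick sanity check that the rule is in place (a sketch; the SNAT rule should appear in the POSTROUTING chain):
[root@controller ~]# iptables -t nat -L POSTROUTING -n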
Network:
[root@network ~]# hostname network.smoke.com
[root@network ~]# vim /etc/hosts
192.168.10.6 controller.smoke.com controller
192.168.10.7 compute1.smoke.com compute1
192.168.10.8 network.smoke.com network
192.168.10.9 stor1.smoke.com stor1
[root@network ~]# chkconfig NetworkManager off
[root@network ~]# ifconfig
eth0      Link encap:Ethernet  HWaddr 00:0C:29:D6:6A:92
          inet addr:192.168.10.8  Bcast:192.168.10.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:fed6:6a92/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:325 errors:0 dropped:0 overruns:0 frame:0
          TX packets:317 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:33099 (32.3 KiB)  TX bytes:36628 (35.7 KiB)
eth1      Link encap:Ethernet  HWaddr 00:0C:29:D6:6A:9C
          inet6 addr: fe80::20c:29ff:fed6:6a9c/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:1 errors:0 dropped:0 overruns:0 frame:0
          TX packets:18 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:342 (342.0 b)  TX bytes:1404 (1.3 KiB)
eth2      Link encap:Ethernet  HWaddr 00:0C:29:D6:6A:A6
          inet addr:192.168.243.129  Bcast:192.168.243.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:fed6:6aa6/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:196 errors:0 dropped:0 overruns:0 frame:0
          TX packets:58 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:14099 (13.7 KiB)  TX bytes:4981 (4.8 KiB)
lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:8 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:600 (600.0 b)  TX bytes:600 (600.0 b)
[root@network ~]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
192.168.243.0   0.0.0.0         255.255.255.0   U     0      0        0 eth2
192.168.10.0    0.0.0.0         255.255.255.0   U     0      0        0 eth0
169.254.0.0     0.0.0.0         255.255.0.0     U     1002   0        0 eth0
169.254.0.0     0.0.0.0         255.255.0.0     U     1003   0        0 eth1
169.254.0.0     0.0.0.0         255.255.0.0     U     1004   0        0 eth2
0.0.0.0         192.168.243.2   0.0.0.0         UG    0      0        0 eth2
[root@network ~]# crontab -l
#time sync by haojiang at 2019-01-10
*/5 * * * * /usr/sbin/ntpdate ntp1.aliyun.com &> /dev/null
stor1:
[root@stor1 ~]# hostname stor1.smoke.com
[root@stor1 ~]# vim /etc/hosts
192.168.10.6 controller.smoke.com controller
192.168.10.7 compute1.smoke.com compute1
192.168.10.8 network.smoke.com network
192.168.10.9 stor1.smoke.com stor1
[root@stor1 ~]# chkconfig NetworkManager off
[root@stor1 ~]# ifconfig
eth0      Link encap:Ethernet  HWaddr 00:0C:29:C7:68:35
          inet addr:192.168.10.9  Bcast:192.168.10.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:fec7:6835/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:9710 errors:0 dropped:0 overruns:0 frame:0
          TX packets:2721 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:900402 (879.2 KiB)  TX bytes:200470 (195.7 KiB)
lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:757 errors:0 dropped:0 overruns:0 frame:0
          TX packets:757 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:266573 (260.3 KiB)  TX bytes:266573 (260.3 KiB)
[root@stor1 ~]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
192.168.10.0    0.0.0.0         255.255.255.0   U     0      0        0 eth0
169.254.0.0     0.0.0.0         255.255.0.0     U     1002   0        0 eth0
0.0.0.0         192.168.10.6    0.0.0.0         UG    0      0        0 eth0
[root@stor1 ~]# crontab -l
#time sync by haojiang at 2019-01-10
*/5 * * * * /usr/sbin/ntpdate ntp1.aliyun.com &> /dev/null
Install MariaDB:
[root@controller ~]# vim /etc/yum.repos.d/openstack-Icehouse.repo
[openstack-icehouse]
name=OpenStack Icehouse Repository
baseurl=https://repos.fedorapeople.org/repos/openstack/EOL/openstack-icehouse/epel-6
enabled=1
skip_if_unavailable=0
gpgcheck=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-RDO-Icehouse
priority=98
The openstack-icehouse repo then fails with: https://repos.fedorapeople.org/repos/openstack/EOL/openstack-icehouse/epel-6/repodata/repomd.xml: [Errno 14] problem making ssl connection
[root@controller ~]# mv /etc/yum.repos.d/openstack-Icehouse.repo /etc/yum.repos.d/openstack-Icehouse.repo.bak
[root@controller ~]# yum -y update    #workaround 1
[root@controller ~]# vim /etc/yum.repos.d/openstack-Icehouse.repo    #workaround 2
enabled=0
[root@controller ~]# mv /etc/pki/tls/certs/ca-bundle.crt /etc/pki/tls/certs/ca-bundle.crt.bak
[root@controller ~]# yum -y install ca-certificates
[root@controller ~]# vim /etc/yum.repos.d/openstack-Icehouse.repo
enabled=1
[root@controller ~]# yum -y update curl    #workaround 3
[root@controller ~]# yum -y install http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
The epel repo then fails with: Error: Cannot retrieve metalink for repository: epel. Please verify its path and try again
[root@controller ~]# vim /etc/yum.repos.d/epel.repo    #comment out mirrorlist and uncomment baseurl;
[root@controller ~]# vim /etc/yum.repos.d/epel-testing.repo
[root@controller ~]# yum -y install mariadb-galera-server
[root@controller ~]# service mysqld start
[root@controller ~]# chkconfig mysqld on
Install Keystone:
[root@controller ~]# yum -y install openstack-keystone python-keystoneclient openstack-utils
[root@controller ~]# mysql
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 3
Server version: 5.5.44-MariaDB-log Source distribution
Copyright (c) 2000, 2015, Oracle, MariaDB Corporation Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MariaDB [(none)]> CREATE DATABASE keystone;
Query OK, 1 row affected (0.04 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'keystone';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'keystone';
Query OK, 0 rows affected (0.01 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'controller.smoke.com' IDENTIFIED BY 'keystone';
Query OK, 0 rows affected (0.04 sec)
MariaDB [(none)]> FLUSH PRIVILEGES;
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> exit
Bye
[root@controller ~]# vim /etc/keystone/keystone.conf
[database]
connection=mysql://keystone:keystone@192.168.10.6/keystone
[root@controller ~]# su -s /bin/sh -c 'keystone-manage db_sync' keystone    #sync the database
[root@controller ~]# ADMIN_TOKEN=$(openssl rand -hex 10)
[root@controller ~]# echo $ADMIN_TOKEN
010a2a38a44d76e269ed
[root@controller ~]# echo $ADMIN_TOKEN > .admin_token.rc
[root@controller ~]# vim /etc/keystone/keystone.conf
[DEFAULT]
admin_token=010a2a38a44d76e269ed
[root@controller ~]# keystone-manage pki_setup --keystone-user keystone --keystone-group keystone
[root@controller ~]# chown -R keystone:keystone /etc/keystone/ssl/
[root@controller ~]# chmod -R o-rwx /etc/keystone/ssl/
[root@controller ~]# service openstack-keystone start
[root@controller ~]# chkconfig openstack-keystone on
[root@controller ~]# export OS_SERVICE_TOKEN=$ADMIN_TOKEN
[root@controller ~]# export OS_SERVICE_ENDPOINT=http://controller:35357/v2.0
[root@controller ~]# keystone --os-token $ADMIN_TOKEN user-list
[root@controller ~]# keystone user-list
[root@controller ~]# keystone user-create --name=admin --pass=admin --email=admin@smoke.com
+----------+----------------------------------+
| Property | Value |
+----------+----------------------------------+
| email | admin@smoke.com |
| enabled | True |
| id | ece7cbced7b84c9c917aac88ee7bd8a1 |
| name | admin |
| username | admin |
+----------+----------------------------------+
[root@controller ~]# keystone user-list
+----------------------------------+-------+---------+-----------------+
| id | name | enabled | email |
+----------------------------------+-------+---------+-----------------+
| ece7cbced7b84c9c917aac88ee7bd8a1 | admin | True | admin@smoke.com |
+----------------------------------+-------+---------+-----------------+
[root@controller ~]# keystone role-create --name=admin
+----------+----------------------------------+
| Property | Value |
+----------+----------------------------------+
| id | 47debc9436274cd4b81375ab9948cf70 |
| name | admin |
+----------+----------------------------------+
[root@controller ~]# keystone role-list
+----------------------------------+----------+
| id | name |
+----------------------------------+----------+
| 9fe2ff9ee4384b1894a90878d3e92bab | _member_ |
| 47debc9436274cd4b81375ab9948cf70 | admin |
+----------------------------------+----------+
[root@controller ~]# keystone tenant-create --name=admin --description="Admin Tenant"
+-------------+----------------------------------+
| Property | Value |
+-------------+----------------------------------+
| description | Admin Tenant |
| enabled | True |
| id | bbdc7fe3de4448b19e877902e8274736 |
| name | admin |
+-------------+----------------------------------+
[root@controller ~]# keystone user-role-add --user admin --role admin --tenant admin
[root@controller ~]# keystone user-role-add --user admin --role _member_ --tenant admin
[root@controller ~]# keystone user-role-list --user admin --tenant admin
+----------------------------------+----------+----------------------------------+----------------------------------+
| id | name | user_id | tenant_id |
+----------------------------------+----------+----------------------------------+----------------------------------+
| 9fe2ff9ee4384b1894a90878d3e92bab | _member_ | ece7cbced7b84c9c917aac88ee7bd8a1 | bbdc7fe3de4448b19e877902e8274736 |
| 47debc9436274cd4b81375ab9948cf70 | admin | ece7cbced7b84c9c917aac88ee7bd8a1 | bbdc7fe3de4448b19e877902e8274736 |
+----------------------------------+----------+----------------------------------+----------------------------------+
[root@controller ~]# keystone user-create --name=demo --pass=demo --email=demo@smoke.com
+----------+----------------------------------+
| Property | Value |
+----------+----------------------------------+
| email | demo@smoke.com |
| enabled | True |
| id | 0dba9dc5d4ff414ebfd1b5844d7b92a3 |
| name | demo |
| username | demo |
+----------+----------------------------------+
[root@controller ~]# keystone tenant-create --name=demo --description="Demo Tenant"
+-------------+----------------------------------+
| Property | Value |
+-------------+----------------------------------+
| description | Demo Tenant |
| enabled | True |
| id | 9a748acebd0741f5bb6dc3875772cf0a |
| name | demo |
+-------------+----------------------------------+
[root@controller ~]# keystone user-role-add --user=demo --role=_member_ --tenant=demo
[root@controller ~]# keystone user-role-list --tenant=demo --user=demo
+----------------------------------+----------+----------------------------------+----------------------------------+
| id | name | user_id | tenant_id |
+----------------------------------+----------+----------------------------------+----------------------------------+
| 9fe2ff9ee4384b1894a90878d3e92bab | _member_ | 0dba9dc5d4ff414ebfd1b5844d7b92a3 | 9a748acebd0741f5bb6dc3875772cf0a |
+----------------------------------+----------+----------------------------------+----------------------------------+
[root@controller ~]# keystone tenant-create --name=service --description="Service Tenant"
+-------------+----------------------------------+
| Property | Value |
+-------------+----------------------------------+
| description | Service Tenant |
| enabled | True |
| id | f37916ade2bc44adae440ab13f31d9cf |
| name | service |
+-------------+----------------------------------+
[root@controller ~]# keystone service-create --name=keystone --type=identity --description="OpenStack Identity"
+-------------+----------------------------------+
| Property | Value |
+-------------+----------------------------------+
| description | OpenStack Identity |
| enabled | True |
| id | 4fea5e76c58c4a4c8d2e58ad8bf7a268 |
| name | keystone |
| type | identity |
+-------------+----------------------------------+
[root@controller ~]# keystone service-list
+----------------------------------+----------+----------+--------------------+
| id | name | type | description |
+----------------------------------+----------+----------+--------------------+
| 4fea5e76c58c4a4c8d2e58ad8bf7a268 | keystone | identity | OpenStack Identity |
+----------------------------------+----------+----------+--------------------+
[root@controller ~]# keystone endpoint-create --service-id=$(keystone service-list | awk '/ identity / {print $2}') --publicurl=http://controller:5000/v2.0 --internalurl=http://controller:5000/v2.0 --adminurl=http://controller:35357/v2.0
+-------------+----------------------------------+
| Property | Value |
+-------------+----------------------------------+
| adminurl | http://controller:35357/v2.0 |
| id | f1e088348590453e9cfc2bfdbf0d1c96 |
| internalurl | http://controller:5000/v2.0 |
| publicurl | http://controller:5000/v2.0 |
| region | regionOne |
| service_id | 4fea5e76c58c4a4c8d2e58ad8bf7a268 |
+-------------+----------------------------------+
[root@controller ~]# keystone endpoint-list
+----------------------------------+-----------+-----------------------------+-----------------------------+------------------------------+----------------------------------+
| id | region | publicurl | internalurl | adminurl | service_id |
+----------------------------------+-----------+-----------------------------+-----------------------------+------------------------------+----------------------------------+
| f1e088348590453e9cfc2bfdbf0d1c96 | regionOne | http://controller:5000/v2.0 | http://controller:5000/v2.0 | http://controller:35357/v2.0 | 4fea5e76c58c4a4c8d2e58ad8bf7a268 |
+----------------------------------+-----------+-----------------------------+-----------------------------+------------------------------+----------------------------------+
[root@controller ~]# unset OS_SERVICE_TOKEN OS_SERVICE_ENDPOINT
[root@controller ~]# keystone --os-username=admin --os-password=admin --os-auth-url=http://controller:35357/v2.0 token-get
[root@controller ~]# vim .admin-openrc.sh
export OS_USERNAME=admin
export OS_PASSWORD=admin
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://controller:35357/v2.0
[root@controller ~]# . .admin-openrc.sh
[root@controller ~]# keystone user-list
+----------------------------------+-------+---------+-----------------+
| id | name | enabled | email |
+----------------------------------+-------+---------+-----------------+
| ece7cbced7b84c9c917aac88ee7bd8a1 | admin | True | admin@smoke.com |
| 0dba9dc5d4ff414ebfd1b5844d7b92a3 | demo | True | demo@smoke.com |
+----------------------------------+-------+---------+-----------------+
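Expired tokens accumulate in the keystone database over time; the Icehouse install guide suggests a periodic flush, sketched here (the log path is an assumption):
[root@controller ~]# (crontab -l -u keystone 2>&1 | grep -q token_flush) || echo '@hourly /usr/bin/keystone-manage token_flush >/var/log/keystone/keystone-tokenflush.log 2>&1' >> /var/spool/cron/keystone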
Install Glance:
[root@controller ~]# yum -y install openstack-glance python-glanceclient
[root@controller ~]# mysql
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 16
Server version: 5.5.40-MariaDB-wsrep MariaDB Server, wsrep_25.11.r4026
Copyright (c) 2000, 2013, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its affiliates. Other names may be trademarks of their respective owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> CREATE DATABASE glance CHARACTER SET utf8;
Query OK, 1 row affected (0.00 sec)
mysql> GRANT ALL PRIVILEGES ON glance.* TO glance@'%' IDENTIFIED BY 'glance';
Query OK, 0 rows affected (0.00 sec)
mysql> GRANT ALL PRIVILEGES ON glance.* TO glance@'localhost' IDENTIFIED BY 'glance';
Query OK, 0 rows affected (0.00 sec)
mysql> GRANT ALL PRIVILEGES ON glance.* TO glance@'controller.smoke.com' IDENTIFIED BY 'glance';
Query OK, 0 rows affected (0.00 sec)
mysql> FLUSH PRIVILEGES;
Query OK, 0 rows affected (0.00 sec)
mysql> exit
Bye
[root@controller ~]# vim /etc/glance/glance-api.conf
[database]
connection=mysql://glance:glance@192.168.10.6/glance
[root@controller ~]# vim /etc/glance/glance-registry.conf
[database]
connection=mysql://glance:glance@192.168.10.6/glance
The database sync fails with "ImportError: No module named Crypto.Random" because the pycrypto module is missing; install it with pip install pycrypto (install pip with yum first if it is not available);
[root@controller ~]# su -s /bin/sh -c "glance-manage db_sync" glance
[root@controller ~]# keystone user-create --name=glance --pass=glance --email=glance@smoke.com
+----------+----------------------------------+
| Property | Value |
+----------+----------------------------------+
| email | glance@smoke.com |
| enabled | True |
| id | 28f86c14cadb4fcbba64aabdc2f642e2 |
| name | glance |
| username | glance |
+----------+----------------------------------+
[root@controller ~]# keystone user-role-add --user=glance --tenant=service --role=admin
[root@controller ~]# keystone user-role-list --user=glance --tenant=service
+----------------------------------+-------+----------------------------------+----------------------------------+
| id | name | user_id | tenant_id |
+----------------------------------+-------+----------------------------------+----------------------------------+
| 47debc9436274cd4b81375ab9948cf70 | admin | 28f86c14cadb4fcbba64aabdc2f642e2 | f37916ade2bc44adae440ab13f31d9cf |
+----------------------------------+-------+----------------------------------+----------------------------------+
[root@controller ~]# vim /etc/glance/glance-api.conf
[keystone_authtoken]
auth_host=controller
auth_uri=http://controller:5000
auth_port=35357
auth_protocol=http
admin_tenant_name=service
admin_user=glance
admin_password=glance
[paste_deploy]
flavor=keystone
[root@controller ~]# vim /etc/glance/glance-registry.conf
[keystone_authtoken]
auth_host=controller
auth_uri=http://controller:5000
auth_port=35357
auth_protocol=http
admin_tenant_name=service
admin_user=glance
admin_password=glance
[paste_deploy]
flavor=keystone
[root@controller ~]# keystone service-create --name=glance --type=image --description="OpenStack Image Service"
+-------------+----------------------------------+
| Property | Value |
+-------------+----------------------------------+
| description | OpenStack Image Service |
| enabled | True |
| id | dc7ece631f9143a784de300b4aab5ba0 |
| name | glance |
| type | image |
+-------------+----------------------------------+
[root@controller ~]# keystone endpoint-create \
--service-id=$(keystone service-list | awk '/ image / {print $2}') --publicurl=http://controller:9292 --internalurl=http://controller:9292 --adminurl=http://controller:9292
+-------------+----------------------------------+
| Property | Value |
+-------------+----------------------------------+
| adminurl | http://controller:9292 |
| id | 4763c25b0bbf4a46bdf713ab4d4d7d04 |
| internalurl | http://controller:9292 |
| publicurl | http://controller:9292 |
| region | regionOne |
| service_id | dc7ece631f9143a784de300b4aab5ba0 |
+-------------+----------------------------------+
[root@controller ~]# for svc in api registry; do service openstack-glance-$svc start; chkconfig openstack-glance-$svc on; done
[root@controller ~]# yum -y install qemu-img
[root@controller ~]# ll
total 23712
-rw-------. 1 root root 1285 Jan 10 05:44 anaconda-ks.cfg
-rw-r--r-- 1 root root 11010048 Apr 6 21:59 cirros-no_cloud-0.3.0-i386-disk.img
-rw-r--r-- 1 root root 11468800 Apr 6 21:59 cirros-no_cloud-0.3.0-x86_64-disk.img
-rw-r--r-- 1 root root 1760426 Apr 6 20:39 get-pip.py
-rw-r--r--. 1 root root 21867 Jan 10 05:44 install.log
-rw-r--r--. 1 root root 5820 Jan 10 05:42 install.log.syslog
[root@controller ~]# qemu-img info cirros-no_cloud-0.3.0-i386-disk.img
image: cirros-no_cloud-0.3.0-i386-disk.img
file format: qcow2
virtual size: 39M (41126400 bytes)
disk size: 11M
cluster_size: 65536
[root@controller ~]# qemu-img info cirros-no_cloud-0.3.0-x86_64-disk.img
image: cirros-no_cloud-0.3.0-x86_64-disk.img
file format: qcow2
virtual size: 39M (41126400 bytes)
disk size: 11M
cluster_size: 65536
[root@controller ~]# glance image-create --name=cirros-0.3.0-i386 --disk-format=qcow2 --container-format=bare --is-public=true < /root/cirros-no_cloud-0.3.0-i386-disk.img
+------------------+--------------------------------------+
| Property | Value |
+------------------+--------------------------------------+
| checksum | ccdb7b71efb7cbae0ea4a437f55a5eb9 |
| container_format | bare |
| created_at | 2019-04-13T12:19:04 |
| deleted | False |
| deleted_at | None |
| disk_format | qcow2 |
| id | 3ae56e48-9a1e-4efe-9535-3683359ab518 |
| is_public | True |
| min_disk | 0 |
| min_ram | 0 |
| name | cirros-0.3.0-i386 |
| owner | bbdc7fe3de4448b19e877902e8274736 |
| protected | False |
| size | 11010048 |
| status | active |
| updated_at | 2019-04-13T12:19:05 |
| virtual_size | None |
+------------------+--------------------------------------+
[root@controller ~]# glance image-create --name=cirros-0.3.0-x86_64 --disk-format=qcow2 --container-format=bare --is-public=true < /root/cirros-no_cloud-0.3.0-x86_64-disk.img
+------------------+--------------------------------------+
| Property | Value |
+------------------+--------------------------------------+
| checksum | 2b35be965df142f00026123a0fae4aa6 |
| container_format | bare |
| created_at | 2019-04-13T12:19:32 |
| deleted | False |
| deleted_at | None |
| disk_format | qcow2 |
| id | 3eebb09a-4076-4504-87cb-608caf464aae |
| is_public | True |
| min_disk | 0 |
| min_ram | 0 |
| name | cirros-0.3.0-x86_64 |
| owner | bbdc7fe3de4448b19e877902e8274736 |
| protected | False |
| size | 11468800 |
| status | active |
| updated_at | 2019-04-13T12:19:32 |
| virtual_size | None |
+------------------+--------------------------------------+
[root@controller ~]# glance image-list
+--------------------------------------+---------------------+-------------+------------------+----------+--------+
| ID | Name | Disk Format | Container Format | Size | Status |
+--------------------------------------+---------------------+-------------+------------------+----------+--------+
| 3ae56e48-9a1e-4efe-9535-3683359ab518 | cirros-0.3.0-i386 | qcow2 | bare | 11010048 | active |
| 3eebb09a-4076-4504-87cb-608caf464aae | cirros-0.3.0-x86_64 | qcow2 | bare | 11468800 | active |
+--------------------------------------+---------------------+-------------+------------------+----------+--------+
[root@controller ~]# ls /var/lib/glance/images/
3ae56e48-9a1e-4efe-9535-3683359ab518 3eebb09a-4076-4504-87cb-608caf464aae
[root@controller ~]# glance image-show cirros-0.3.0-i386
+------------------+--------------------------------------+
| Property | Value |
+------------------+--------------------------------------+
| checksum | ccdb7b71efb7cbae0ea4a437f55a5eb9 |
| container_format | bare |
| created_at | 2019-04-13T12:19:04 |
| deleted | False |
| disk_format | qcow2 |
| id | 3ae56e48-9a1e-4efe-9535-3683359ab518 |
| is_public | True |
| min_disk | 0 |
| min_ram | 0 |
| name | cirros-0.3.0-i386 |
| owner | bbdc7fe3de4448b19e877902e8274736 |
| protected | False |
| size | 11010048 |
| status | active |
| updated_at | 2019-04-13T12:19:05 |
+------------------+--------------------------------------+
[root@controller ~]# glance image-download --file=/tmp/cirros-0.3.0-i386.img --progress cirros-0.3.0-i386
[root@controller ~]# ls /tmp/
cirros-0.3.0-i386.img keystone-signing-IKuXj3 keystone-signing-MaBElz yum_save_tx-2019-04-12-21-58zwvqk8.yumtx
Install the qpid message queue service:
[root@controller ~]# yum -y install qpid-cpp-server
[root@controller ~]# vim /etc/qpidd.conf
auth=no
[root@controller ~]# service qpidd start
[root@controller ~]# chkconfig qpidd on
[root@controller ~]# ss -tnlp | grep qpid
LISTEN 0 10 :::5672 :::* users:(("qpidd",46435,13))
LISTEN 0 10 *:5672 *:* users:(("qpidd",46435,12))
Install Nova:
controller:
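The controller-side Nova transcript is missing from these notes. For completeness, a minimal sketch of the usual Icehouse steps, following the same naming and password conventions as the other services here (all values are assumptions; /etc/nova/nova.conf must also be pointed at the same database, qpid host, and keystone credentials used on compute1 below):
[root@controller ~]# yum -y install openstack-nova-api openstack-nova-cert openstack-nova-conductor openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler python-novaclient
[root@controller ~]# mysql -e "CREATE DATABASE nova; GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'nova';"
[root@controller ~]# keystone user-create --name=nova --pass=nova --email=nova@smoke.com
[root@controller ~]# keystone user-role-add --user=nova --tenant=service --role=admin
[root@controller ~]# keystone service-create --name=nova --type=compute --description="OpenStack Compute"
[root@controller ~]# keystone endpoint-create --service-id=$(keystone service-list | awk '/ compute / {print $2}') --publicurl=http://controller:8774/v2/%\(tenant_id\)s --internalurl=http://controller:8774/v2/%\(tenant_id\)s --adminurl=http://controller:8774/v2/%\(tenant_id\)s
[root@controller ~]# su -s /bin/sh -c "nova-manage db sync" nova
[root@controller ~]# for svc in api cert consoleauth scheduler conductor novncproxy; do service openstack-nova-$svc start; chkconfig openstack-nova-$svc on; done
The nova user listed later in keystone user-list (nova@smoke.com) is consistent with this sketch.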
compute1:
[root@compute1 ~]# grep -E -i --color=auto "(svm|vmx)" /proc/cpuinfo
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss syscall nx rdtscp lm constant_tsc arch_perfmon xtopology tsc_reliable nonstop_tsc unfair_spinlock pni pclmulqdq vmx ssse3 cx16 pcid sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm arat tpr_shadow vnmi ept vpid fsgsbase smep
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss syscall nx rdtscp lm constant_tsc arch_perfmon xtopology tsc_reliable nonstop_tsc unfair_spinlock pni pclmulqdq vmx ssse3 cx16 pcid sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm arat tpr_shadow vnmi ept vpid fsgsbase smep
[root@compute1 ~]# vim /etc/yum.repos.d/openstack-Icehouse.repo
[openstack-icehouse]
name=OpenStack Icehouse Repository
baseurl=https://repos.fedorapeople.org/repos/openstack/EOL/openstack-icehouse/epel-6
enabled=1
skip_if_unavailable=0
gpgcheck=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-RDO-Icehouse
priority=98
The openstack-icehouse repo then fails with: https://repos.fedorapeople.org/repos/openstack/EOL/openstack-icehouse/epel-6/repodata/repomd.xml: [Errno 14] problem making ssl connection
[root@compute1 ~]# mv /etc/yum.repos.d/openstack-Icehouse.repo /etc/yum.repos.d/openstack-Icehouse.repo.bak
[root@compute1 ~]# yum -y update    #workaround 1
[root@compute1 ~]# vim /etc/yum.repos.d/openstack-Icehouse.repo    #workaround 2
enabled=0
[root@compute1 ~]# mv /etc/pki/tls/certs/ca-bundle.crt /etc/pki/tls/certs/ca-bundle.crt.bak
[root@compute1 ~]# yum -y install ca-certificates
[root@compute1 ~]# vim /etc/yum.repos.d/openstack-Icehouse.repo
enabled=1
[root@compute1 ~]# yum -y update curl    #workaround 3
[root@compute1 ~]# yum -y install http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
The epel repo then fails with: Error: Cannot retrieve metalink for repository: epel. Please verify its path and try again
[root@compute1 ~]# vim /etc/yum.repos.d/epel.repo    #comment out mirrorlist and uncomment baseurl;
[root@compute1 ~]# vim /etc/yum.repos.d/epel-testing.repo
[root@compute1 ~]# yum -y install openstack-nova-compute
[root@compute1 ~]# vim /etc/nova/nova.conf
[DEFAULT]
qpid_hostname=192.168.10.6
rpc_backend=qpid
auth_strategy=keystone
glance_host=controller
my_ip=192.168.10.7
novncproxy_base_url=http://controller:6080/vnc_auto.html
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=192.168.10.7
vnc_enabled=true
vif_plugging_is_fatal=false
vif_plugging_timeout=10
[database]
connection=mysql://nova:nova@192.168.10.6/nova
[keystone_authtoken]
auth_uri=http://controller:5000
auth_host=controller
auth_protocol=http
auth_port=35357
admin_user=nova
admin_tenant_name=service
admin_password=nova
[libvirt]
virt_type=kvm
[root@compute1 ~]# service libvirtd start
[root@compute1 ~]# lsmod | grep kvm
kvm_intel 55496 0
kvm 337772 1 kvm_intel
[root@compute1 ~]# service messagebus start
[root@compute1 ~]# service openstack-nova-compute start
[root@compute1 ~]# chkconfig libvirtd on
[root@compute1 ~]# chkconfig messagebus on
[root@compute1 ~]# chkconfig openstack-nova-compute on
controller:
[root@controller ~]# nova hypervisor-list
+----+---------------------+
| ID | Hypervisor hostname |
+----+---------------------+
| 1 | compute1.smoke.com |
+----+---------------------+
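Beyond hypervisor-list, the health of every Nova service can be checked from the controller; a sketch (alive services show :-) in the State column):
[root@controller ~]# nova service-list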
Install Neutron:
controller:
[root@controller ~]# mysql
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 299
Server version: 5.5.40-MariaDB-wsrep MariaDB Server, wsrep_25.11.r4026
Copyright (c) 2000, 2013, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its affiliates. Other names may be trademarks of their respective owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> CREATE DATABASE neutron CHARACTER SET 'utf8';
Query OK, 1 row affected (0.01 sec)
mysql> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'neutron';
Query OK, 0 rows affected (0.00 sec)
mysql> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'neutron';
Query OK, 0 rows affected (0.00 sec)
mysql> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'controller.smoke.com' IDENTIFIED BY 'neutron';
Query OK, 0 rows affected (0.00 sec)
mysql> FLUSH PRIVILEGES;
Query OK, 0 rows affected (0.00 sec)
mysql> exit
Bye
[root@controller ~]# keystone user-create --name=neutron --pass=neutron --email=neutron@smoke.com
+----------+----------------------------------+
| Property | Value |
+----------+----------------------------------+
| email | neutron@smoke.com |
| enabled | True |
| id | 42e66f0114c442c4af9d75595baca9c0 |
| name | neutron |
| username | neutron |
+----------+----------------------------------+
[root@controller ~]# keystone user-role-add --user=neutron --tenant=service --role=admin
[root@controller ~]# keystone user-role-list --user=neutron --tenant=service
+----------------------------------+-------+----------------------------------+----------------------------------+
| id | name | user_id | tenant_id |
+----------------------------------+-------+----------------------------------+----------------------------------+
| 47debc9436274cd4b81375ab9948cf70 | admin | 42e66f0114c442c4af9d75595baca9c0 | f37916ade2bc44adae440ab13f31d9cf |
+----------------------------------+-------+----------------------------------+----------------------------------+
[root@controller ~]# keystone service-create --name neutron --type network --description "OpenStack Networking"
+-------------+----------------------------------+
| Property | Value |
+-------------+----------------------------------+
| description | OpenStack Networking |
| enabled | True |
| id | c6cd9124bf3f4b8b89728ca5aa1b92b7 |
| name | neutron |
| type | network |
+-------------+----------------------------------+
[root@controller ~]# keystone endpoint-create --service-id $(keystone service-list | awk '/ network / {print $2}') --publicurl http://controller:9696 --adminurl http://controller:9696 --internalurl http://controller:9696
+-------------+----------------------------------+
| Property | Value |
+-------------+----------------------------------+
| adminurl | http://controller:9696 |
| id | 55523c3b86ec4cc28e6bc6055ac79229 |
| internalurl | http://controller:9696 |
| publicurl | http://controller:9696 |
| region | regionOne |
| service_id | c6cd9124bf3f4b8b89728ca5aa1b92b7 |
+-------------+----------------------------------+
[root@controller ~]# yum -y install openstack-neutron openstack-neutron-ml2 python-neutronclient
[root@controller ~]# keystone tenant-list
+----------------------------------+---------+---------+
| id | name | enabled |
+----------------------------------+---------+---------+
| bbdc7fe3de4448b19e877902e8274736 | admin | True |
| 9a748acebd0741f5bb6dc3875772cf0a | demo | True |
| f37916ade2bc44adae440ab13f31d9cf | service | True |
+----------------------------------+---------+---------+
[root@controller ~]# vim /etc/neutron/neutron.conf
[DEFAULT]
verbose = True
auth_strategy = keystone
rpc_backend=neutron.openstack.common.rpc.impl_qpid
qpid_hostname = 192.168.10.6
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True
nova_url = http://controller:8774/v2
nova_admin_username = nova
nova_admin_tenant_id = f37916ade2bc44adae440ab13f31d9cf
nova_admin_password = nova
nova_admin_auth_url = http://controller:35357/v2.0
core_plugin = ml2
service_plugins = router
[keystone_authtoken]
identity_uri=http://controller:5000
auth_host=controller
auth_protocol=http
auth_port=35357
admin_user=neutron
admin_tenant_name=service
admin_password=neutron
[database]
connection = mysql://neutron:neutron@192.168.10.6:3306/neutron
[root@controller ~]# vim /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers = gre
tenant_network_types = gre
mechanism_drivers = openvswitch
[ml2_type_gre]
tunnel_id_ranges = 1:1000
[securitygroup]
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
[root@controller ~]# vim /etc/nova/nova.conf
[DEFAULT]
network_api_class=nova.network.neutronv2.api.API
neutron_url=http://controller:9696
neutron_auth_strategy=keystone
neutron_admin_tenant_name=service
neutron_admin_username=neutron
neutron_admin_password=neutron
neutron_admin_auth_url=http://controller:35357/v2.0
linuxnet_interface_driver=nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver=nova.virt.firewall.NoopFirewallDriver
security_group_api=neutron
[root@controller ~]# ln -sv /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
[root@controller ~]# service openstack-nova-api restart
[root@controller ~]# service openstack-nova-scheduler restart
[root@controller ~]# service openstack-nova-conductor restart
[root@controller ~]# service neutron-server start
[root@controller ~]# chkconfig neutron-server on
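A quick check that neutron-server is answering API requests; a sketch (it should print the loaded extensions, including the ML2 ones):
[root@controller ~]# neutron ext-list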
network:
[root@network ~]# vim /etc/yum.repos.d/openstack-Icehouse.repo
[openstack-icehouse]
name=OpenStack Icehouse Repository
baseurl=https://repos.fedorapeople.org/repos/openstack/EOL/openstack-icehouse/epel-6
enabled=1
skip_if_unavailable=0
gpgcheck=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-RDO-Icehouse
priority=98
The openstack-icehouse repo then fails with: https://repos.fedorapeople.org/repos/openstack/EOL/openstack-icehouse/epel-6/repodata/repomd.xml: [Errno 14] problem making ssl connection
[root@network ~]# mv /etc/yum.repos.d/openstack-Icehouse.repo /etc/yum.repos.d/openstack-Icehouse.repo.bak
[root@network ~]# yum -y update    #workaround 1
[root@network ~]# vim /etc/yum.repos.d/openstack-Icehouse.repo    #workaround 2
enabled=0
[root@network ~]# mv /etc/pki/tls/certs/ca-bundle.crt /etc/pki/tls/certs/ca-bundle.crt.bak
[root@network ~]# yum -y install ca-certificates
[root@network ~]# vim /etc/yum.repos.d/openstack-Icehouse.repo
enabled=1
[root@network ~]# yum -y update curl    #workaround 3
[root@network ~]# yum -y install http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
The epel repo then fails with: Error: Cannot retrieve metalink for repository: epel. Please verify its path and try again
[root@network ~]# vim /etc/yum.repos.d/epel.repo    #comment out mirrorlist and uncomment baseurl;
[root@network ~]# vim /etc/yum.repos.d/epel-testing.repo
[root@network ~]# vim /etc/sysctl.conf
net.ipv4.ip_forward = 0
net.ipv4.conf.all.rp_filter = 0
net.ipv4.conf.default.rp_filter = 0
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
[root@network ~]# sysctl -p
[root@network ~]# yum -y install openstack-neutron openstack-neutron-ml2 openstack-neutron-openvswitch
[root@network ~]# vim /etc/neutron/neutron.conf
[DEFAULT]
auth_strategy = keystone
rpc_backend=neutron.openstack.common.rpc.impl_qpid
qpid_hostname = 192.168.10.6
core_plugin = ml2
service_plugins = router
[keystone_authtoken]
identity_uri=http://controller:5000
auth_host=controller
auth_protocol=http
auth_port=35357
admin_user=neutron
admin_tenant_name=service
admin_password=neutron
[root@network ~]# vim /etc/neutron/l3_agent.ini
[DEFAULT]
verbose = True
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
use_namespaces = True
[root@network ~]# vim /etc/neutron/dhcp_agent.ini
[DEFAULT]
verbose = True
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
use_namespaces = True
dnsmasq_config_file = /etc/neutron/dnsmasq-neutron.conf
[root@network ~]# vim /etc/neutron/dnsmasq-neutron.conf
dhcp-option-force=26,1454
[root@network ~]# vim /etc/neutron/metadata_agent.ini
[DEFAULT]
verbose = True
auth_url = http://controller:5000/v2.0
auth_region = RegionOne
admin_tenant_name = service
admin_user = neutron
admin_password = neutron
nova_metadata_ip = controller
metadata_proxy_shared_secret = METADATA_SECRET
controller:
[root@controller ~]# vim /etc/nova/nova.conf
[DEFAULT]
service_neutron_metadata_proxy=true
neutron_metadata_proxy_shared_secret=METADATA_SECRET
[root@controller ~]# service openstack-nova-api restart
network:
[root@network ~]# vim /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=no
IPADDR=192.168.20.254
NETMASK=255.255.255.0
BOOTPROTO=none
DEFROUTE=yes
IPV4_FAILURE_FATAL=yes
IPV6INIT=no
NAME="System eth1"
[root@network ~]# ifdown eth1 && ifup eth1
[root@network ~]# vim /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers = gre
tenant_network_types = gre
mechanism_drivers = openvswitch
[ml2_type_gre]
tunnel_id_ranges = 1:1000
[ovs]
local_ip = 192.168.20.254
tunnel_type = gre
enable_tunneling = True
[securitygroup]
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
[root@network ~]# ln -sv /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
[root@network ~]# service openvswitch start
[root@network ~]# chkconfig openvswitch on
[root@network ~]# ovs-vsctl add-br br-in
[root@network ~]# ovs-vsctl add-br br-ex
[root@network ~]# vim /etc/sysconfig/network-scripts/ifcfg-eth2
DEVICE=eth2
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=none
DEFROUTE=yes
IPV4_FAILURE_FATAL=yes
IPV6INIT=no
NAME="System eth2"
[root@network ~]# service network restart
[root@network ~]# ovs-vsctl add-port br-ex eth2
[root@network ~]# ovs-vsctl show
0d8784ce-e5b5-4416-8212-738bc6094d82
    Bridge br-in
        Port br-in
            Interface br-in
                type: internal
    Bridge br-ex
        Port br-ex
            Interface br-ex
                type: internal
        Port "eth2"
            Interface "eth2"
    ovs_version: "2.1.3"
[root@network ~]# ovs-vsctl br-set-external-id br-ex bridge-id br-ex
[root@network ~]# ethtool -K eth2 gro off    #disable GRO on the external NIC; leaving it on severely degrades performance;
[root@network ~]# ifconfig br-ex 192.168.243.129 netmask 255.255.255.0
[root@network ~]# route add -net 0.0.0.0 netmask 0.0.0.0 gw 192.168.243.2
[root@network ~]# cp /etc/init.d/neutron-openvswitch-agent /etc/init.d/neutron-openvswitch-agent.orig    #the next step works around a packaging bug
[root@network ~]# sed -i 's,plugins/openvswitch/ovs_neutron_plugin.ini,plugin.ini,g' /etc/init.d/neutron-openvswitch-agent
[root@network ~]# for svc in openvswitch l3 dhcp metadata; do service neutron-${svc}-agent start; chkconfig neutron-${svc}-agent on; done
compute1:
[root@compute1 ~]# vim /etc/sysctl.conf
net.ipv4.conf.all.rp_filter = 0
net.ipv4.conf.default.rp_filter = 0
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
[root@compute1 ~]# sysctl -p
[root@compute1 ~]# yum -y install openstack-neutron-ml2 openstack-neutron-openvswitch    #every additional compute node needs this same configuration
[root@compute1 ~]# vim /etc/neutron/neutron.conf
[DEFAULT]
auth_strategy = keystone
rpc_backend=neutron.openstack.common.rpc.impl_qpid
qpid_hostname = 192.168.10.6
core_plugin = ml2
service_plugins = router
[keystone_authtoken]
identity_uri=http://controller:5000
auth_host=controller
auth_protocol=http
auth_port=35357
admin_user=neutron
admin_tenant_name=service
admin_password=neutron
[root@compute1 ~]# vim /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers = gre
tenant_network_types = gre
mechanism_drivers = openvswitch
[ml2_type_gre]
tunnel_id_ranges = 1:1000
[ovs]
local_ip = 192.168.20.1
tunnel_type = gre
enable_tunneling = True
[securitygroup]
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
[root@compute1 ~]# vim /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=none
IPADDR=192.168.20.1
NETMASK=255.255.255.0
DEFROUTE=yes
IPV4_FAILURE_FATAL=yes
IPV6INIT=no
NAME="System eth1"
[root@compute1 ~]# ifdown eth1 && ifup eth1
[root@compute1 ~]# service openvswitch start
[root@compute1 ~]# chkconfig openvswitch on
[root@compute1 ~]# ovs-vsctl add-br br-in
[root@compute1 ~]# vim /etc/nova/nova.conf
[DEFAULT]
network_api_class=nova.network.neutronv2.api.API
neutron_url=http://controller:9696
neutron_auth_strategy=keystone
neutron_admin_tenant_name=service
neutron_admin_username=neutron
neutron_admin_password=neutron
neutron_admin_auth_url=http://controller:5000/v2.0
linuxnet_interface_driver=nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver=nova.virt.firewall.NoopFirewallDriver
security_group_api=neutron
[root@compute1 ~]# ln -sv /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
[root@compute1 ~]# cp /etc/init.d/neutron-openvswitch-agent /etc/init.d/neutron-openvswitch-agent.orig
[root@compute1 ~]# sed -i 's,plugins/openvswitch/ovs_neutron_plugin.ini,plugin.ini,g' /etc/init.d/neutron-openvswitch-agent
[root@compute1 ~]# service openstack-nova-compute restart
[root@compute1 ~]# service neutron-openvswitch-agent start
[root@compute1 ~]# chkconfig neutron-openvswitch-agent on
Problem: after starting neutron-openvswitch-agent, its status shows "dead but pid file exists"; this happens because /etc/neutron/plugins/ml2/ml2_conf.ini is group-owned by root and must be changed to neutron;
[root@compute1 ~]# chown root:neutron /etc/neutron/plugins/ml2/ml2_conf.ini
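Once the agents on the network and compute nodes are up, they should all register with neutron-server; a sketch of the check, run from the controller (alive agents show :-) in the alive column):
[root@controller ~]# neutron agent-list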
controller:
[root@controller ~]# . .admin-openrc.sh
[root@controller ~]# neutron net-list
[root@controller ~]# neutron subnet-list
[root@controller ~]# neutron net-create ext-net --shared --router:external=True    #create the external network
Created a new network:
+---------------------------+--------------------------------------+
| Field | Value |
+---------------------------+--------------------------------------+
| admin_state_up | True |
| id | 38503f84-9675-4813-b9a4-7548d9ebf0b6 |
| name | ext-net |
| provider:network_type | gre |
| provider:physical_network | |
| provider:segmentation_id | 1 |
| router:external | True |
| shared | True |
| status | ACTIVE |
| subnets | |
| tenant_id | bbdc7fe3de4448b19e877902e8274736 |
+---------------------------+--------------------------------------+
[root@controller ~]# neutron subnet-create ext-net --name ext-subnet --allocation-pool start=192.168.243.151,end=192.168.243.170 --disable-dhcp --gateway 192.168.243.2 192.168.243.0/24
Created a new subnet:
+------------------+--------------------------------------------------------+
| Field | Value |
+------------------+--------------------------------------------------------+
| allocation_pools | {"start": "192.168.243.151", "end": "192.168.243.170"} |
| cidr | 192.168.243.0/24 |
| dns_nameservers | |
| enable_dhcp | False |
| gateway_ip | 192.168.243.2 |
| host_routes | |
| id | ed204ed4-5752-4b75-8459-c0a913e92bc0 |
| ip_version | 4 |
| name | ext-subnet |
| network_id | 38503f84-9675-4813-b9a4-7548d9ebf0b6 |
| tenant_id | bbdc7fe3de4448b19e877902e8274736 |
+------------------+--------------------------------------------------------+
[root@controller ~]# keystone tenant-list
+----------------------------------+---------+---------+
| id | name | enabled |
+----------------------------------+---------+---------+
| bbdc7fe3de4448b19e877902e8274736 | admin | True |
| 9a748acebd0741f5bb6dc3875772cf0a | demo | True |
| f37916ade2bc44adae440ab13f31d9cf | service | True |
+----------------------------------+---------+---------+
[root@controller ~]# vim .demo-os.sh
export OS_USERNAME=demo
export OS_PASSWORD=demo
export OS_TENANT_NAME=demo
export OS_AUTH_URL=http://controller:35357/v2.0
[root@controller ~]# keystone user-list
+----------------------------------+---------+---------+-------------------+
| id | name | enabled | email |
+----------------------------------+---------+---------+-------------------+
| ece7cbced7b84c9c917aac88ee7bd8a1 | admin | True | admin@smoke.com |
| 0dba9dc5d4ff414ebfd1b5844d7b92a3 | demo | True | demo@smoke.com |
| 28f86c14cadb4fcbba64aabdc2f642e2 | glance | True | glance@smoke.com |
| 42e66f0114c442c4af9d75595baca9c0 | neutron | True | neutron@smoke.com |
| 2426c4cb03f643cc8a896aa2420c1644 | nova | True | nova@smoke.com |
+----------------------------------+---------+---------+-------------------+
[root@controller ~]# . .demo-os.sh
[root@controller ~]# neutron net-list
+--------------------------------------+---------+-------------------------------------------------------+
| id | name | subnets |
+--------------------------------------+---------+-------------------------------------------------------+
| 38503f84-9675-4813-b9a4-7548d9ebf0b6 | ext-net | ed204ed4-5752-4b75-8459-c0a913e92bc0 192.168.243.0/24 |
+--------------------------------------+---------+-------------------------------------------------------+
[root@controller ~]# neutron subnet-list
+--------------------------------------+------------+------------------+--------------------------------------------------------+
| id | name | cidr | allocation_pools |
+--------------------------------------+------------+------------------+--------------------------------------------------------+
| ed204ed4-5752-4b75-8459-c0a913e92bc0 | ext-subnet | 192.168.243.0/24 | {"start": "192.168.243.151", "end": "192.168.243.170"} |
+--------------------------------------+------------+------------------+--------------------------------------------------------+
[root@controller ~]# neutron net-create demo-net
Created a new network:
+----------------+--------------------------------------+
| Field | Value |
+----------------+--------------------------------------+
| admin_state_up | True |
| id | 6f92d6ca-fa6e-4d47-b34b-d5c4c72552f2 |
| name | demo-net |
| shared | False |
| status | ACTIVE |
| subnets | |
| tenant_id | 9a748acebd0741f5bb6dc3875772cf0a |
+----------------+--------------------------------------+
[root@controller ~]# neutron subnet-create demo-net --name demo-subnet --gateway 192.168.30.254 192.168.30.0/24
Created a new subnet:
+------------------+----------------------------------------------------+
| Field | Value |
+------------------+----------------------------------------------------+
| allocation_pools | {"start": "192.168.30.1", "end": "192.168.30.253"} |
| cidr | 192.168.30.0/24 |
| dns_nameservers | |
| enable_dhcp | True |
| gateway_ip | 192.168.30.254 |
| host_routes | |
| id | 120c1dcd-dd7f-4222-a42a-b247bf4b7bad |
| ip_version | 4 |
| name | demo-subnet |
| network_id | 6f92d6ca-fa6e-4d47-b34b-d5c4c72552f2 |
| tenant_id | 9a748acebd0741f5bb6dc3875772cf0a |
+------------------+----------------------------------------------------+
network:
[root@network ~]# yum -y update iproute    #iproute must be upgraded if the ip netns subcommand is missing;
[root@network ~]# ip netns list
controller:
[root@controller ~]# neutron router-create demo-router
Created a new router:
+-----------------------+--------------------------------------+
| Field | Value |
+-----------------------+--------------------------------------+
| admin_state_up | True |
| external_gateway_info | |
| id | 3d58f63c-55e9-42a7-ba9d-82cd5b8c0d94 |
| name | demo-router |
| status | ACTIVE |
| tenant_id | 9a748acebd0741f5bb6dc3875772cf0a |
+-----------------------+--------------------------------------+
[root@controller ~]# neutron router-gateway-set demo-router ext-net    #attach the external network to the router
[root@controller ~]# . .admin-openrc.sh
[root@controller ~]# neutron router-port-list demo-router
+--------------------------------------+------+-------------------+----------------------------------------------------------------------------------------+
| id | name | mac_address | fixed_ips |
+--------------------------------------+------+-------------------+----------------------------------------------------------------------------------------+
| add5d94c-214f-4caa-967b-5311969c253a | | fa:16:3e:34:37:7a | {"subnet_id": "ed204ed4-5752-4b75-8459-c0a913e92bc0", "ip_address": "192.168.243.151"} |
+--------------------------------------+------+-------------------+----------------------------------------------------------------------------------------+
[root@controller ~]# . .demo-os.sh
[root@controller ~]# neutron router-interface-add demo-router demo-subnet
[root@controller ~]# . .admin-openrc.sh
[root@controller ~]# neutron router-port-list demo-router
[root@controller ~]# neutron router-port-list demo-router
+--------------------------------------+------+-------------------+----------------------------------------------------------------------------------------+
| id | name | mac_address | fixed_ips |
+--------------------------------------+------+-------------------+----------------------------------------------------------------------------------------+
| 77e009c7-8806-4440-90db-f328465bc35c | | fa:16:3e:54:4e:3b | {"subnet_id": "120c1dcd-dd7f-4222-a42a-b247bf4b7bad", "ip_address": "192.168.30.254"} |
| add5d94c-214f-4caa-967b-5311969c253a | | fa:16:3e:34:37:7a | {"subnet_id": "ed204ed4-5752-4b75-8459-c0a913e92bc0", "ip_address": "192.168.243.151"} |
+--------------------------------------+------+-------------------+----------------------------------------------------------------------------------------+
network:
[root@network ~]# ip netns list
qrouter-3d58f63c-55e9-42a7-ba9d-82cd5b8c0d94
[root@network ~]# ip netns exec qrouter-3d58f63c-55e9-42a7-ba9d-82cd5b8c0d94 ifconfig
lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)
qg-add5d94c-21 Link encap:Ethernet  HWaddr FA:16:3E:34:37:7A
          inet addr:192.168.243.151  Bcast:192.168.243.255  Mask:255.255.255.0
          inet6 addr: fe80::f816:3eff:fe34:377a/64 Scope:Link
          UP BROADCAST RUNNING  MTU:1500  Metric:1
          RX packets:56 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:6243 (6.0 KiB)  TX bytes:636 (636.0 b)
qr-77e009c7-88 Link encap:Ethernet  HWaddr FA:16:3E:54:4E:3B
          inet addr:192.168.30.254  Bcast:192.168.30.255  Mask:255.255.255.0
          inet6 addr: fe80::f816:3eff:fe54:4e3b/64 Scope:Link
          UP BROADCAST RUNNING  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:7 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 b)  TX bytes:558 (558.0 b)
[root@network ~]# ip netns exec qrouter-3d58f63c-55e9-42a7-ba9d-82cd5b8c0d94 iptables -L -n
Chain INPUT (policy ACCEPT)
target prot opt source destination
neutron-l3-agent-INPUT all -- 0.0.0.0/0 0.0.0.0/0
Chain FORWARD (policy ACCEPT)
target prot opt source destination
neutron-filter-top all -- 0.0.0.0/0 0.0.0.0/0
neutron-l3-agent-FORWARD all -- 0.0.0.0/0 0.0.0.0/0
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
neutron-filter-top all -- 0.0.0.0/0 0.0.0.0/0
neutron-l3-agent-OUTPUT all -- 0.0.0.0/0 0.0.0.0/0
Chain neutron-filter-top (2 references)
target prot opt source destination
neutron-l3-agent-local all -- 0.0.0.0/0 0.0.0.0/0
Chain neutron-l3-agent-FORWARD (1 references)
target prot opt source destination
Chain neutron-l3-agent-INPUT (1 references)
target prot opt source destination
ACCEPT tcp -- 0.0.0.0/0 127.0.0.1 tcp dpt:9697
Chain neutron-l3-agent-OUTPUT (1 references)
target prot opt source destination
Chain neutron-l3-agent-local (1 references)
target prot opt source destination
Install the dashboard:

controller:

[root@controller ~]# yum -y install memcached python-memcached mod_wsgi openstack-dashboard
Problem: the installation fails with "Requires: Django14". Install the package locally:
[root@controller ~]# yum -y localinstall Django14-1.4.20-1.el6.noarch.rpm
[root@controller ~]# service memcached start
[root@controller ~]# chkconfig memcached on
[root@controller ~]# vim /etc/openstack-dashboard/local_settings
OPENSTACK_HOST = "controller"
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v2.0" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "_member_"
ALLOWED_HOSTS = ['*']
CACHES = {
    'default': {
        'BACKEND' : 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION' : '192.168.10.6:11211',
    }
}
#CACHES = {
#    'default': {
#        'BACKEND' : 'django.core.cache.backends.locmem.LocMemCache'
#    }
#}
TIME_ZONE = "Asia/Shanghai"
[root@controller ~]# service httpd start
[root@controller ~]# chkconfig httpd on
Browse to http://192.168.10.6 and log in with username admin, password admin;
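A quick smoke test of the dashboard from the shell, assuming the stock EL6 packaging serves Horizon under /dashboard on this host:

[root@controller ~]# curl -sI http://192.168.10.6/dashboard | head -1
# expect HTTP/1.1 200 OK (or a 302 redirect to the login page);
# anything else points at httpd, mod_wsgi, or local_settings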
Problem: clicking the Instances menu in the dashboard raises the error "Unable to connect to Neutron." In Icehouse this error also appears when creating an instance, yet the instance is still created successfully. It is a known bug and can be worked around by patching the dashboard source.
[root@controller ~]# vim /usr/share/openstack-dashboard/openstack_dashboard/api/neutron.py
    def is_simple_associate_supported(self):
        # NOTE: There are two reasons that simple association support
        # needs more considerations. (1) Neutron does not support the
        # default floating IP pool at the moment. It can be avoided
        # in case where only one floating IP pool exists.
        # (2) Neutron floating IP is associated with each VIF and
        # we need to check whether such VIF is only one for an instance
        # to enable simple association support.
        return False

    def is_supported(self):
        network_config = getattr(settings, 'OPENSTACK_NEUTRON_NETWORK', {})
        return network_config.get('enable_router', True)
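The patched module is loaded by mod_wsgi, so the edit only takes effect once Apache reloads the application; the simplest way is to restart httpd:

[root@controller ~]# service httpd restart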
Launch an instance:

controller:
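The exact boot command was not captured here. For reference, a typical Icehouse invocation that would produce the demo-0001 instance seen later might look like the sketch below; the flavor and image names are assumptions, and the net-id is inferred from the qdhcp- namespace that appears on the network node further down (presumably demo-net's UUID):

[root@controller ~]# . .demo-os.sh
# m1.tiny and cirros-0.3.2-x86_64 are assumed names; adjust to the real ones
[root@controller ~]# nova boot --flavor m1.tiny \
      --image cirros-0.3.2-x86_64 \
      --nic net-id=6f92d6ca-fa6e-4d47-b34b-d5c4c72552f2 \
      --security-group default demo-0001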
network:
Problem: the DHCP agent logs
    WARNING neutron.agent.linux.dhcp [req-8e9c51e2-b0c7-47c2-b0eb-20b008139c9d None] FAILED VERSION REQUIREMENT FOR DNSMASQ. DHCP AGENT MAY NOT RUN CORRECTLY! Please ensure that its version is 2.59 or above!
and the instance cannot obtain an IP address. Upgrade dnsmasq on the network node:
[root@network ~]# rpm -Uvh dnsmasq-2.65-1.el6.rfx.x86_64.rpm

compute1:
[root@compute1 ~]# virsh list
 Id    Name                           State
----------------------------------------------------
 5     instance-0000000d              running
[root@compute1 ~]# yum -y install tigervnc
[root@compute1 ~]# vncviewer :5900
The instance now obtains the IP address 192.168.30.6;
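If the agent keeps driving the old dnsmasq processes after the rpm upgrade on the network node, restarting it forces a respawn with the new binary — a sketch, assuming the stock openstack-neutron init script names:

[root@network ~]# dnsmasq --version | head -1
# must report 2.59 or later
[root@network ~]# service neutron-dhcp-agent restart
# the agent respawns its per-network dnsmasq processes on restart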
network:
[root@network ~]# ip netns list
qrouter-3d58f63c-55e9-42a7-ba9d-82cd5b8c0d94
qdhcp-6f92d6ca-fa6e-4d47-b34b-d5c4c72552f2
[root@network ~]# ip netns exec qrouter-3d58f63c-55e9-42a7-ba9d-82cd5b8c0d94 ifconfig
lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)

qg-add5d94c-21 Link encap:Ethernet  HWaddr FA:16:3E:34:37:7A
          inet addr:192.168.243.151  Bcast:192.168.243.255  Mask:255.255.255.0
          inet6 addr: fe80::f816:3eff:fe34:377a/64 Scope:Link
          UP BROADCAST RUNNING  MTU:1500  Metric:1
          RX packets:21894 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:2443652 (2.3 MiB)  TX bytes:636 (636.0 b)

qr-77e009c7-88 Link encap:Ethernet  HWaddr FA:16:3E:54:4E:3B
          inet addr:192.168.30.254  Bcast:192.168.30.255  Mask:255.255.255.0
          inet6 addr: fe80::f816:3eff:fe54:4e3b/64 Scope:Link
          UP BROADCAST RUNNING  MTU:1500  Metric:1
          RX packets:30 errors:0 dropped:0 overruns:0 frame:0
          TX packets:10 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:3204 (3.1 KiB)  TX bytes:740 (740.0 b)

[root@network ~]# ip netns exec qrouter-3d58f63c-55e9-42a7-ba9d-82cd5b8c0d94 ping 192.168.30.2    # unreachable because the security group does not yet allow ICMP
PING 192.168.30.2 (192.168.30.2) 56(84) bytes of data.
From 192.168.30.254 icmp_seq=2 Destination Host Unreachable
From 192.168.30.254 icmp_seq=3 Destination Host Unreachable
From 192.168.30.254 icmp_seq=4 Destination Host Unreachable
^C
--- 192.168.30.2 ping statistics ---
4 packets transmitted, 0 received, +3 errors, 100% packet loss, time 3589ms
controller:

[root@controller ~]# nova get-vnc-console 7745576f-9cd0-48ec-948d-1082485996ad novnc
+-------+---------------------------------------------------------------------------------+
| Type  | Url                                                                             |
+-------+---------------------------------------------------------------------------------+
| novnc | http://controller:6080/vnc_auto.html?token=d54b0928-d846-4945-a69c-ffa4687ff0ca |
+-------+---------------------------------------------------------------------------------+
[root@controller ~]# nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range  | Source Group |
+-------------+-----------+---------+-----------+--------------+
| icmp        | -1        | -1      | 0.0.0.0/0 |              |
+-------------+-----------+---------+-----------+--------------+
[root@controller ~]# nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range  | Source Group |
+-------------+-----------+---------+-----------+--------------+
| tcp         | 22        | 22      | 0.0.0.0/0 |              |
+-------------+-----------+---------+-----------+--------------+
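The effective rules can be listed to confirm both entries landed in the default group:

[root@controller ~]# nova secgroup-list-rules default
# should show the icmp -1/-1 and tcp 22/22 rules added above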
Access the external network from the instance;

network:
[root@network ~]# ovs-vsctl show
0d8784ce-e5b5-4416-8212-738bc6094d82
    Bridge br-int
        fail_mode: secure
        Port "tap7835305c-ed"
            tag: 1
            Interface "tap7835305c-ed"
                type: internal
        Port "qr-77e009c7-88"
            tag: 1
            Interface "qr-77e009c7-88"
                type: internal
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port br-int
            Interface br-int
                type: internal
    Bridge br-in
        Port br-in
            Interface br-in
                type: internal
    Bridge br-tun
        Port "gre-c0a81401"
            Interface "gre-c0a81401"
                type: gre
                options: {in_key=flow, local_ip="192.168.20.254", out_key=flow, remote_ip="192.168.20.1"}
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port br-tun
            Interface br-tun
                type: internal
    Bridge br-ex
        Port "qg-add5d94c-21"
            Interface "qg-add5d94c-21"
                type: internal
        Port br-ex
            Interface br-ex
                type: internal
        Port "eth2"
            Interface "eth2"
    ovs_version: "2.1.3"
[root@network ~]# ip netns list
qrouter-3d58f63c-55e9-42a7-ba9d-82cd5b8c0d94
qdhcp-6f92d6ca-fa6e-4d47-b34b-d5c4c72552f2
[root@network ~]# ip netns exec qrouter-3d58f63c-55e9-42a7-ba9d-82cd5b8c0d94 iptables -L -n -t nat
Chain PREROUTING (policy ACCEPT)
target     prot opt source               destination
neutron-l3-agent-PREROUTING  all  --  0.0.0.0/0            0.0.0.0/0

Chain POSTROUTING (policy ACCEPT)
target     prot opt source               destination
neutron-l3-agent-POSTROUTING  all  --  0.0.0.0/0            0.0.0.0/0
neutron-postrouting-bottom  all  --  0.0.0.0/0            0.0.0.0/0

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination
neutron-l3-agent-OUTPUT  all  --  0.0.0.0/0            0.0.0.0/0

Chain neutron-l3-agent-OUTPUT (1 references)
target     prot opt source               destination

Chain neutron-l3-agent-POSTROUTING (1 references)
target     prot opt source               destination
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0           ! ctstate DNAT

Chain neutron-l3-agent-PREROUTING (1 references)
target     prot opt source               destination
REDIRECT   tcp  --  0.0.0.0/0            169.254.169.254     tcp dpt:80 redir ports 9697

Chain neutron-l3-agent-float-snat (1 references)
target     prot opt source               destination

Chain neutron-l3-agent-snat (1 references)
target     prot opt source               destination
neutron-l3-agent-float-snat  all  --  0.0.0.0/0            0.0.0.0/0
SNAT       all  --  192.168.30.0/24      0.0.0.0/0           to:192.168.243.151

Chain neutron-postrouting-bottom (1 references)
target     prot opt source               destination
neutron-l3-agent-snat  all  --  0.0.0.0/0            0.0.0.0/0
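The gre-c0a81401 port above shows the tunnel endpoints (local_ip 192.168.20.254, remote_ip 192.168.20.1), so plain data-network reachability between the network and compute nodes can be sanity-checked directly, with no OpenStack involved; 192.168.20.1 is presumably compute1's data-network address:

[root@network ~]# ping -c 2 192.168.20.1
# if the tunnel peer is unreachable, no tenant traffic can cross br-tun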
Create a floating IP so that external hosts can reach the instance directly:

controller:
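The floating-IP commands themselves were not captured here. A plausible Icehouse sequence matching the DNAT/SNAT entries that appear in the NAT table below (192.168.243.153 mapped to 192.168.30.6) would be the following sketch; note that Neutron chooses the allocated address, so 192.168.243.153 is only known after the first command returns:

[root@controller ~]# . .demo-os.sh
[root@controller ~]# neutron floatingip-create ext-net
# note the floating_ip_address in the output, here 192.168.243.153
[root@controller ~]# nova floating-ip-associate demo-0001 192.168.243.153
# older clients use: nova add-floating-ip demo-0001 192.168.243.153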
network:
[root@network ~]# ip netns exec qrouter-3d58f63c-55e9-42a7-ba9d-82cd5b8c0d94 iptables -L -n -t nat
Chain PREROUTING (policy ACCEPT)
target     prot opt source               destination
neutron-l3-agent-PREROUTING  all  --  0.0.0.0/0            0.0.0.0/0

Chain POSTROUTING (policy ACCEPT)
target     prot opt source               destination
neutron-l3-agent-POSTROUTING  all  --  0.0.0.0/0            0.0.0.0/0
neutron-postrouting-bottom  all  --  0.0.0.0/0            0.0.0.0/0

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination
neutron-l3-agent-OUTPUT  all  --  0.0.0.0/0            0.0.0.0/0

Chain neutron-l3-agent-OUTPUT (1 references)
target     prot opt source               destination
DNAT       all  --  0.0.0.0/0            192.168.243.153     to:192.168.30.6

Chain neutron-l3-agent-POSTROUTING (1 references)
target     prot opt source               destination
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0           ! ctstate DNAT

Chain neutron-l3-agent-PREROUTING (1 references)
target     prot opt source               destination
REDIRECT   tcp  --  0.0.0.0/0            169.254.169.254     tcp dpt:80 redir ports 9697
DNAT       all  --  0.0.0.0/0            192.168.243.153     to:192.168.30.6

Chain neutron-l3-agent-float-snat (1 references)
target     prot opt source               destination
SNAT       all  --  192.168.30.6         0.0.0.0/0           to:192.168.243.153

Chain neutron-l3-agent-snat (1 references)
target     prot opt source               destination
neutron-l3-agent-float-snat  all  --  0.0.0.0/0            0.0.0.0/0
SNAT       all  --  192.168.30.0/24      0.0.0.0/0           to:192.168.243.151

Chain neutron-postrouting-bottom (1 references)
target     prot opt source               destination
neutron-l3-agent-snat  all  --  0.0.0.0/0            0.0.0.0/0
Ping the instance's floating IP from an external Windows host;

[Smoke.Smoke-PC] ? ping 192.168.243.153

Pinging 192.168.243.153 with 32 bytes of data:
Reply from 192.168.243.153: bytes=32 time=1ms TTL=63
Reply from 192.168.243.153: bytes=32 time=1ms TTL=63
Reply from 192.168.243.153: bytes=32 time=1ms TTL=63
Reply from 192.168.243.153: bytes=32 time=1ms TTL=63

Ping statistics for 192.168.243.153:
    Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
    Minimum = 1ms, Maximum = 1ms, Average = 1ms

compute1:
[root@compute1 ~]# tcpdump -i tap6cbd64f0-94 -nne icmp
tcpdump: WARNING: tap6cbd64f0-94: no IPv4 address assigned
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on tap6cbd64f0-94, link-type EN10MB (Ethernet), capture size 65535 bytes
21:25:33.770506 fa:16:3e:f9:17:63 > fa:16:3e:54:4e:3b, ethertype IPv4 (0x0800), length 98: 192.168.30.6 > 192.168.243.2: ICMP echo request, id 47104, seq 684, length 64
21:25:33.770891 fa:16:3e:54:4e:3b > fa:16:3e:f9:17:63, ethertype IPv4 (0x0800), length 98: 192.168.243.2 > 192.168.30.6: ICMP echo reply, id 47104, seq 684, length 64
21:25:34.305662 fa:16:3e:54:4e:3b > fa:16:3e:f9:17:63, ethertype IPv4 (0x0800), length 74: 192.168.243.1 > 192.168.30.6: ICMP echo request, id 1, seq 309, length 40
21:25:34.306095 fa:16:3e:f9:17:63 > fa:16:3e:54:4e:3b, ethertype IPv4 (0x0800), length 74: 192.168.30.6 > 192.168.243.1: ICMP echo reply, id 1, seq 309, length 40
^C
4 packets captured
4 packets received by filter
0 packets dropped by kernel

stor1:
[root@stor1 ~]# vim /etc/yum.repos.d/openstack-Icehouse.repo
[openstack-icehouse]
name=OpenStack Icehouse Repository
baseurl=https://repos.fedorapeople.org/repos/openstack/EOL/openstack-icehouse/epel-6
enabled=1
skip_if_unavailable=0
gpgcheck=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-RDO-Icehouse
priority=98

Problem: the openstack-icehouse repo fails with:
    https://repos.fedorapeople.org/repos/openstack/EOL/openstack-icehouse/epel-6/repodata/repomd.xml: [Errno 14] problem making ssl connection
Workaround 1: temporarily move the repo aside and update the base system first:
[root@stor1 ~]# mv /etc/yum.repos.d/openstack-Icehouse.repo /etc/yum.repos.d/openstack-Icehouse.repo.bak
[root@stor1 ~]# yum -y update
Workaround 2: disable the repo, refresh the CA bundle, then re-enable it:
[root@stor1 ~]# vim /etc/yum.repos.d/openstack-Icehouse.repo    # set enabled=0
[root@stor1 ~]# mv /etc/pki/tls/certs/ca-bundle.crt /etc/pki/tls/certs/ca-bundle.crt.bak
[root@stor1 ~]# yum -y install ca-certificates
[root@stor1 ~]# vim /etc/yum.repos.d/openstack-Icehouse.repo    # set enabled=1
[root@stor1 ~]# yum -y update curl
Workaround 3: install the EPEL release package:
[root@stor1 ~]# yum -y install http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
Problem: the epel repo then fails with: Error: Cannot retrieve metalink for repository: epel. Please verify its path and try again
[root@stor1 ~]# vim /etc/yum.repos.d/epel.repo            # comment out mirrorlist= and uncomment baseurl=
[root@stor1 ~]# vim /etc/yum.repos.d/epel-testing.repo    # same change
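Whichever workaround is used, the resulting repo set can be verified before proceeding:

[root@stor1 ~]# yum repolist enabled | grep -i -e openstack -e epel
# both repos should appear with a non-zero package count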
Install Cinder:

controller:
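The controller-side Cinder setup was not captured in this transcript. Based on the credentials visible in stor1's cinder.conf below (user cinder, password cinder, database cinder on 192.168.10.6), a sketch of the usual Icehouse steps would be roughly the following; the passwords and the description string are assumptions:

[root@controller ~]# yum -y install openstack-cinder python-cinderclient
[root@controller ~]# mysql -uroot -p -e "CREATE DATABASE cinder; GRANT ALL ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'cinder';"
[root@controller ~]# keystone user-create --name=cinder --pass=cinder
[root@controller ~]# keystone user-role-add --user=cinder --tenant=service --role=admin
[root@controller ~]# keystone service-create --name=cinder --type=volume \
      --description="OpenStack Block Storage"
[root@controller ~]# keystone endpoint-create \
      --service-id=$(keystone service-list | awk '/ volume / {print $2}') \
      --publicurl=http://controller:8776/v1/%\(tenant_id\)s \
      --internalurl=http://controller:8776/v1/%\(tenant_id\)s \
      --adminurl=http://controller:8776/v1/%\(tenant_id\)s
# after pointing /etc/cinder/cinder.conf at the same qpid and database settings:
[root@controller ~]# cinder-manage db sync
[root@controller ~]# service openstack-cinder-api start
[root@controller ~]# service openstack-cinder-scheduler start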
stor1:
[root@stor1 ~]# pvcreate /dev/sdb
[root@stor1 ~]# vgcreate cinder-volumes /dev/sdb
[root@stor1 ~]# vim /etc/lvm/lvm.conf
devices {
    filter = [ "a/sda1/", "a/sdb/", "r/.*/" ]
}
[root@stor1 ~]# yum -y install openstack-cinder scsi-target-utils
[root@stor1 ~]# vim /etc/cinder/cinder.conf
[DEFAULT]
auth_strategy=keystone
rpc_backend=qpid
qpid_hostname=192.168.10.6
my_ip=192.168.10.9
glance_host=controller
iscsi_helper=tgtadm
volumes_dir=/etc/cinder/volumes
[keystone_authtoken]
identity_uri=http://controller:5000
auth_host=controller
auth_protocol=http
auth_port=35357
admin_user=cinder
admin_tenant_name=service
admin_password=cinder
[database]
connection=mysql://cinder:cinder@192.168.10.6/cinder
[root@stor1 ~]# vim /etc/tgt/targets.conf
include /etc/cinder/volumes/*
[root@stor1 ~]# vim /etc/init.d/openstack-cinder-volume
# the shipped init script is broken; edit start() to drop the distconfig config file:
start() {
    [ -x $exec ] || exit 5
    [ -f $config ] || exit 6
    echo -n $"Starting $prog: "
    daemon --user cinder --pidfile $pidfile "$exec --config-file $config --logfile $logfile &>/dev/null & echo \$! > $pidfile"
    retval=$?
    echo
    [ $retval -eq 0 ] && touch $lockfile
    return $retval
}
[root@stor1 ~]# service openstack-cinder-volume start
[root@stor1 ~]# service tgtd start
[root@stor1 ~]# chkconfig openstack-cinder-volume on
[root@stor1 ~]# chkconfig tgtd on
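Once tgtd and cinder-volume are running, the target daemon can be queried directly; the output may well be empty until a volume is first exported, since cinder only drops target definitions into /etc/cinder/volumes on export:

[root@stor1 ~]# tgtadm --lld iscsi --mode target --op show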
controller:

[root@controller ~]# . .demo-os.sh
[root@controller ~]# cinder create --display-name testVolume 2
+---------------------+--------------------------------------+
|       Property      |                Value                 |
+---------------------+--------------------------------------+
| attachments         | []                                   |
| availability_zone   | nova                                 |
| bootable            | false                                |
| created_at          | 2019-04-30T13:58:00.551667           |
| display_description | None                                 |
| display_name        | testVolume                           |
| encrypted           | False                                |
| id                  | a21403a0-7891-4d4a-b27d-daa7070be4d7 |
| metadata            | {}                                   |
| size                | 2                                    |
| snapshot_id         | None                                 |
| source_volid        | None                                 |
| status              | creating                             |
| volume_type         | None                                 |
+---------------------+--------------------------------------+
[root@controller ~]# cinder list
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
|                  ID                  |   Status  | Display Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| a21403a0-7891-4d4a-b27d-daa7070be4d7 | available |  testVolume  |  2   |     None    |  false   |             |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
stor1:

[root@stor1 ~]# lvs
  LV                                          VG             Attr       LSize Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  volume-a21403a0-7891-4d4a-b27d-daa7070be4d7 cinder-volumes -wi-a----- 2.00g
controller:

[root@controller ~]# . .demo-os.sh
[root@controller ~]# nova list
+--------------------------------------+-----------+--------+------------+-------------+----------------------------------------+
| ID                                   | Name      | Status | Task State | Power State | Networks                               |
+--------------------------------------+-----------+--------+------------+-------------+----------------------------------------+
| 395a24e4-d91d-46b7-a28b-8b81bf6e6fa4 | demo-0001 | ACTIVE | -          | Running     | demo-net=192.168.30.6, 192.168.243.153 |
+--------------------------------------+-----------+--------+------------+-------------+----------------------------------------+
[root@controller ~]# nova volume-attach demo-0001 a21403a0-7891-4d4a-b27d-daa7070be4d7
+----------+--------------------------------------+
| Property | Value                                |
+----------+--------------------------------------+
| device   | /dev/vdb                             |
| id       | a21403a0-7891-4d4a-b27d-daa7070be4d7 |
| serverId | 395a24e4-d91d-46b7-a28b-8b81bf6e6fa4 |
| volumeId | a21403a0-7891-4d4a-b27d-daa7070be4d7 |
+----------+--------------------------------------+
Check the newly attached disk inside instance demo-0001;
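Inside the guest the new virtio disk should show up as /dev/vdb (per the attach output above). A quick check from the VNC console, assuming a CirrOS image:

$ sudo fdisk -l /dev/vdb    # should report a 2 GiB disk with no partition table
$ dmesg | grep -i vdb       # kernel hot-plug message for the new virtio disk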
controller:
[root@controller ~]# cinder list
+--------------------------------------+--------+--------------+------+-------------+----------+--------------------------------------+
|                  ID                  | Status | Display Name | Size | Volume Type | Bootable | Attached to                          |
+--------------------------------------+--------+--------------+------+-------------+----------+--------------------------------------+
| a21403a0-7891-4d4a-b27d-daa7070be4d7 | in-use |  testVolume  |  2   |     None    |  false   | 395a24e4-d91d-46b7-a28b-8b81bf6e6fa4 |
+--------------------------------------+--------+--------------+------+-------------+----------+--------------------------------------+
[root@controller ~]# nova volume-detach demo-0001 a21403a0-7891-4d4a-b27d-daa7070be4d7
[root@controller ~]# cinder list
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
|                  ID                  |   Status  | Display Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| a21403a0-7891-4d4a-b27d-daa7070be4d7 | available |  testVolume  |  2   |     None    |  false   |             |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
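With the volume detached and back in the available state, it can be deleted to clean up; the backing logical volume on stor1 should then disappear:

[root@controller ~]# cinder delete testVolume
[root@stor1 ~]# lvs
# volume-a21403a0-7891-4d4a-b27d-daa7070be4d7 should no longer be listed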