Guide to installing and running OpenStack on ALT Linux p8
This guide is based on the Red Hat (RDO) installation instructions: https://docs.openstack.org/newton/install-guide-rdo/
The guide is a work in progress.
Minimum hardware requirements
- CPU cores: one;
- RAM: 4 GB or more;
- Disk: 20 GB.
* On a machine with 2 GB of RAM, out-of-memory conditions and process crashes were observed.
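As a quick check that the host meets these requirements, the standard utilities are enough (nothing OpenStack-specific is assumed):
nproc      # number of CPU cores
free -h    # available RAM
df -h /    # free disk space on the root filesystem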
Example installation with the networking service (neutron) on the controller node
Network interfaces:
- ens19 - OpenStack management network interface (10.0.0.0/24)
- ens20 - "provider interface"; in this guide the address range 203.0.113.101-203.0.113.250 from the 203.0.113.0/24 network is used, gateway 203.0.113.1
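Before proceeding, it is worth confirming that both interfaces exist and that the management address is assigned (ens19/ens20 are the names used in this setup; substitute your own):
ip addr show ens19
ip addr show ens20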
Installing the controller node
On the node, add the following to /etc/hosts (do not remove the 127.0.0.1 entry):
# Controller node
10.0.0.11 controller
# Compute node
10.0.0.31 compute1
Preparing for installation
# apt-get update -y
# apt-get dist-upgrade
- Removing firewalld
apt-get remove firewalld
Installing the software
# apt-get install python-module-pymysql openstack-nova chrony python-module-memcached python3-module-memcached \
  python-module-pymemcache python3-module-pymemcache mariadb-server python-module-MySQLdb python-module-openstackclient \
  openstack-glance python-module-glance python-module-glance_store python-module-glanceclient \
  openstack-nova-api openstack-nova-cells openstack-nova-cert openstack-nova-conductor openstack-nova-console \
  openstack-nova-scheduler rabbitmq-server openstack-keystone apache2-mod_wsgi memcached \
  openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge openstack-neutron-l3-agent \
  openstack-neutron-dhcp-agent openstack-neutron-server openstack-neutron-metadata-agent \
  openstack-dashboard spice-html5 openstack-nova-spicehtml5proxy mongo-server-mongod mongo-tools python-module-pymongo
Configuring time synchronization
Add the following to /etc/chrony.conf:
allow 10.0.0.0/24
If you have your own NTP server configured, replace "pool.ntp.org" with it:
pool pool.ntp.org iburst
# systemctl enable chronyd.service
Synchronizing state of chronyd.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable chronyd
# systemctl start chronyd.service
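To verify that time synchronization works, chrony's own client can be queried; a source marked with * is the one the clock is currently synchronized to:
# chronyc sources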
Configuring the SQL server
Comment out the "skip-networking" line in /etc/my.cnf.d/server.cnf.
# cat > /etc/my.cnf.d/openstack.cnf <<EOF
[mysqld]
bind-address = 10.0.0.11
default-storage-engine = innodb
innodb_file_per_table
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8
EOF
# systemctl enable mariadb
# systemctl start mariadb
Set the SQL server root password and remove the test tables.
- the default password is empty "" (after entering the new password, answer yes to all remaining questions)
# mysql_secure_installation
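As a quick sanity check, make sure the new root password works and that the server listens on the management address:
# mysql -u root -p -e "SHOW DATABASES;"
# ss -tln | grep 3306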
Configuring the RabbitMQ message broker
# systemctl enable rabbitmq.service
# systemctl start rabbitmq.service
Add a user:
# rabbitmqctl add_user openstack RABBIT_PASS
# rabbitmqctl set_permissions openstack ".*" ".*" ".*"
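Optionally verify that the user and its permissions were created:
# rabbitmqctl list_users
# rabbitmqctl list_permissions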
Configuring memcached
In /etc/sysconfig/memcached, replace the line LISTEN="127.0.0.1" with
LISTEN="10.0.0.11"
# systemctl enable memcached
# systemctl start memcached
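You can confirm that memcached now listens on the management address rather than on localhost:
# ss -tlnp | grep 11211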
Configuring Keystone
Create the database and a user with a password.
# mysql -u root -p
CREATE DATABASE keystone;
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'KEYSTONE_DBPASS';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'KEYSTONE_DBPASS';
Save the original configuration file.
# mv /etc/keystone/keystone.conf /etc/keystone/keystone.conf.orig
# cat > /etc/keystone/keystone.conf <<EOF
[DEFAULT]
admin_token = ADMIN_TOKEN
[assignment]
[auth]
[cache]
[catalog]
[cors]
[cors.subdomain]
[credential]
[database]
connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone
[domain_config]
[endpoint_filter]
[endpoint_policy]
[eventlet_server]
[eventlet_server_ssl]
[federation]
[fernet_tokens]
[identity]
[identity_mapping]
[kvs]
[ldap]
[matchmaker_redis]
[memcache]
[oauth1]
[os_inherit]
[oslo_messaging_amqp]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_middleware]
[oslo_policy]
[paste_deploy]
[policy]
[resource]
[revoke]
[role]
[saml]
[shadow_users]
[signing]
[ssl]
[token]
provider = fernet
[tokenless_auth]
[trust]
EOF
Populate the keystone database
# su -s /bin/sh -c "keystone-manage db_sync" keystone
# keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
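fernet_setup should have created a key repository (by default /etc/keystone/fernet-keys/ containing keys named 0 and 1); if it is missing, token issuing will fail later:
# ls /etc/keystone/fernet-keys/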
Configuring apache2 for keystone
Remove all lines with IfVersion from /etc/httpd2/conf/sites-available/openstack-keystone.conf:
<IfVersion >= 2.4>
</IfVersion>
Enable the keystone site configuration:
# a2ensite openstack-keystone
Add ServerName to the configuration.
echo ServerName controller >/etc/httpd2/conf/sites-enabled/servername.conf
systemctl enable httpd2.service
systemctl start httpd2.service
Creating domains, users, and roles
For the subsequent steps it is recommended to create a separate user.
# adduser admin
# su - admin
cat >auth <<EOF
export OS_TOKEN=ADMIN_TOKEN
export OS_URL=http://controller:35357/v3
export OS_IDENTITY_API_VERSION=3
EOF
Register the identity service and its endpoints, then create the default domain, the projects, and the admin and demo users.
# su - admin
$ . auth
$ openstack service create --name keystone --description "OpenStack Identity" identity
The password for the admin user is ADMIN_PASS.
openstack endpoint create --region RegionOne identity public http://controller:5000/v3
openstack endpoint create --region RegionOne identity internal http://controller:5000/v3
openstack endpoint create --region RegionOne identity admin http://controller:35357/v3
openstack domain create --description "Default Domain" default
openstack project create --domain default --description "Admin Project" admin
openstack user create --domain default --password ADMIN_PASS admin
openstack role create admin
openstack role add --project admin --user admin admin
openstack project create --domain default --description "Service Project" service openstack project create --domain default --description "Demo Project" demo openstack user create --domain default --password demo demo openstack role create user openstack role add --project demo --user demo user
Setting up the environment
# systemctl restart httpd2.service
# su - admin
$ rm auth
cat > admin-openrc <<EOF
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=ADMIN_PASS
export OS_AUTH_URL=http://controller:35357/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
EOF
cat > demo-openrc <<EOF
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=demo
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
EOF
Verifying the environment
su - admin
. admin-openrc
openstack token issue
The output should look something like this:
+------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field      | Value                                                                                                                                                                                    |
+------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| expires    | 2017-05-16T15:48:13.101936Z                                                                                                                                                              |
| id         | gAAAAABZGxEtWlJ0eEGve9Y1VvIRk-wQtZN128A92YPFb5iuTJuo2O7G6Gd9IYdnyPZP6xAXDmT2VzIVbuhvOKQi9bItygi2fWRTw7byAZZdKIvR3mAHpsZyLPpS61hM2ydQLsf6g57xhMKy5y1Fw4Z3uXPabK27dZi1aTslIQZB4RA4Q9WZYWM |
| project_id | d22531fa71e849078c44bb1f00117d87                                                                                                                                                         |
| user_id    | 7be0608abb9641c5bd8d9f7a3bf519cb                                                                                                                                                         |
+------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
Configuring the glance service
mysql -u root -p
CREATE DATABASE glance;
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'GLANCE_DBPASS';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'GLANCE_DBPASS';

su - admin
. admin-openrc
Create the glance service user with a password and register the service and its endpoints.
openstack user create --domain default --password GLANCE_PASS glance
openstack role add --project service --user glance admin
openstack service create --name glance --description "OpenStack Image" image
openstack endpoint create --region RegionOne image public http://controller:9292
openstack endpoint create --region RegionOne image internal http://controller:9292
openstack endpoint create --region RegionOne image admin http://controller:9292
Set up the configuration file:
cd /etc/glance/
mv glance-api.conf glance-api.conf_orig
cat >glance-api.conf <<EOF
[DEFAULT]
[cors]
[cors.subdomain]
[database]
connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance
[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
[image_format]
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = GLANCE_PASS
[matchmaker_redis]
[oslo_concurrency]
[oslo_messaging_amqp]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_policy]
[paste_deploy]
flavor = keystone
[profiler]
[store_type_location_strategy]
[task]
[taskflow_executor]
EOF
mv /etc/glance/glance-registry.conf /etc/glance/glance-registry.conf.orig
cat > /etc/glance/glance-registry.conf <<EOF
[DEFAULT]
[database]
connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance
[glance_store]
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = GLANCE_PASS
[matchmaker_redis]
[oslo_messaging_amqp]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_policy]
[paste_deploy]
flavor = keystone
[profiler]
EOF
# su -s /bin/sh -c "glance-manage db_sync" glance systemctl enable openstack-glance-api.service openstack-glance-registry.service systemctl start openstack-glance-api.service openstack-glance-registry.service
Verification
su - admin
$ . admin-openrc
$ wget http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img
Upload the image to glance.
$ openstack image create "cirros" --file cirros-0.3.4-x86_64-disk.img --disk-format qcow2 --container-format bare --public
Check that the upload succeeded:
$ openstack image list
+--------------------------------------+--------+--------+
| ID                                   | Name   | Status |
+--------------------------------------+--------+--------+
| f1008c6a-f86a-4c48-8332-2573321e4be1 | cirros | active |
+--------------------------------------+--------+--------+
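For more details about the uploaded image (size, checksum, visibility), you can additionally run:
$ openstack image show cirros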
Installing the compute node
Initial preparation on the controller node
Create the databases.
mysql -u root -p
CREATE DATABASE nova_api;
CREATE DATABASE nova;
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'NOVA_DBPASS';
Create the nova user with the password that will later be used during configuration.
openstack user create --domain default --password NOVA_PASS nova
Add the admin role to the nova user:
openstack role add --project service --user nova admin
Create the nova service:
openstack service create --name nova --description "OpenStack Compute" compute
Create the API endpoints:
openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1/%\(tenant_id\)s
openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1/%\(tenant_id\)s
openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1/%\(tenant_id\)s
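The resulting service catalog and endpoints can be reviewed at any time:
openstack catalog list
openstack endpoint list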
Configuring nova
cd /etc/nova/
mv nova.conf nova.conf.orig
cat >nova.conf <<'EOF'
[DEFAULT]
log_dir = /var/log/nova
state_path = /var/lib/nova
connection_type = libvirt
compute_driver = libvirt.LibvirtDriver
image_service = nova.image.glance.GlanceImageService
volume_api_class = nova.volume.cinder.API
auth_strategy = keystone
network_api_class = nova.network.neutronv2.api.API
service_neutron_metadata_proxy = True
security_group_api = neutron
injected_network_template = /usr/share/nova/interfaces.template
web = /usr/share/spice-html5
enabled_apis = osapi_compute,metadata
rpc_backend = rabbit
my_ip = 10.0.0.11
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
[api_database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api
[barbican]
[cache]
[cells]
[cinder]
[conductor]
[cors]
[cors.subdomain]
[database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova
[ephemeral_storage_encryption]
[glance]
api_servers = http://controller:9292
[guestfs]
[hyperv]
[image_file_url]
[ironic]
[keymgr]
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = NOVA_PASS
[libvirt]
[matchmaker_redis]
[metrics]
[neutron]
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS
service_metadata_proxy = True
metadata_proxy_shared_secret = METADATA_SECRET
[osapi_v21]
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
[oslo_messaging_amqp]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = RABBIT_PASS
[oslo_middleware]
[oslo_policy]
[rdp]
[serial_console]
[spice]
spicehtml5proxy_host = ::
html5proxy_base_url = http://controller:6082/spice_auto.html
enabled = True
keymap = en-us
[ssl]
[trusted_computing]
[upgrade_levels]
[vmware]
[vnc]
vncserver_listen = $my_ip
vncserver_proxyclient_address = $my_ip
[workarounds]
[xenserver]
EOF
Populate the nova databases
su -s /bin/sh -c "nova-manage api_db sync" nova su -s /bin/sh -c "nova-manage db sync" nova
Starting the nova services
# systemctl enable openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
# systemctl start openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
Setting up the compute node
Install the packages:
apt-get update
apt-get install openstack-nova-compute libvirt-daemon openstack-neutron-linuxbridge ebtables ipset kernel-modules-ipset-std-def
apt-get dist-upgrade
Replace the IP address 10.0.0.31 in the configuration below with the IP address of your compute node.
cd /etc/nova
mv nova.conf nova.conf.orig
cat >nova.conf <<EOF
[DEFAULT]
log_dir = /var/log/nova
state_path = /var/lib/nova
connection_type = libvirt
compute_driver = libvirt.LibvirtDriver
image_service = nova.image.glance.GlanceImageService
volume_api_class = nova.volume.cinder.API
auth_strategy = keystone
network_api_class = nova.network.neutronv2.api.API
service_neutron_metadata_proxy = True
security_group_api = neutron
injected_network_template = /usr/share/nova/interfaces.template
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:RABBIT_PASS@controller
my_ip = 10.0.0.31
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
[api_database]
[barbican]
[cache]
[cells]
[cinder]
[conductor]
[cors]
[cors.subdomain]
[database]
[ephemeral_storage_encryption]
[glance]
api_servers = http://controller:9292
[guestfs]
[hyperv]
[image_file_url]
[ironic]
[keymgr]
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = NOVA_PASS
[libvirt]
virt_type = qemu
[matchmaker_redis]
[metrics]
[neutron]
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = Default
user_domain_name = Default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS
[osapi_v21]
[oslo_concurrency]
lock_path = /var/run/nova
[oslo_messaging_amqp]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_middleware]
[oslo_policy]
[rdp]
[serial_console]
[spice]
spicehtml5proxy_host = ::
html5proxy_base_url = http://controller:6082/spice_auto.html
enabled = True
agent_enabled = True
server_listen = ::
server_proxyclient_address = 10.0.0.31
keymap = en-us
[ssl]
[trusted_computing]
[upgrade_levels]
[vmware]
[vnc]
enabled = false
[workarounds]
[xenserver]
EOF
Starting nova
# systemctl enable libvirtd.service openstack-nova-compute.service
# systemctl start libvirtd.service openstack-nova-compute.service
Completing the installation
Check whether hardware acceleration is available.
egrep -c '(vmx|svm)' /proc/cpuinfo
If the output is not 0, change the following line in /etc/nova/nova.conf
virt_type = qemu
to
virt_type = kvm
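The same check and edit can be scripted; this is only a convenience sketch of the two steps above:
if [ "$(egrep -c '(vmx|svm)' /proc/cpuinfo)" -gt 0 ]; then
    sed -i 's/^virt_type = qemu/virt_type = kvm/' /etc/nova/nova.conf
    systemctl restart openstack-nova-compute.service
fi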
Verifying the nova installation
On the controller node, run:
# su - admin
$ . admin-openrc
$ openstack compute service list
+----+------------------+------------+----------+---------+-------+----------------------------+
| Id | Binary           | Host       | Zone     | Status  | State | Updated At                 |
+----+------------------+------------+----------+---------+-------+----------------------------+
|  1 | nova-consoleauth | controller | internal | enabled | up    | 2017-05-18T09:09:12.000000 |
|  2 | nova-conductor   | controller | internal | enabled | up    | 2017-05-18T09:09:14.000000 |
|  3 | nova-scheduler   | controller | internal | enabled | up    | 2017-05-18T09:09:19.000000 |
|  6 | nova-compute     | compute1   | nova     | enabled | up    | 2017-05-18T09:09:16.000000 |
+----+------------------+------------+----------+---------+-------+----------------------------+
Configuring the networking service (neutron)
Configuring the controller node
mysql -u root -p
CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'NEUTRON_DBPASS';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'NEUTRON_DBPASS';

su - admin
. admin-openrc
openstack user create --domain default --password NEUTRON_PASS neutron
openstack role add --project service --user neutron admin
openstack service create --name neutron --description "OpenStack Networking" network
openstack endpoint create --region RegionOne network public http://controller:9696
openstack endpoint create --region RegionOne network internal http://controller:9696
openstack endpoint create --region RegionOne network admin http://controller:9696
cd /etc/neutron
mv neutron.conf neutron.conf.dist
cat >neutron.conf <<EOF
[DEFAULT]
core_plugin = neutron.plugins.ml2.plugin.Ml2Plugin
service_plugins = neutron.services.l3_router.l3_router_plugin.L3RouterPlugin
state_path = /var/lib/neutron
log_dir = /var/log/neutron
core_plugin = ml2
service_plugins =
rpc_backend = rabbit
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True
[agent]
root_helper = sudo neutron-rootwrap /etc/neutron/rootwrap.conf
[cors]
[cors.subdomain]
[database]
connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron
[keystone_authtoken]
signing_dir = /var/cache/neutron/keystone-signing
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = NEUTRON_PASS
[matchmaker_redis]
[nova]
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = NOVA_PASS
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
[oslo_messaging_amqp]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = RABBIT_PASS
[oslo_policy]
[qos]
[quotas]
[ssl]
EOF
Configuring the Modular Layer 2 (ML2) plug-in
cd /etc/neutron/plugins/ml2/
mv ml2_conf.ini ml2_conf.ini.ORIG
cat > ml2_conf.ini <<EOF
[DEFAULT]
[ml2]
type_drivers = flat,vlan
tenant_network_types =
mechanism_drivers = linuxbridge
extension_drivers = port_security
[ml2_type_flat]
flat_networks = provider
[ml2_type_geneve]
[ml2_type_gre]
[ml2_type_vlan]
[ml2_type_vxlan]
[securitygroup]
enable_ipset = True
EOF
cd /etc/neutron/plugins/ml2/
mv linuxbridge_agent.ini linuxbridge_agent.ini.ORIG
cat >linuxbridge_agent.ini <<EOF
[DEFAULT]
[agent]
[linux_bridge]
physical_interface_mappings = provider:ens20
[securitygroup]
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
[vxlan]
enable_vxlan = False
EOF
Configuring the DHCP agent
cd /etc/neutron
mv dhcp_agent.ini dhcp_agent.ini_ORIG
cat >dhcp_agent.ini <<EOF
[DEFAULT]
dhcp_delete_namespaces = True
interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = True
[AGENT]
EOF
Populating the neutron database
ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head"
systemctl restart openstack-nova-api.service
systemctl enable neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service neutron-l3-agent.service
systemctl start neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service neutron-l3-agent.service
Configuring neutron on the compute node
cd /etc/neutron
mv neutron.conf neutron.conf_ORIG
cat >neutron.conf <<EOF
[DEFAULT]
rpc_backend = rabbit
auth_strategy = keystone
[agent]
root_helper = sudo neutron-rootwrap /etc/neutron/rootwrap.conf
[cors]
[cors.subdomain]
[database]
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = NEUTRON_PASS
[matchmaker_redis]
[nova]
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
[oslo_messaging_amqp]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = RABBIT_PASS
[oslo_policy]
[qos]
[quotas]
[ssl]
EOF
cd /etc/neutron/plugins/ml2
mv linuxbridge_agent.ini linuxbridge_agent.ini_ORIG
cat >linuxbridge_agent.ini <<EOF
[DEFAULT]
[agent]
[linux_bridge]
physical_interface_mappings = provider:ens20
[securitygroup]
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
[vxlan]
enable_vxlan = False
EOF
systemctl restart openstack-nova-compute.service
systemctl enable neutron-linuxbridge-agent.service
systemctl start neutron-linuxbridge-agent.service
Verifying neutron
On the controller node, run
su - admin
. admin-openrc
neutron ext-list
+---------------------------+-----------------------------------------------+
| alias                     | name                                          |
+---------------------------+-----------------------------------------------+
| default-subnetpools       | Default Subnetpools                           |
| network-ip-availability   | Network IP Availability                       |
| network_availability_zone | Network Availability Zone                     |
| auto-allocated-topology   | Auto Allocated Topology Services              |
| ext-gw-mode               | Neutron L3 Configurable external gateway mode |
| binding                   | Port Binding                                  |
| agent                     | agent                                         |
| subnet_allocation         | Subnet Allocation                             |
| l3_agent_scheduler        | L3 Agent Scheduler                            |
| tag                       | Tag support                                   |
| external-net              | Neutron external network                      |
| net-mtu                   | Network MTU                                   |
| availability_zone         | Availability Zone                             |
| quotas                    | Quota management support                      |
| l3-ha                     | HA Router extension                           |
| flavors                   | Neutron Service Flavors                       |
| provider                  | Provider Network                              |
| multi-provider            | Multi Provider Network                        |
| address-scope             | Address scope                                 |
| extraroute                | Neutron Extra Route                           |
| timestamp_core            | Time Stamp Fields addition for core resources |
| router                    | Neutron L3 Router                             |
| extra_dhcp_opt            | Neutron Extra DHCP opts                       |
| dns-integration           | DNS Integration                               |
| security-group            | security-group                                |
| dhcp_agent_scheduler      | DHCP Agent Scheduler                          |
| router_availability_zone  | Router Availability Zone                      |
| rbac-policies             | RBAC Policies                                 |
| standard-attr-description | standard-attr-description                     |
| port-security             | Port Security                                 |
| allowed-address-pairs     | Allowed Address Pairs                         |
| dvr                       | Distributed Virtual Router                    |
+---------------------------+-----------------------------------------------+
neutron agent-list
+--------------------------------------+--------------------+------------+-------+----------------+---------------------------+
| id                                   | agent_type         | host       | alive | admin_state_up | binary                    |
+--------------------------------------+--------------------+------------+-------+----------------+---------------------------+
| 08905043-5010-4b87-bba5-aedb1956e27a | Linux bridge agent | compute1   | :-)   | True           | neutron-linuxbridge-agent |
| 27eee952-a748-467b-bf71-941e89846a92 | Linux bridge agent | controller | :-)   | True           | neutron-linuxbridge-agent |
| dd3644c9-1a3a-435a-9282-eb306b4b0391 | DHCP agent         | controller | :-)   | True           | neutron-dhcp-agent        |
| f49a4b81-afd6-4b3d-b923-66c8f0517099 | Metadata agent     | controller | :-)   | True           | neutron-metadata-agent    |
+--------------------------------------+--------------------+------------+-------+----------------+---------------------------+
Configuring the web interface
Enable the SPICE console (the required settings are already present in the configuration files)
systemctl enable openstack-nova-spicehtml5proxy.service
systemctl start openstack-nova-spicehtml5proxy.service
cd /etc/openstack-dashboard
mv local_settings local_settings_ORIG
cat >local_settings <<EOF
# -*- coding: utf-8 -*-
import os
from django.utils.translation import ugettext_lazy as _
from openstack_dashboard import exceptions
from openstack_dashboard.settings import HORIZON_CONFIG
DEBUG = False
TEMPLATE_DEBUG = DEBUG
OPENSTACK_HOST = "controller"
ALLOWED_HOSTS = ['*', ]
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': 'controller:11211',
    }
}
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 2,
}
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "default"
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
OPENSTACK_NEUTRON_NETWORK = {
    'enable_router': False,
    'enable_quotas': False,
    'enable_distributed_router': False,
    'enable_ha_router': False,
    'enable_lb': False,
    'enable_firewall': False,
    'enable_vpn': False,
    'enable_fip_topology_check': False,
}
TIME_ZONE = "Europe/Moscow"
WEBROOT = '/dashboard/'
LOCAL_PATH = '/tmp'
SECRET_KEY='da8b52fb799a5319e747'
EMAIL_BACKEND = 'django.core.mail.backends.console.EmailBackend'
OPENSTACK_KEYSTONE_BACKEND = {
    'name': 'native',
    'can_edit_user': True,
    'can_edit_group': True,
    'can_edit_project': True,
    'can_edit_domain': True,
    'can_edit_role': True,
}
OPENSTACK_HYPERVISOR_FEATURES = {
    'can_set_mount_point': False,
    'can_set_password': False,
    'requires_keypair': False,
}
OPENSTACK_CINDER_FEATURES = {
    'enable_backup': False,
}
OPENSTACK_HEAT_STACK = {
    'enable_user_pass': True,
}
IMAGE_CUSTOM_PROPERTY_TITLES = {
    "architecture": _("Architecture"),
    "image_state": _("Euca2ools state"),
    "project_id": _("Project ID"),
    "image_type": _("Image Type"),
}
IMAGE_RESERVED_CUSTOM_PROPERTIES = []
API_RESULT_LIMIT = 1000
API_RESULT_PAGE_SIZE = 20
SWIFT_FILE_TRANSFER_CHUNK_SIZE = 512 * 1024
DROPDOWN_MAX_ITEMS = 30
TIME_ZONE = "UTC"
POLICY_FILES_PATH = '/etc/openstack-dashboard'
LOGGING = {
    'version': 1,
    # When set to True this will disable all logging except
    # for loggers specified in this configuration dictionary. Note that
    # if nothing is specified here and disable_existing_loggers is True,
    # django.db.backends will still log unless it is disabled explicitly.
    'disable_existing_loggers': False,
    'handlers': {
        'null': {
            'level': 'DEBUG',
            'class': 'logging.NullHandler',
        },
        'console': {
            # Set the level to "DEBUG" for verbose output logging.
            'level': 'INFO',
            'class': 'logging.StreamHandler',
        },
    },
    'loggers': {
        # Logging from django.db.backends is VERY verbose, send to null
        # by default.
        'django.db.backends': {'handlers': ['null'], 'propagate': False},
        'requests': {'handlers': ['null'], 'propagate': False},
        'horizon': {'handlers': ['console'], 'level': 'DEBUG', 'propagate': False},
        'openstack_dashboard': {'handlers': ['console'], 'level': 'DEBUG', 'propagate': False},
        'novaclient': {'handlers': ['console'], 'level': 'DEBUG', 'propagate': False},
        'cinderclient': {'handlers': ['console'], 'level': 'DEBUG', 'propagate': False},
        'keystoneclient': {'handlers': ['console'], 'level': 'DEBUG', 'propagate': False},
        'glanceclient': {'handlers': ['console'], 'level': 'DEBUG', 'propagate': False},
        'neutronclient': {'handlers': ['console'], 'level': 'DEBUG', 'propagate': False},
        'heatclient': {'handlers': ['console'], 'level': 'DEBUG', 'propagate': False},
        'ceilometerclient': {'handlers': ['console'], 'level': 'DEBUG', 'propagate': False},
        'swiftclient': {'handlers': ['console'], 'level': 'DEBUG', 'propagate': False},
        'openstack_auth': {'handlers': ['console'], 'level': 'DEBUG', 'propagate': False},
        'nose.plugins.manager': {'handlers': ['console'], 'level': 'DEBUG', 'propagate': False},
        'django': {'handlers': ['console'], 'level': 'DEBUG', 'propagate': False},
        'iso8601': {'handlers': ['null'], 'propagate': False},
        'scss': {'handlers': ['null'], 'propagate': False},
    },
}
SECURITY_GROUP_RULES = {
    'all_tcp': {'name': _('All TCP'), 'ip_protocol': 'tcp', 'from_port': '1', 'to_port': '65535'},
    'all_udp': {'name': _('All UDP'), 'ip_protocol': 'udp', 'from_port': '1', 'to_port': '65535'},
    'all_icmp': {'name': _('All ICMP'), 'ip_protocol': 'icmp', 'from_port': '-1', 'to_port': '-1'},
    'ssh': {'name': 'SSH', 'ip_protocol': 'tcp', 'from_port': '22', 'to_port': '22'},
    'smtp': {'name': 'SMTP', 'ip_protocol': 'tcp', 'from_port': '25', 'to_port': '25'},
    'dns': {'name': 'DNS', 'ip_protocol': 'tcp', 'from_port': '53', 'to_port': '53'},
    'http': {'name': 'HTTP', 'ip_protocol': 'tcp', 'from_port': '80', 'to_port': '80'},
    'pop3': {'name': 'POP3', 'ip_protocol': 'tcp', 'from_port': '110', 'to_port': '110'},
    'imap': {'name': 'IMAP', 'ip_protocol': 'tcp', 'from_port': '143', 'to_port': '143'},
    'ldap': {'name': 'LDAP', 'ip_protocol': 'tcp', 'from_port': '389', 'to_port': '389'},
    'https': {'name': 'HTTPS', 'ip_protocol': 'tcp', 'from_port': '443', 'to_port': '443'},
    'smtps': {'name': 'SMTPS', 'ip_protocol': 'tcp', 'from_port': '465', 'to_port': '465'},
    'imaps': {'name': 'IMAPS', 'ip_protocol': 'tcp', 'from_port': '993', 'to_port': '993'},
    'pop3s': {'name': 'POP3S', 'ip_protocol': 'tcp', 'from_port': '995', 'to_port': '995'},
    'ms_sql': {'name': 'MS SQL', 'ip_protocol': 'tcp', 'from_port': '1433', 'to_port': '1433'},
    'mysql': {'name': 'MYSQL', 'ip_protocol': 'tcp', 'from_port': '3306', 'to_port': '3306'},
    'rdp': {'name': 'RDP', 'ip_protocol': 'tcp', 'from_port': '3389', 'to_port': '3389'},
}
REST_API_REQUIRED_SETTINGS = ['OPENSTACK_HYPERVISOR_FEATURES', 'LAUNCH_INSTANCE_DEFAULTS']
EOF
Restarting the applications
a2ensite openstack-dashboard
systemctl restart httpd2.service memcached.service
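After the restart the dashboard should be reachable at http://controller/dashboard/ (the WEBROOT set above). A quick check from the command line, assuming curl is installed:
curl -I http://controller/dashboard/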
Launching a virtual machine
Creating a network
su - admin
. admin-openrc
neutron net-create --shared --provider:physical_network provider --provider:network_type flat provider
Created a new network:
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| id                        | 0e62efcd-8cee-46c7-b163-d8df05c3c5ad |
| mtu                       | 1500                                 |
| name                      | provider                             |
| port_security_enabled     | True                                 |
| provider:network_type     | flat                                 |
| provider:physical_network | provider                             |
| provider:segmentation_id  |                                      |
| router:external           | False                                |
| shared                    | True                                 |
| status                    | ACTIVE                               |
| subnets                   |                                      |
| tenant_id                 | d84313397390425c8ed50b2f6e18d092     |
+---------------------------+--------------------------------------+
Replace the address pool, DNS server, and gateway in the command below with your own values.
neutron subnet-create --name provider --allocation-pool start=203.0.113.101,end=203.0.113.250 --dns-nameserver 8.8.4.4 --gateway 203.0.113.1 provider 203.0.113.0/24
su - admin
. admin-openrc
Create a new flavor for the test image.
openstack flavor create --id 0 --vcpus 1 --ram 64 --disk 1 m1.nano
Many cloud images support key-based authentication, so generate an SSH key pair and import it.
. demo-openrc
ssh-keygen -q -N ""
openstack keypair create --public-key ~/.ssh/id_rsa.pub mykey
Check the import:
openstack keypair list
+-------+-------------------------------------------------+
| Name  | Fingerprint                                     |
+-------+-------------------------------------------------+
| mykey | be:68:58:f8:0a:6e:1e:c7:36:1c:8c:ff:c9:30:3f:60 |
+-------+-------------------------------------------------+
Create security group rules
openstack security group rule create --proto icmp default
openstack security group rule create --proto tcp --dst-port 22 default
Check that the cirros image, the m1.nano flavor, and the default security group exist.
openstack flavor list
openstack image list
openstack security group list
From the output below, note the ID of the provider network:
openstack network list
+--------------------------------------+-------------+--------------------------------------+
| ID                                   | Name        | Subnets                              |
+--------------------------------------+-------------+--------------------------------------+
| 4716ddfe-6e60-40e7-b2a8-42e57bf3c31c | selfservice | 2112d5eb-f9d6-45fd-906e-7cabd38b7c7c |
| b5b6993c-ddf9-40e7-91d0-86806a42edb8 | provider    | 310911f6-acf0-4a47-824e-3032916582ff |
+--------------------------------------+-------------+--------------------------------------+
Create the virtual machine
openstack server create --flavor m1.nano --image cirros --nic net-id=PROVIDER_NET_ID --security-group default --key-name mykey provider-instance
+--------------------------------------+-----------------------------------------------+
| Property                             | Value                                         |
+--------------------------------------+-----------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                        |
| OS-EXT-AZ:availability_zone          | nova                                          |
| OS-EXT-STS:power_state               | 0                                             |
| OS-EXT-STS:task_state                | scheduling                                    |
| OS-EXT-STS:vm_state                  | building                                      |
| OS-SRV-USG:launched_at               | -                                             |
| OS-SRV-USG:terminated_at             | -                                             |
| accessIPv4                           |                                               |
| accessIPv6                           |                                               |
| adminPass                            | hdF4LMQqC5PB                                  |
| config_drive                         |                                               |
| created                              | 2015-09-17T21:58:18Z                          |
| flavor                               | m1.nano (0)                                   |
| hostId                               |                                               |
| id                                   | 181c52ba-aebc-4c32-a97d-2e8e82e4eaaf          |
| image                                | cirros (38047887-61a7-41ea-9b49-27987d5e8bb9) |
| key_name                             | mykey                                         |
| metadata                             | {}                                            |
| name                                 | provider-instance                             |
| os-extended-volumes:volumes_attached | []                                            |
| progress                             | 0                                             |
| security_groups                      | default                                       |
| status                               | BUILD                                         |
| tenant_id                            | f5b2ccaa75ac413591f12fcaa096aa5c              |
| updated                              | 2015-09-17T21:58:18Z                          |
| user_id                              | 684286a9079845359882afc3aa5011fb              |
+--------------------------------------+-----------------------------------------------+
Check the status of the virtual machine:
openstack server list
+--------------------------------------+-------------------+--------+------------------------+------------+
| ID                                   | Name              | Status | Networks               | Image Name |
+--------------------------------------+-------------------+--------+------------------------+------------+
| 181c52ba-aebc-4c32-a97d-2e8e82e4eaaf | provider-instance | ACTIVE | provider=203.0.113.103 | cirros     |
+--------------------------------------+-------------------+--------+------------------------+------------+
Check that the gateway is reachable.
ping -c 4 203.0.113.1
PING 203.0.113.1 (203.0.113.1) 56(84) bytes of data.
64 bytes from 203.0.113.1: icmp_req=1 ttl=64 time=0.357 ms
64 bytes from 203.0.113.1: icmp_req=2 ttl=64 time=0.473 ms
64 bytes from 203.0.113.1: icmp_req=3 ttl=64 time=0.504 ms
64 bytes from 203.0.113.1: icmp_req=4 ttl=64 time=0.470 ms

--- 203.0.113.1 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 2998ms
rtt min/avg/max/mdev = 0.357/0.451/0.504/0.055 ms
Check that the virtual machine is reachable.
ping -c 4 203.0.113.103
PING 203.0.113.103 (203.0.113.103) 56(84) bytes of data.
64 bytes from 203.0.113.103: icmp_req=1 ttl=63 time=3.18 ms
64 bytes from 203.0.113.103: icmp_req=2 ttl=63 time=0.981 ms
64 bytes from 203.0.113.103: icmp_req=3 ttl=63 time=1.06 ms
64 bytes from 203.0.113.103: icmp_req=4 ttl=63 time=0.929 ms

--- 203.0.113.103 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3002ms
rtt min/avg/max/mdev = 0.929/1.539/3.183/0.951 ms
Log in to the virtual machine.
ssh cirros@203.0.113.103
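If SSH is not reachable, the SPICE console configured earlier can also be used; the console URL for the instance can be requested from nova (the exact option name may differ between client versions):
openstack console url show --spice provider-instance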