OpenStack Deployment Document
An OpenStack deployment typically uses the following components:
- OpenStack Identity (keystone) – authentication
- OpenStack Dashboard (horizon) – web dashboard
- OpenStack Compute (nova), including the use of multiple hypervisor drivers
- OpenStack Object Storage (swift), or another object storage solution
- OpenStack Block Storage (cinder) – block storage
- OpenStack Image service (glance) – image and template storage
- OpenStack Networking (neutron) – networking
Dashboard (Horizon) – Provides a web-based self-service portal to interact with underlying OpenStack services, such as launching an instance, assigning IP addresses, and configuring access controls.
Compute (Nova) – Manages the lifecycle of compute instances in an OpenStack environment. Responsibilities include spawning, scheduling, and decommissioning of virtual machines on demand.
Networking (Neutron) – Enables Network-Connectivity-as-a-Service for other OpenStack services, such as OpenStack Compute. Provides an API for users to define networks and the attachments into them. Has a pluggable architecture that supports many popular networking vendors and technologies.
Object Storage (Swift) – Stores and retrieves arbitrary unstructured data objects via a RESTful, HTTP-based API. It is highly fault tolerant thanks to its data replication and scale-out architecture. It is not implemented like a file server with mountable directories; instead it writes objects and files to multiple drives, ensuring the data is replicated across a server cluster.
Block Storage (Cinder) – Provides persistent block storage to running instances. Its pluggable driver architecture facilitates the creation and management of block storage devices.
Identity service (Keystone) – Provides an authentication and authorization service for other OpenStack services. Provides a catalog of endpoints for all OpenStack services.
Image service (Glance) – Stores and retrieves virtual machine disk images. OpenStack Compute makes use of these during instance provisioning.
Telemetry (Ceilometer) – Monitors and meters the OpenStack cloud for billing, benchmarking, scalability, and statistical purposes.
Installation of "CentOS 7.0":
CentOS 7.0 Installation Process
1. After downloading the latest version of CentOS from the links above or from the official CentOS download page, burn it to a DVD or create a bootable USB stick using a LiveUSB creator such as Unetbootin.
2. After you have created the bootable installer media, place the DVD/USB in the appropriate drive, start the computer, select your boot device, and the first CentOS 7 prompt should appear. At the prompt, choose Install CentOS 7 and press the [Enter] key.
CentOS 7 Boot Menu
3. The system will start loading the media installer and a Welcome screen should appear. Select your installation language, which will be used throughout the entire installation procedure, and click Continue.
CentOS Installer Loading
Select Installation Process Language
4. The next screen is the Installation Summary. It contains many options to fully customize your system. The first thing you may want to set is the time. Click Date & Time, select your server's physical location on the provided map, and hit the upper Done button to apply the configuration.
Select Date & Time and Location
5. The next step is to choose your Language Support and Keyboard settings. Choose the main and any extra languages for your system and, when you're finished, hit the Done button.
Select Language and Keyboard
Select English Language
6. In the same way, choose your Keyboard Layout by hitting the plus button, and test the configuration using the input field on the right. After you finish setting up the keyboard, again hit the upper Done button to apply the changes and go back to the Installation Summary screen.
Choose Keyboard Layout
Choose English Keyboard
7. In the next step you can customize your installation by using Installation Sources other than your local DVD/USB media, such as network locations using the HTTP, HTTPS, FTP, or NFS protocols, and even add additional repositories, but use these methods only if you know what you're doing. Otherwise, leave the default Auto-detected installation media and hit Done to continue.
Choose Installation Sources
Auto Detect Installation Type
8. Next, choose the software for your system installation. CentOS offers a number of server and desktop environments to choose from, but if you want a high degree of customization, especially if you are going to run CentOS 7 as a server platform, I suggest you select Minimal Install with Compatibility Libraries as an add-on. This installs a minimal base system, and you can add other packages later as your needs require using the yum groupinstall command.
Software Selection
Select CentOS 7 Minimal Install
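As a sketch of what adding packages after a Minimal Install looks like (the group name below is only an example; the available groups vary by release, so list them first):

```shell
# List the package groups the configured repositories provide
# yum group list
# Install one of them, for example the development tool chain
# yum groupinstall "Development Tools"
```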
9. Now it's time to partition your hard drive. Click the Installation Destination menu, select your disk, and choose I will configure partitioning.
Choose Installation Destination
Installation Device Selection
10. On the next screen, choose LVM (Logical Volume Manager) as the partition layout and then click Click here to create them automatically. This option creates three system partitions using the XFS filesystem, automatically redistributing your disk space and gathering all LVs into one big volume group named centos:
- /boot – Non LVM
- /(root) – LVM
- Swap – LVM
Select LVM Partition Type
Create Partitions
11. If you are not pleased with the default partition layout created by the installer, you can add, modify, or resize partitions. When you finish, hit the Done button and Accept Changes on the Summary of Changes prompt.
Summary of Partition Changes
NOTE: For hard disks larger than 2 TB, the installer automatically converts the partition table to GPT. If you wish to use a GPT table on disks smaller than 2 TB, pass the argument inst.gpt on the installer boot command line to change the default behaviour.
12. The next step is to set the system hostname and enable networking. Click the Network & Hostname label and type your system's FQDN (Fully Qualified Domain Name) in the Hostname field, then enable the network interface by switching the top Ethernet button to ON.
If you have a functional DHCP server on your network, it will automatically configure all network settings for the enabled NIC, which should appear under your active interface.
Set System Hostname
Enable Ethernet Interface
13. If your system is destined to be a server, it is better to set a static network configuration on the Ethernet NIC: click the Configure button and add all the static interface settings as in the screenshot below. When you're finished, hit the Save button, disable and re-enable the Ethernet card by switching the button OFF and ON, and then hit Done to apply the settings and go back to the main menu.
Enter Network Settings
14. Now it's time to start the installation process: press the Begin Installation button and set a strong password for the root account.
Click on Begin Installation
Select Root Password
Enter Root Password
15. After you finish setting a strong password for the root account, move to User Creation and create your first system user. You can make this user a system administrator with root privileges via sudo by checking the Make this user administrator box. Then click Done to go back to the main menu and wait for the installation to finish.
CentOS 7 Installation Process
User Creation and Set Password
16. After the installation finishes, the installer shows a success message on screen and asks you to reboot the system before using it.
CentOS 7 Installation Complete
Configure Network Interfaces
Management Nic
/etc/sysconfig/network-scripts/ifcfg-em1
TYPE=Ethernet
BOOTPROTO=static
DEFROUTE=yes
NM_CONTROLLED=no
NAME=em1
UUID=260f5c08-6c36-4883-9f9f-2dfe89a44608
DEVICE=em1
ONBOOT=yes
DNS1=10.10.100.3
IPADDR=192.168.70.2
PREFIX=24
GATEWAY=192.168.70.254
Note: Replace "em1" in the NAME and DEVICE entries with the actual interface name; e.g., for an interface named enp1s0 the file is ifcfg-enp1s0 and the value is "enp1s0".
External NIC
vim /etc/sysconfig/network-scripts/ifcfg-em3
DEVICE=em3
ONBOOT=yes
NM_CONTROLLED=no
TYPE=OVSPort
DEVICETYPE=ovs
OVS_BRIDGE=br-ex
Note: Replace "em3" in the DEVICE entry with the actual interface name; e.g., for an interface named enp3s0 the file is ifcfg-enp3s0 and the value is "enp3s0".
External Bridge
vim /etc/sysconfig/network-scripts/ifcfg-br-ex
DEVICE=br-ex
OVSBOOTPROTO=static
DNS1=10.10.100.3
IPADDR=192.168.71.2
NETMASK=255.255.255.0
MACADDR=44:A8:42:1E:84:AD
OVS_EXTRA="set bridge $DEVICE other-config:hwaddr=$MACADDR"
NM_CONTROLLED=no
ONBOOT=yes
TYPE=OVSBridge
DEVICETYPE=ovs
Internal Nic
vim /etc/sysconfig/network-scripts/ifcfg-em4
TYPE=Ethernet
BOOTPROTO=none
DEFROUTE=yes
#IPV4_FAILURE_FATAL=no
#IPV6INIT=yes
#IPV6_AUTOCONF=yes
#IPV6_DEFROUTE=yes
#IPV6_FAILURE_FATAL=no
NAME=em4
UUID=eabc7670-cca4-4a8a-a874-0da675bf24d0
DEVICE=em4
ONBOOT=yes
NM_CONTROLLED=no
IPADDR=192.168.72.2
PREFIX=24
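The three ifcfg files above follow one pattern, so a small generator script can reduce copy-paste mistakes. This is only a sketch: write_ifcfg is a hypothetical helper, it covers just the static-IP fields used in this guide, and it writes to /tmp so nothing is touched until you review the result.

```shell
#!/bin/sh
# Sketch: generate a minimal static-IP ifcfg file (fields from this guide).
write_ifcfg() {
    iface=$1 ip=$2 prefix=$3 out=$4
    cat > "$out" <<EOF
TYPE=Ethernet
BOOTPROTO=static
NM_CONTROLLED=no
NAME=$iface
DEVICE=$iface
ONBOOT=yes
IPADDR=$ip
PREFIX=$prefix
EOF
}
# Example: the internal NIC from this guide, written to /tmp for review
write_ifcfg em4 192.168.72.2 24 /tmp/ifcfg-em4
```

After reviewing the output, copy it into /etc/sysconfig/network-scripts/ and add any extra keys (UUID, DNS1, GATEWAY) your interface needs.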
Stop NetworkManager service and disable it on boot.
# systemctl stop NetworkManager
# systemctl disable NetworkManager
# systemctl status NetworkManager
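With NetworkManager stopped and disabled, the legacy network service has to bring the ifcfg files up at boot, so it is worth confirming it is enabled (a suggestion, not part of the original steps; the service name is the stock CentOS 7 one):

```shell
# systemctl enable network
# systemctl restart network
```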
Verify network connectivity to the Internet and among the nodes before proceeding further.
Configure NTP
You must install NTP to properly synchronize time among nodes. To configure NTP follow below steps:
# yum -y install ntp
# ntpdate -u 0.centos.pool.ntp.org
# systemctl enable ntpd
# systemctl start ntpd
# systemctl status ntpd
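Once ntpd is running, its peer status can be inspected; in ntpq output, an asterisk in the first column marks the peer currently selected for synchronization:

```shell
# ntpq -p
```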
Installing the OpenStack Repo
# yum install http://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-5.noarch.rpm
# yum install http://rdo.fedorapeople.org/openstack-kilo/rdo-release-kilo.rpm
# yum upgrade
# shutdown -r 0
CentOS enables SELinux by default. Install the openstack-selinux package to automatically manage security policies for OpenStack services:
# yum install openstack-selinux
Installing Database
OpenStack services use an SQL database to store information. To install and configure the database server:
# yum install mariadb mariadb-server MySQL-python
Open the file my.cnf
#vim /etc/my.cnf
Edit the file and add the following entries:
[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
# Disabling symbolic-links is recommended to prevent assorted security risks
symbolic-links=0
# Settings user and group are ignored when systemd is used.
# If you need to run mysqld under a different user or group,
# customize your systemd unit file for mariadb according to the
# instructions in http://fedoraproject.org/wiki/Systemd
bind-address = 0.0.0.0
default-storage-engine = innodb
innodb_file_per_table
collation-server = utf8_general_ci
init-connect = 'SET NAMES utf8'
character-set-server = utf8
[mysqld_safe]
log-error=/var/log/mariadb/mariadb.log
pid-file=/var/run/mariadb/mariadb.pid
#
# include all files from the config directory
#
!includedir /etc/my.cnf.d
Edit the hosts file and update the host entry:
#vim /etc/hosts
Add the entry below, replacing the IP address and hostname as required:
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.12.12 openstacklab.payoda.com openstacklab
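Since later services resolve the controller by name, a quick check that the entry landed in the hosts file can save debugging time. A sketch, where hosts_has is a hypothetical helper:

```shell
#!/bin/sh
# Sketch: check that a hosts file contains a hostname as a whole word.
hosts_has() {
    # $1 = hosts file, $2 = hostname to look for
    grep -qw "$2" "$1"
}
hosts_has /etc/hosts localhost && echo "localhost entry present" \
    || echo "localhost entry not found"
```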
Start the database service and configure it to start when the system boots:
# systemctl enable mariadb.service
# systemctl start mariadb.service
Installing Message queue
# yum install rabbitmq-server
# systemctl enable rabbitmq-server.service
# systemctl start rabbitmq-server.service
Add the openstack user:
# rabbitmqctl add_user openstack Payoda2015
Creating user "openstack" ...
...done.
Permit configuration, write, and read access for the openstack user:
# rabbitmqctl set_permissions openstack ".*" ".*" ".*"
Setting permissions for user "openstack" in vhost "/" ...
...done.
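The new account and its permissions can be confirmed with the rabbitmqctl listing commands:

```shell
# rabbitmqctl list_users
# rabbitmqctl list_permissions
```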
Add the Identity service
Before you configure the OpenStack Identity service, you must create a database and an
administration token.
# mysqladmin -u root password 'Payoda2015'
# mysql -u root -p
Enter password: Payoda2015
Create the keystone database:
CREATE DATABASE keystone;
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'Payoda2015';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'Payoda2015';
exit
To install and configure the Identity service components
# yum install openstack-keystone httpd mod_wsgi python-openstackclient memcached python-memcached
# systemctl enable memcached.service
# systemctl start memcached.service
vim /etc/keystone/keystone.conf
[root@avxstack01 ~]# grep "^[^#;]" /etc/keystone/keystone.conf
[DEFAULT]
verbose = True
debug = True
admin_token = ADMIN
[assignment]
[auth]
[cache]
[catalog]
[credential]
[database]
connection = mysql://keystone:Payoda2015@localhost/keystone
[domain_config]
[endpoint_filter]
[endpoint_policy]
[eventlet_server]
[eventlet_server_ssl]
[federation]
[fernet_tokens]
[identity]
[identity_mapping]
[kvs]
[ldap]
[matchmaker_redis]
[matchmaker_ring]
[memcache]
servers = localhost:11211
[oauth1]
[os_inherit]
[oslo_messaging_amqp]
[oslo_messaging_qpid]
[oslo_messaging_rabbit]
[oslo_middleware]
[oslo_policy]
[paste_deploy]
[policy]
[resource]
[revoke]
driver = keystone.contrib.revoke.backends.sql.Revoke
[role]
[saml]
[signing]
[ssl]
[token]
provider = keystone.token.providers.uuid.Provider
driver = keystone.token.persistence.backends.sql.Token
[trust]
Populate the Identity service database:
# su -s /bin/sh -c "keystone-manage db_sync" keystone
To configure the Apache HTTP server
Edit the /etc/httpd/conf/httpd.conf file and configure the ServerName option to reference the controller node:
vim /etc/httpd/conf/httpd.conf
ServerName avxstack01.payoda.com:80
Create the /etc/httpd/conf.d/wsgi-keystone.conf file with the following content:
Listen 5000
Listen 35357
<VirtualHost *:5000>
WSGIDaemonProcess keystone-public processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
WSGIProcessGroup keystone-public
WSGIScriptAlias / /var/www/cgi-bin/keystone/main
WSGIApplicationGroup %{GLOBAL}
WSGIPassAuthorization On
LogLevel info
ErrorLogFormat "%{cu}t %M"
ErrorLog /var/log/httpd/keystone-error.log
CustomLog /var/log/httpd/keystone-access.log combined
</VirtualHost>
<VirtualHost *:35357>
WSGIDaemonProcess keystone-admin processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
WSGIProcessGroup keystone-admin
WSGIScriptAlias / /var/www/cgi-bin/keystone/admin
WSGIApplicationGroup %{GLOBAL}
WSGIPassAuthorization On
LogLevel info
ErrorLogFormat "%{cu}t %M"
ErrorLog /var/log/httpd/keystone-error.log
CustomLog /var/log/httpd/keystone-access.log combined
</VirtualHost>
Create the directory structure for the WSGI components:
mkdir -p /var/www/cgi-bin/keystone
Copy the WSGI components from the upstream repository into this directory:
curl "http://git.openstack.org/cgit/openstack/keystone/plain/httpd/keystone.py?h=stable/kilo" | tee /var/www/cgi-bin/keystone/main /var/www/cgi-bin/keystone/admin
Adjust ownership and permissions on this directory and the files in it:
# chown -R keystone:keystone /var/www/cgi-bin/keystone
# chmod 755 /var/www/cgi-bin/keystone/*
To finalize installation
Restart the Apache HTTP server:
# systemctl enable httpd.service
# systemctl start httpd.service
Create the service entity and API endpoint
export OS_TOKEN=ADMIN
export OS_URL=http://192.168.70.2:35357/v2.0
To create the service entity and API endpoint
Create the service entity for the Identity service:
# openstack service create --name keystone --description "OpenStack Identity" identity
# openstack endpoint create --publicurl http://192.168.70.2:5000/v2.0 --internalurl http://192.168.70.2:5000/v2.0 --adminurl http://192.168.70.2:35357/v2.0 --region RegionOne identity
Create projects, users, and roles
# openstack project create --description "Admin Project" admin
# openstack user create --password-prompt admin
User Password: ADMIN
# openstack role create admin
$ openstack role add --project admin --user admin admin
$ openstack project create --description "Service Project" service
$ openstack project create --description "Demo Project" demo
$ openstack user create --password-prompt demo
$ openstack role create user
$ openstack role add --project demo --user demo user
Unset the temporary OS_TOKEN and OS_URL environment variables:
unset OS_TOKEN
unset OS_URL
As the admin user, request an authentication token from the Identity version 2.0 API:
$ openstack --os-auth-url http://192.168.70.2:35357 --os-project-name admin --os-username admin --os-auth-type password token issue
$ openstack --os-auth-url http://192.168.70.2:35357 --os-project-domain-id default --os-user-domain-id default --os-project-name admin --os-username admin --os-auth-type password token issue
This command uses the password for the admin user.
As the admin user, list projects to verify that the admin user can execute admin-only
CLI commands
$ openstack --os-auth-url http://192.168.70.2:35357 --os-project-name admin --os-username admin --os-auth-type password project list
Provide Admin Password.
As the admin user, list users to verify that the Identity service contains the users that
you created
$ openstack --os-auth-url http://192.168.70.2:35357 --os-project-name admin --os-username admin --os-auth-type password user list
As the admin user, list roles to verify that the Identity service contains the role that
you created
$ openstack --os-auth-url http://192.168.70.2:35357 --os-project-name admin --os-username admin --os-auth-type password role list
Password:
As the demo user, request an authentication token from the Identity version 3 API:
$ openstack --os-auth-url http://192.168.70.2:5000 --os-project-domain-id default --os-user-domain-id default --os-project-name demo --os-username demo --os-auth-type password token issue
Password:
$ openstack --os-auth-url http://192.168.70.2:5000 --os-project-domain-id default --os-user-domain-id default --os-project-name demo --os-username demo --os-auth-type password user list
Note: this command fails, and that is expected: as the demo user you are attempting to list users to verify that the account cannot execute admin-only CLI commands.
Create OpenStack client environment scripts
Create client environment scripts for the admin and demo projects and users. Future portions of this guide reference these scripts to load appropriate credentials for client operations.
Create the admin-openrc.sh file and add the following content:
# vim admin-openrc.sh
export OS_PROJECT_DOMAIN_ID=default
export OS_USER_DOMAIN_ID=default
export OS_PROJECT_NAME=admin
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=ADMIN
export OS_AUTH_URL=http://192.168.70.2:35357/v3
Create the demo-openrc.sh file and add the following content:
# vim demo-openrc.sh
export OS_PROJECT_DOMAIN_ID=default
export OS_USER_DOMAIN_ID=default
export OS_PROJECT_NAME=demo
export OS_TENANT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=Payoda2015
export OS_AUTH_URL=http://192.168.70.2:5000/v3
To load client environment scripts
$ source admin-openrc.sh
$ openstack token issue
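A malformed openrc file usually surfaces later as a confusing authentication error, so a small sanity check before use can help. This is a sketch; check_openrc is a hypothetical helper that only verifies the variables this guide exports are non-empty:

```shell
#!/bin/sh
# Sketch: verify an openrc script exports the expected variables.
check_openrc() {
    ( . "$1" 2>/dev/null
      for v in OS_USERNAME OS_PASSWORD OS_AUTH_URL OS_PROJECT_NAME; do
          eval "val=\$$v"
          [ -n "$val" ] || { echo "missing: $v"; exit 1; }
      done )
}
check_openrc admin-openrc.sh || echo "admin-openrc.sh incomplete or missing"
```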
Add the Image service
To create the database, complete these steps:
mysql -u root -p
Create the glance database:
> CREATE DATABASE glance;
Grant proper access to the glance database:
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'Payoda2015';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'Payoda2015';
source admin-openrc.sh
To create the service credentials, complete these steps:
Create the glance user:
$ openstack user create --password-prompt glance
User Password:
Repeat User Password:
Add the admin role to the glance user and service project:
$ openstack role add --project service --user glance admin
Create the glance service entity:
$ openstack service create --name glance --description "OpenStack Image service" image
Create the Image service API endpoint:
$ openstack endpoint create --publicurl http://192.168.70.2:9292 --internalurl http://192.168.70.2:9292 --adminurl http://192.168.70.2:9292 --region RegionOne image
To install and configure the Image service components
Install the packages:
# yum install openstack-glance python-glance python-glanceclient
Edit the /etc/glance/glance-api.conf file and complete the following actions:
[DEFAULT]
verbose=True
debug=True
notification_driver = noop
[oslo_policy]
[database]
connection = mysql://glance:Payoda2015@192.168.70.2/glance
[oslo_concurrency]
[keystone_authtoken]
auth_uri = http://192.168.70.2:5000
auth_url = http://192.168.70.2:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = glance
password = Payoda2015
[paste_deploy]
flavor = keystone
[store_type_location_strategy]
[profiler]
[task]
[taskflow_executor]
[glance_store]
default_store=file
filesystem_store_datadir=/var/lib/glance/images/
vim /etc/glance/glance-registry.conf
[DEFAULT]
verbose=True
debug=True
notification_driver = messagingv2
rpc_backend = rabbit
rabbit_host = 192.168.70.2
rabbit_userid = openstack
rabbit_password = Payoda2015
log_file=/storage/glance/log/registry.log
notification_driver = noop
[oslo_policy]
[database]
connection=mysql://glance:Payoda2015@192.168.70.2/glance
[keystone_authtoken]
auth_uri = http://192.168.70.2:5000
auth_url = http://192.168.70.2:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = glance
password = Payoda2015
[paste_deploy]
flavor = keystone
[profiler]
Populate the Image service database:
# su -s /bin/sh -c "glance-manage db_sync" glance
To finalize installation
# systemctl enable openstack-glance-api.service openstack-glance-registry.service
# systemctl start openstack-glance-api.service openstack-glance-registry.service
To Verify
In each client environment script, configure the Image service client to use API version 2.0:
$ echo "export OS_IMAGE_API_VERSION=2" | tee -a admin-openrc.sh demo-openrc.sh
Source the admin credentials to gain access to admin-only CLI commands:
$ source admin-openrc.sh
Create a temporary local directory:
$ mkdir /tmp/images
Download the source image into it:
$ wget -P /tmp/images http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img
Upload the image to the Image service using the QCOW2 disk format, bare container format, and public visibility so all projects can access it:
$ glance image-create --name "cirros-0.3.4-x86_64" --file /tmp/images/cirros-0.3.4-x86_64-disk.img --disk-format qcow2 --container-format bare --visibility public --progress
Confirm upload of the image and validate attributes:
$ glance image-list
Remove the temporary local directory and source image:
$ rm -r /tmp/images
Add the Compute service
Use the database access client to connect to the database server as the root user:
$ mysql -u root -p
> CREATE DATABASE nova;
> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'Payoda2015';
> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'Payoda2015';
> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'avxstack01.payoda.com' IDENTIFIED BY 'Payoda2015';
exit
To create the service credentials, complete these steps:
Create the nova user:
$ openstack user create --password-prompt nova
User Password:
Repeat User Password:
Add the admin role to the nova user:
$ openstack role add --project service --user nova admin
Create the nova service entity:
$ openstack service create --name nova --description "OpenStack Compute" compute
Create the Compute service API endpoint:
$ openstack endpoint create --publicurl http://192.168.70.2:8774/v2/%\(tenant_id\)s --internalurl http://192.168.70.2:8774/v2/%\(tenant_id\)s --adminurl http://192.168.70.2:8774/v2/%\(tenant_id\)s --region RegionOne compute
Install the packages:
# yum install openstack-nova-api openstack-nova-cert openstack-nova-conductor openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler python-novaclient
Edit the /etc/nova/nova.conf file and complete the following actions:
vim /etc/nova/nova.conf
[DEFAULT]
rpc_backend = rabbit
auth_strategy = keystone
my_ip = 192.168.70.2
vnc_enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = 192.168.70.2
novncproxy_base_url = http://192.168.70.2:6080/vnc_auto.html
verbose = True
network_api_class = nova.network.neutronv2.api.API
security_group_api = neutron
linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver = nova.virt.firewall.NoopFirewallDriver
log_dir=/var/log/nova
[api_database]
[barbican]
[cells]
[cinder]
[conductor]
[database]
connection = mysql://nova:Payoda2015@192.168.70.2/nova
[ephemeral_storage_encryption]
[glance]
host = 192.168.70.2
[guestfs]
[hyperv]
[image_file_url]
[ironic]
[keymgr]
[keystone_authtoken]
auth_uri = http://192.168.70.2:5000
auth_url = http://192.168.70.2:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = nova
password = Payoda2015
[libvirt]
[metrics]
[neutron]
url = http://192.168.70.2:9696
auth_strategy = keystone
admin_auth_url = http://192.168.70.2:35357/v2.0
admin_tenant_name = service
admin_username = neutron
admin_password = Payoda2015
service_metadata_proxy=True
metadata_proxy_shared_secret=Payoda2015
[osapi_v3]
[rdp]
[serial_console]
[spice]
[ssl]
[trusted_computing]
[upgrade_levels]
[vmware]
[workarounds]
[xenserver]
[zookeeper]
[matchmaker_redis]
[matchmaker_ring]
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
[oslo_messaging_amqp]
[oslo_messaging_qpid]
[oslo_messaging_rabbit]
rabbit_host = 192.168.70.2
rabbit_userid = openstack
rabbit_password = Payoda2015
Populate the Compute database:
# su -s /bin/sh -c "nova-manage db sync" nova
Start the Compute services and configure them to start when the system boots:
# systemctl enable openstack-nova-api.service openstack-nova-cert.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
# systemctl start openstack-nova-api.service openstack-nova-cert.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
Install the packages:
# yum install openstack-nova-compute sysfsutils
Edit the /etc/nova/nova.conf file and complete the following actions:
?
To finalize installation
Determine whether your compute node supports hardware acceleration for virtual machines:
$ egrep -c '(vmx|svm)' /proc/cpuinfo
If this command returns a value of one or greater, your compute node supports hardware acceleration which typically requires no additional configuration.
If this command returns a value of zero, your compute node does not support hardware acceleration and you must configure libvirt to use QEMU instead of KVM.
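The check above can be folded into a small helper that prints the value to use for virt_type in the [libvirt] section of /etc/nova/nova.conf on the compute node. A sketch; virt_type_for is a hypothetical helper:

```shell
#!/bin/sh
# Sketch: pick the nova.conf [libvirt] virt_type based on CPU flags.
# Pass /proc/cpuinfo (or any cpuinfo-style file) as the argument.
virt_type_for() {
    count=$(grep -Ec '(vmx|svm)' "$1" 2>/dev/null || true)
    if [ "${count:-0}" -ge 1 ]; then
        echo kvm    # hardware acceleration available
    else
        echo qemu   # no vmx/svm flags: fall back to software emulation
    fi
}
echo "virt_type = $(virt_type_for /proc/cpuinfo)"
```

If it prints qemu, set virt_type = qemu in the [libvirt] section; otherwise the default KVM path applies.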
Start the Compute service, including its dependencies, and configure them to start automatically when the system boots:
# systemctl enable libvirtd.service openstack-nova-compute.service
# systemctl start libvirtd.service openstack-nova-compute.service
Source the admin credentials to gain access to admin-only CLI commands:
$ source admin-openrc.sh
List service components to verify successful launch and registration of each process:
$ nova service-list
List API endpoints in the Identity service to verify connectivity with the Identity service:
$ nova endpoints
Add a networking component
To create the database, complete these steps:
$ mysql -u root -p
Create the neutron database:
CREATE DATABASE neutron;
Grant proper access to the neutron database:
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'Payoda2015';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'Payoda2015';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'openstacklab.payoda.com' IDENTIFIED BY 'Payoda2015';
Source the admin credentials to gain access to admin-only CLI commands:
$ source admin-openrc.sh
To create the service credentials, complete these steps:
Create the neutron user:
$ openstack user create --password-prompt neutron
User Password:
Repeat User Password:
Add the admin role to the neutron user:
$ openstack role add --project service --user neutron admin
$ openstack service create --name neutron --description "OpenStack Networking" network
Create the Networking service API endpoint:
$ openstack endpoint create --publicurl http://192.168.70.2:9696 --adminurl http://192.168.70.2:9696 --internalurl http://192.168.70.2:9696 --region RegionOne network
Install the Networking components
# yum install openstack-neutron openstack-neutron-ml2 python-neutronclient which
Edit the /etc/neutron/neutron.conf file and complete the following actions:
# vim /etc/neutron/neutron.conf
[DEFAULT]
verbose = True
debug = True
rpc_backend = rabbit
auth_strategy = keystone
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = True
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True
nova_url = http://192.168.70.2:8774/v2
[nova]
auth_url = http://192.168.70.2:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
region_name = RegionOne
project_name = service
username = nova
password = Payoda2015
[matchmaker_redis]
[matchmaker_ring]
[quotas]
[agent]
[keystone_authtoken]
auth_uri = http://192.168.70.2:5000
auth_url = http://192.168.70.2:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = neutron
password = Payoda2015
[database]
connection = mysql://neutron:Payoda2015@192.168.70.2/neutron
[oslo_concurrency]
lock_path = $state_path/lock
[oslo_policy]
[oslo_messaging_amqp]
[oslo_messaging_qpid]
[oslo_messaging_rabbit]
rabbit_host = 192.168.70.2
rabbit_userid = openstack
rabbit_password = Payoda2015
To configure the Modular Layer 2 (ML2) plug-in
# vim /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers = flat,gre
tenant_network_types = gre
mechanism_drivers = openvswitch
[ml2_type_flat]
flat_networks = external
[ml2_type_vlan]
[ml2_type_gre]
tunnel_id_ranges = 300:400
[ml2_type_vxlan]
[securitygroup]
enable_security_group = True
enable_ipset = True
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
[ovs]
local_ip = 192.168.70.2
bridge_mappings = external:br-ex
[agent]
tunnel_types = gre
To finalize installation
The Networking service initialization scripts expect a symbolic link /etc/neutron/plugin.ini pointing to the ML2 plug-in configuration file, /etc/neutron/plugins/ml2/ml2_conf.ini. If this symbolic link does not exist, create it using the following command:
# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
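Whether the link already existed or was just created, a quick verification avoids starting neutron against the wrong file. A sketch; check_link is a hypothetical helper:

```shell
#!/bin/sh
# Sketch: verify that a symlink resolves to the expected target.
check_link() {
    # $1 = symlink path, $2 = expected target
    [ "$(readlink "$1")" = "$2" ]
}
check_link /etc/neutron/plugin.ini /etc/neutron/plugins/ml2/ml2_conf.ini \
    || echo "plugin.ini symlink is missing or points elsewhere"
```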
Populate the database:
# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
Restart the Compute services:
# systemctl restart openstack-nova-api.service openstack-nova-scheduler.service openstack-nova-conductor.service
Start the Networking service and configure it to start when the system boots:
# systemctl enable neutron-server.service
# systemctl start neutron-server.service
Before you install and configure OpenStack Networking, you must configure certain kernel
networking parameters.
1. Edit the /etc/sysctl.conf file to contain the following parameters:
net.ipv4.ip_forward=1
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0
2. Implement the changes:
# sysctl -p
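After sysctl -p, the live values can be compared against what this guide expects; the helper below (check_sysctl, hypothetical) prints a line for each parameter that does not match:

```shell
#!/bin/sh
# Sketch: report any of the three kernel parameters from this guide that
# do not hold the expected value.
check_sysctl() {
    for kv in net.ipv4.ip_forward=1 \
              net.ipv4.conf.all.rp_filter=0 \
              net.ipv4.conf.default.rp_filter=0; do
        key=${kv%%=*}
        want=${kv##*=}
        got=$(sysctl -n "$key" 2>/dev/null || true)
        [ "$got" = "$want" ] || echo "MISMATCH: $key is '$got', want $want"
    done
}
check_sysctl
```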
To install the Networking components
# yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-openvswitch
To configure the Layer-3 (L3) agent
Edit the /etc/neutron/l3_agent.ini file and complete the following actions:
[DEFAULT]
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
external_network_bridge =
router_delete_namespaces = True
verbose = True
Verify operation
Source the admin credentials to gain access to admin-only CLI commands:
$ source admin-openrc.sh
List loaded extensions to verify successful launch of the neutron-server process:
$ neutron ext-list
Edit the /etc/neutron/dhcp_agent.ini file and complete the following actions:
[DEFAULT]
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
dhcp_delete_namespaces = True
verbose = True
dnsmasq_config_file = /etc/neutron/dnsmasq-neutron.conf
Edit the /etc/neutron/metadata_agent.ini file and complete the following actions:
[DEFAULT]
auth_uri = http://192.168.70.2:5000
auth_url = http://192.168.70.2:35357
auth_region = RegionOne
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = neutron
password = Payoda2015
nova_metadata_ip = 192.168.70.2
metadata_proxy_shared_secret = Payoda2015
verbose = True
Start the OVS service and configure it to start when the system boots:
# systemctl enable openvswitch.service
# systemctl start openvswitch.service
Add the integration and external bridges:
# ovs-vsctl add-br br-int
# ovs-vsctl add-br br-ex
Add a port to the external bridge that connects to the second physical interface (em3 on this host):
# ovs-vsctl add-port br-ex em3
Verify the configuration:
# ovs-vsctl show
To finalize the installation
# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
# cp /usr/lib/systemd/system/neutron-openvswitch-agent.service /usr/lib/systemd/system/neutron-openvswitch-agent.service.orig
# sed -i 's,plugins/openvswitch/ovs_neutron_plugin.ini,plugin.ini,g' /usr/lib/systemd/system/neutron-openvswitch-agent.service
Start the Networking services and configure them to start when the system boots:
# systemctl enable neutron-openvswitch-agent.service neutron-l3-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service neutron-ovs-cleanup.service
# systemctl start neutron-openvswitch-agent.service neutron-l3-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
Verify operation
Source the admin credentials to gain access to admin-only CLI commands:
$ source admin-openrc.sh
List agents to verify successful launch of the neutron agents:
$ neutron agent-list
Edit the /etc/sysctl.conf file to contain the following parameters:
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
Implement the changes:
# sysctl -p
External network
After all components were installed and configured, we created the initial networks needed to launch an instance in the OpenStack environment.
Create the external network; this is done as the administrator:
source admin-openrc.sh
Now create the external provider network:
neutron net-create ext-net --router:external --provider:physical_network external --provider:network_type flat
Create a subnet for the external network:
neutron subnet-create ext-net 192.168.70.0/24 --name ext-subnet --allocation-pool start=192.168.70.5,end=192.168.70.250 --disable-dhcp --gateway 192.168.70.254
Note: update the start and end values to match the IP addresses available on your network.
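When adjusting the pool boundaries, it helps to know how many addresses the range actually provides. A small hypothetical helper (IPv4 only, inclusive range; not part of the install procedure):

```shell
# Convert a dotted-quad IPv4 address to an integer.
ip2int() {
  IFS=. read -r a b c d <<EOF
$1
EOF
  echo $(( (a << 24) + (b << 16) + (c << 8) + d ))
}
# Number of addresses in an inclusive start..end allocation pool.
pool_size() {
  echo $(( $(ip2int "$2") - $(ip2int "$1") + 1 ))
}
pool_size 192.168.70.5 192.168.70.250   # the pool above provides 246 addresses
```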
Now we will create the tenant network and a subnet within it. First, source the credentials of the demo tenant:
source demo-openrc.sh
Create a demo network within this tenant, then create a subnet inside it:
neutron net-create demo-net
neutron subnet-create demo-net 192.168.72.0/24 --name demo-subnet --gateway 192.168.72.1
Finally, create a router:
neutron router-create demo-router
Attach the router to the demo-subnet within the tenant network, and set the external network as its gateway:
neutron router-interface-add demo-router demo-subnet
neutron router-gateway-set demo-router ext-net
Launch Instance
Before launching an instance, create a key pair so the instance can be accessed via SSH, and add port 22 to the default security group; otherwise the instance will not be reachable.
# Generate a key pair to access the instance
nova keypair-add demo-key > demo-key.pem
nova keypair-list
# Get the network ID needed to launch the instance
neutron net-list
# Add a rule to the default security group to allow SSH access
nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
# Launch the instance
nova boot --flavor m1.tiny --image cirros-0.3.4-x86_64 --nic net-id=ab3201d5-ba9f-4210-a61d-0777f5a0e48b --security-group default --key-name demo-key demo-instance1
# Verify the status of the instance
nova list
# Create a floating IP on the external network
neutron floatingip-create ext-net
# Attach the floating IP to the instance
nova floating-ip-associate demo-instance1 192.168.71.10
# Verify the floating IP
nova list
# Access the instance using the SSH key
chmod 600 demo-key.pem
ssh -i demo-key.pem cirros@192.168.71.10
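A freshly booted instance may take a minute before SSH answers. The sketch below waits for a TCP port using bash's /dev/tcp (so run it with bash); the host, port, and retry count are arguments, and the example IP is the floating IP assigned above.

```shell
# Poll a TCP port until it answers or the retry budget is exhausted.
wait_for_port() {
  host=$1; port=$2; tries=${3:-30}
  i=0
  while [ "$i" -lt "$tries" ]; do
    if (: < "/dev/tcp/$host/$port") 2>/dev/null; then
      echo "reachable"
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  echo "unreachable"
  return 1
}
# Example: wait_for_port 192.168.71.10 22 && ssh -i demo-key.pem cirros@192.168.71.10
```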
IPtables
/sbin/iptables -I RH-Firewall-1-INPUT -p tcp -m tcp --dport 80 -j ACCEPT
sudo /sbin/iptables -A INPUT -p tcp --dport http -j ACCEPT
To install the dashboard components
Install the packages:
# yum install openstack-dashboard httpd mod_wsgi memcached python-memcached
To configure the dashboard
Edit the /etc/openstack-dashboard/local_settings file and complete the following actions:
Configure the dashboard to use OpenStack services on the controller node:
OPENSTACK_HOST = "controller"
Allow all hosts to access the dashboard:
ALLOWED_HOSTS = ['*']
Configure the memcached session storage service:
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': '127.0.0.1:11211',
    }
}
Note: replace "controller" with the management IP address.
Configure user as the default role for users that you create via the dashboard:
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
On RHEL and CentOS, configure SELinux to permit the web server to connect to OpenStack services:
# setsebool -P httpd_can_network_connect on
Due to a packaging bug, the dashboard CSS fails to load properly. Run the following
command to resolve this issue:
# chown -R apache:apache /usr/share/openstack-dashboard/static
Start the web server and session storage service and configure them to start when the
system boots:
# systemctl enable httpd.service memcached.service
# systemctl start httpd.service memcached.service
Verify operation
This section describes how to verify operation of the dashboard.
1. Access the dashboard using a web browser: http://controller/dashboard
2. Authenticate using the admin or demo user credentials.
Changing paths
To relocate service logs and state onto the /storage mount, update the following settings:
/etc/keystone/keystone.conf:
log_dir = /storage/keystone/log
/etc/glance/glance-api.conf:
log_file=/storage/glance/log/api.log
filesystem_store_datadir=/storage/glance/images/
/etc/glance/glance-registry.conf:
log_file=/storage/glance/log/registry.log
/etc/nova/nova.conf (this also changes the image location):
state_path=/storage/nova/var/lib/nova
log_dir=/storage/nova/var/log
/etc/neutron/neutron.conf:
log_dir=/storage/neutron/log
Restart the services at the end.
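The relocation above assumes the target directories exist and are writable by each service account. A sketch of that preparatory step (the service names are the stock RDO accounts; the chown line is commented out so the sketch runs without them):

```shell
# Create the relocated log/state directories under a base path such as /storage.
make_storage_dirs() {
  base=$1
  for svc in keystone glance nova neutron; do
    mkdir -p "$base/$svc/log"
    # chown -R "$svc:$svc" "$base/$svc"   # enable on the real host
  done
  # Extra paths referenced by the glance and nova settings above.
  mkdir -p "$base/glance/images" "$base/nova/var/lib/nova" "$base/nova/var/log"
}
```

Usage: `make_storage_dirs /storage`, then restart the services.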
Reference Document Links:
Commands used while installation
1 vim /etc/hostname
2 vim /etc/sysconfig/network-scripts/ifcfg-em1
3 yum install net-tools
4 ps -ef | grep yum
5 kill -9 13152
6 yum install net-tools
7 ps -ef | grep yum
8 kill -9 11622
9 yum install net-tools
10 ps -ef | grep yum
11 yum install net-tools
12 ifconfig
13 vim /etc/hosts
14 yum install http://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-5.noarch.rpm
15 yum install http://rdo.fedorapeople.org/openstack-kilo/rdo-release-kilo.rpm
16 yum upgrade
17 vim /etc/sysconfig/network-scripts/ifcfg-em1
18 vim /etc/sysconfig/network-scripts/ifcfg-em3
19 vim /etc/sysconfig/network-scripts/ifcfg-em4
20 systemctl disable NetworkManager.service
21 yum install openstack-selinux
22 yum -y install ntp
23 ntpdate -u 0.centos.pool.ntp.org
24 systemctl enable ntpd
25 systemctl start ntpd
26 systemctl status ntpd
27 systemctl status NetworkManager
28 reboot
29 traceroute google.com
30 ping google.com
31 fdisk -l
32 mkdir /storage
33 chmod -R 777 /storage/
34 vim /etc/fstab
35 mount -a
36 df h-h
37 df -h
38 cd /storage/
39 mkdir glance
40 cd glance/
41 tail -f /var/log/nova/nova-compute.log
42 tail -f /var/log/nova/nova-manage.log
43 tail -f /var/log/nova/nova-scheduler.log
44 tail -f /var/log/nova/*.log
45 tail -f /var/log/neutron/*.log
46 tail -f /var/log/libvirt/qemu/*
47 vim /etc/selinux/config
48 reboot
49 tcpdump -i em1
50 tcpdump -i em1 port 80
51 telnet 192.168.70.2 80
52 ping 192.168.180.1
53 yum install mariadb mariadb-server MySQL-python
54 cp /etc/my.cnf /etc/my.cnf.Orig
55 vim /etc/my.cnf
56 systemctl enable mariadb.service
57 systemctl start mariadb.service
58 mysql_secure_installation
59 yum install rabbitmq-server
60 systemctl enable rabbitmq-server.service
61 systemctl start rabbitmq-server.service
62 rabbitmqctl add_user openstack Payoda2015
63 rabbitmqctl set_permissions openstack ".*" ".*" ".*"
64 mysql -u root -pPayoda2015
65 yum install openstack-keystone httpd mod_wsgi python-openstackclient memcached python-memcached
66 systemctl enable memcached.service
67 systemctl start memcached.service
68 vim /etc/keystone/keystone.conf
69 cp /etc/keystone/keystone.conf /etc/keystone/keystone.conf.bak.Orig07082015
70 vim /etc/keystone/keystone.conf
71 grep -v “^#” /etc/keystone/keystone.conf
72 su -s /bin/sh -c "keystone-manage db_sync" keystone
73 cp /etc/httpd/conf/httpd.conf /etc/httpd/conf/httpd.conf.Orig.Bk.07082015
74 vim /etc/httpd/conf/httpd.conf
75 cp /etc/httpd/conf.d/wsgi-keystone.conf /etc/httpd/conf.d/wsgi-keystone.conf.Orig.Bk.07082915
76 vim /etc/httpd/conf.d/wsgi-keystone.conf
77 mkdir -p /var/www/cgi-bin/keystone
78 curl http://git.openstack.org/cgit/openstack/keystone/plain/httpd/keystone.py?h=stable/kilo | tee /var/www/cgi-bin/keystone/main /var/www/cgi-bin/keystone/admin
79 chown -R keystone:keystone /var/www/cgi-bin/keystone
80 chmod 755 /var/www/cgi-bin/keystone/*
81 systemctl enable httpd.service
82 systemctl start httpd.service
83 export OS_TOKEN=ADMIN
84 export OS_URL=http://192.168.70.2:35357/v2.0
85 openstack service create --name keystone --description "OpenStack Identity" identity
86 openstack endpoint create --publicurl http://192.168.70.2:5000/v2.0 --internalurl http://192.168.70.2:5000/v2.0 --adminurl http://192.168.70.2:35357/v2.0 --region RegionOne identity
87 openstack project create --description "Admin Project" admin
88 openstack project create --description "Service Project" service
89 openstack user create --password-prompt admin
90 openstack role create admin
91 openstack role add --project admin --user admin admin
92 openstack project create --description "Demo Project" demo
93 openstack user create --password-prompt demo
94 openstack role create user
95 openstack role add --project demo --user demo user
96 unset OS_TOKEN OS_URL
97 openstack --os-auth-url http://192.168.70.2:35357 --os-project-name admin --os-username admin --os-auth-type password token issue
98 openstack --os-auth-url http://192.168.70.2:35357 --os-project-domain-id default --os-user-domain-id default --os-project-name admin --os-username admin --os-auth-type password token issue
99 openstack --os-auth-url http://192.168.70.2:35357 --os-project-name admin --os-username admin --os-auth-type password project list
100 openstack --os-auth-url http://192.168.70.2:35357 --os-project-name admin --os-username admin --os-auth-type password user list
101 openstack --os-auth-url http://192.168.70.2:35357 --os-project-name admin --os-username admin --os-auth-type password role list
102 openstack --os-auth-url http://192.168.70.2:5000 --os-project-domain-id default --os-user-domain-id default --os-project-name demo --os-username demo --os-auth-type password token issue
103 openstack --os-auth-url http://192.168.70.2:5000 --os-project-domain-id default --os-user-domain-id default --os-project-name demo --os-username demo --os-auth-type password user list
104 vim admin-openrc.sh
105 openstack user list
106 source admin-openrc.sh
107 openstack user list
108 vim demo-openrc.sh
109 openstack token issue
110 mysql -u root -pPayoda2015
111 source admin-openrc.sh
112 openstack user create --password-prompt glance
113 openstack role add --project service --user glance admin
114 openstack service create --name glance --description "OpenStack Image service" image
115 openstack endpoint create --publicurl http://192.168.70.2:9292 --internalurl http://192.168.70.2:9292 --adminurl http://192.168.70.2:9292 --region RegionOne image
116 yum install openstack-glance python-glance python-glanceclient
117 cp /etc/glance/glance-api.conf /etc/glance/glance-api.conf.Orig.Bk.07082015
118 vim /etc/glance/glance-api.conf
119 cp /etc/glance/glance-registry.conf /etc/glance/glance-registry.conf.Orig.Bk.07082915
120 vim /etc/glance/glance-registry.conf
121 su -s /bin/sh -c "glance-manage db_sync" glance
122 systemctl enable openstack-glance-api.service openstack-glance-registry.service
123 systemctl start openstack-glance-api.service openstack-glance-registry.service
124 echo "export OS_IMAGE_API_VERSION=2" | tee -a admin-openrc.sh demo-openrc.sh
125 source admin-openrc.sh
126 mkdir ~/images
127 cd ~/images
128 curl -O http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img
129 glance image-create --name "cirros-0.3.4-x86_64" --file cirros-0.3.4-x86_64-disk.img --disk-format qcow2 --container-format bare --visibility public --progress
130 glance image-list
131 rm -r /tmp/images
132 rm -rf ~/images
133 glance image-list
134 mkdir ~/images
135 cd ~/images
136 curl -O http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img
137 glance image-list
138 cd
139 mysql -u root -pPayoda2015
140 source admin-openrc.sh
141 openstack user create --password-prompt nova
142 openstack role add --project service --user nova admin
143 openstack service create --name nova --description "OpenStack Compute" compute
144 openstack endpoint create --publicurl http://192.168.70.2:8774/v2/%\(tenant_id\)s --internalurl http://192.168.70.2:8774/v2/%\(tenant_id\)s --adminurl http://192.168.70.2:8774/v2/%\(tenant_id\)s --region RegionOne compute
145 yum install openstack-nova-api openstack-nova-cert openstack-nova-conductor openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler python-novaclient
146 cp /etc/nova/nova.conf /etc/nova/nova.conf.Orig.BK.07082015
147 vim /etc/nova/nova.conf
148 vim /etc/nova/nova.conf
149 vim /etc/nova/nova.conf
150 su -s /bin/sh -c "nova-manage db sync" nova
151 systemctl enable openstack-nova-api.service openstack-nova-cert.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
152 systemctl start openstack-nova-api.service openstack-nova-cert.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
153 yum install openstack-nova-compute sysfsutils
154 vim /etc/nova/nova.conf
155 su -s /bin/sh -c "nova-manage db sync" nova
156 systemctl restart openstack-nova-api.service openstack-nova-cert.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
157 egrep -c '(vmx|svm)' /proc/cpuinfo
158 vim /etc/nova/nova.conf
159 systemctl enable libvirtd.service openstack-nova-compute.service
160 systemctl start libvirtd.service openstack-nova-compute.service
161 source admin-openrc.sh
162 nova service-list
163 nova endpoints
164 nova image-list
165 mysql -u root -pPayoda2015
166 source admin-openrc.sh
167 openstack user create --password-prompt neutron
168 openstack role add --project service --user neutron admin
169 openstack service create --name neutron --description "OpenStack Networking" network
170 openstack endpoint create --publicurl http://192.168.70.2:9696 --adminurl http://192.168.70.2:9696 --internalurl http://192.168.70.2:9696 --region RegionOne network
171 yum install openstack-neutron openstack-neutron-ml2 python-neutronclient
172 cp /etc/neutron/neutron.conf /etc/neutron/neutron.conf.Orig.Bak.07082015
173 vim /etc/neutron/neutron.conf
174 cp /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugins/ml2/ml2_conf.ini.Orig.BK.07082015
175 vim /etc/neutron/plugins/ml2/ml2_conf.ini
176 nova list
177 vim /etc/nova/nova.conf
178 nova image-list
179 nova endpoints
180 vim /etc/nova/nova.conf
181 ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
182 su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
183 systemctl restart openstack-nova-api.service openstack-nova-scheduler.service openstack-nova-conductor.service
184 systemctl restart openstack-nova-api.service openstack-nova-cert.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
185 systemctl enable neutron-server.service
186 systemctl start neutron-server.service
187 source admin-openrc.sh
188 neutron ext-list
189 vim /etc/sysctl.conf
190 sysctl -p
191 yum install openstack-neutron openstack-neutron-ml2 openstack-neutronopenvswitch
192 vim /etc/neutron/neutron.conf
193 vim /etc/neu
194 vim /etc/neutron/neutron.conf
195 vim /etc/neutron/plugins/ml2/ml2_conf.ini
196 cp /etc/neutron/l3_agent.ini /etc/neutron/l3_agent.ini.Orig.Bak.07082015
197 vim /etc/neutron/l3_agent.ini
198 cp /etc/neutron/dhcp_agent.ini /etc/neutron/dhcp_agent.ini.Orig.BK.07082015
199 vim /etc/neutron/dhcp_agent.ini
200 cp /etc/neutron/dnsmasq-neutron.conf /etc/neutron/dnsmasq-neutron.conf.Orig.Bak.07082015
201 vim /etc/neutron/dnsmasq-neutron.conf
202 pkill dnsmasq
203 cp /etc/neutron/metadata_agent.ini /etc/neutron/metadata_agent.ini.Orig.Bak.07082015
204 vim /etc/neutron/metadata_agent.ini
205 vim /etc/nova/nova.conf
206 systemctl restart openstack-nova-api.service
207 systemctl restart openstack-nova-api.service openstack-nova-cert.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
208 su -s /bin/sh -c "nova-manage db sync" nova
209 sudo systemctl enable openvswitch.service
210 yum install openstack-neutron openstack-neutron-ml2 openstack-neutronopenvswitch
211 yum search openstack-neutron-openvswitch
212 yum install openstack-neutron-openvswitch
213 systemctl enable openvswitch.service
214 systemctl start openvswitch.service
215 ovs-vsctl add-br br-ex
216 ovs-vsctl list-br br-int
217 ovs-vsctl list-br
218 ovs-vsctl add-port br-ex em3
219 ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
220 cp /usr/lib/systemd/system/neutron-openvswitch-agent.service /usr/lib/systemd/system/neutron-openvswitch-agent.service.orig
221 sed -i 's,plugins/openvswitch/ovs_neutron_plugin.ini,plugin.ini,g' /usr/lib/systemd/system/neutron-openvswitch-agent.service
222 systemctl enable neutron-openvswitch-agent.service neutron-l3-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service neutron-ovs-cleanup.service
223 systemctl start neutron-openvswitch-agent.service neutron-l3-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
224 source admin-openrc.sh
225 neutron agent-list
226 vim /etc/sysctl.conf
227 sysctl -p
228 yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-openvswitch
229 vim /etc/nova/nova.conf
230 neutron net-create ext-net --router:external --provider:physical_network external --provider:network_type flat
231 neutron subnet-create ext-net 192.168.71.0/24 --name ext-subnet --allocation-pool start=192.168.71.50,end=192.168.71.60 --disable-dhcp --gateway 192.168.71.254
232 source demo-openrc.sh
233 neutron net-create demo-net
234 neutron subnet-create demo-net 192.168.180.0/24 --name demo-subnet --gateway 192.168.180.254
235 neutron router-create demo-router
236 neutron router-interface-add demo-router demo-subnet
237 neutron router-gateway-set demo-router ext-net
238 vim /etc/nova/nova.conf
239 systemctl restart openstack-nova-api.service openstack-nova-cert.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
240 nova list
241 neutron agent-list
242 yum install openstack-nova-network openstack-nova-api
243 vim /etc/nova/nova.conf
244 systemctl restart openstack-nova-api.service openstack-nova-cert.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
245 nova list
246 vim /etc/nova/nova.conf
247 yum install openstack-dashboard httpd mod_wsgi memcached pythonmemcached
248 vim /etc/openstack-dashboard/local_settings
249 setsebool -P httpd_can_network_connect on
250 chown -R apache:apache /usr/share/openstack-dashboard/static
251 systemctl enable httpd.service memcached.service
252 systemctl start httpd.service memcached.service
253 iptables -L
254 netstat -tunlp | grep 80
255 systemctl restart httpd.service memcached.service
256 netstat -tunlp | grep http
257 systemctl status httpd.service
258 vim /etc/openstack-dashboard/local_settings
259 systemctl restart httpd.service memcached.service
260 systemctl status httpd.service
261 ps -ef | grep httpd
262 netstat -tunlp | grep http
263 vim /etc/httpd/conf/httpd.conf
264 systemctl restart httpd.service memcached.service
265 netstat -tunlp | grep http
266 vim /etc/httpd/conf/httpd.conf
267 systemctl status httpd.service
268 vim /etc/httpd/conf/httpd.conf
269 tcpdump -i em1 80
270 tcpdump -i em1 port 80
271 vim /etc/httpd/conf/httpd.conf
272 systemctl restart httpd.service memcached.service
273 netstat -tunlp | grep http
274 service iptables stop
275 /bin/systemctl stop iptables.service
276 systemctl restart httpd.service memcached.service
277 iptables -L
278 /bin/systemctl status iptables.service
279 sestatus
280 setenforce 0
281 sestatus
282 /sbin/iptables -I RH-Firewall-1-INPUT -p tcp -m tcp --dport 80 -j ACCEPT
283 sudo /sbin/iptables -A INPUT -p tcp --dport http -j ACCEPT
284 /sbin/iptables-save > /tmp/fw
285 sudo /sbin/chkconfig iptables off
286 /bin/systemctl start iptables.service
287 iptables -L
288 /sbin/iptables-restore < /tmp/fw
289 iptables -L
290 /bin/systemctl restart iptables.service
291 iptables -L
292 /sbin/iptables-restore < /tmp/fw
293 /sbin/iptables-save
294 sudo /sbin/iptables -A IN_public_allow -p tcp -m tcp --dport 80 -m conntrack --ctstate NEW -j ACCEPT
295 /sbin/iptables-save
296 tail -f /var/log/nova/nova-compute.log
297 source demo-openrc.sh
298 nova keypair-add demo-key
299 nova keypair-list
300 nova flavor-list
301 nova image-list
302 neutron net-list
303 nova secgroup-list
304 nova boot --flavor m1.tiny --image cirros-0.3.4-x86_64 --nic netid=dde5238c-2d92-4de1-b64c-33f944ff9398 --security-group default --key-name demo-key demo-instance1
305 nova boot --flavor m1.tiny --image cirros-0.3.4-x86_64 --nic dde5238c-2d92-4de1-b64c-33f944ff9398 --security-group default --key-name demo-key demo-instance1
306 nova boot --flavor m1.tiny --image cirros-0.3.4-x86_64 --nic demo-net --security-group default --key-name demo-key demo-instance1
307 nova boot --flavor m1.tiny --image cirros-0.3.4-x86_64 --nic net-id=dde5238c-2d92-4de1-b64c-33f944ff9398 --security-group default --key-name demo-key demo-instance1
308 vim /etc/nova/nova.conf
309 sestatus
310 vim /etc/selinux/config
311 systemctl restart openstack-nova-api.service openstack-nova-cert.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
312 systemctl restart neutron-openvswitch-agent.service neutron-l3-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
313 nova boot --flavor m1.tiny --image cirros-0.3.4-x86_64 --nic net-id=dde5238c-2d92-4de1-b64c-33f944ff9398 --security-group default --key-name demo-key demo-instance1
314 ls -al
315 chmod -R 777 images/
316 neutron net-create ext-net --router:external True \
317 neutron net-create ext-net --router:external True --provider:physical_network external --provider:network_type flat
318 source admin-openrc.sh
319 neutron net-create ext-net --router:external True --provider:physical_network external --provider:network_type flat
320 neutron net-create ext-net --router:external --provider:physical_network external --provider:network_type flat
321 vim /etc/libvirt/qemu.conf
322 vim /etc/libvirt/libvirtd.conf
323 reboot
324 ses
325 sestatus
326 source admin-openrc.sh
327 neutron agent-list
328 nova service-list
329 keystone service-list
330 keystone user-list
331 keystone service-list
332 systemctl disable firewalld
333 systemctl stop firewalld
334 systemctl status firewalld
335 virsh –connect qemu:///system
336 systemctl status libvirtd
337 lsmod|grep kvm
338 virt-install --os-variant=list
339 lsmod|grep kvm
340 vim /etc/nova/nova.conf
341 systemctl restart openstack-nova-api.service openstack-nova-cert.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
342 su -s /bin/sh -c "nova-manage db sync" nova
343 systemctl restart openstack-nova-api.service openstack-nova-cert.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
344 tail -f /var/log/{nova,neutron}
345 tail -f /var/log/nova/nova-scheduler.log
346 tail -f /var/log/nova/nova-scheduler.log /var/log/nova/nova-api.log /var/log/nova/nova-compute.log /var/log/nova/nova-consoleauth.log
347 keystone tenant-list
348 source admin-openrc.sh
349 keystone tenant-list
350 keystone tenant-id
351 ping 192.168.71.101
352 systemctl restart openstack-nova-api.service openstack-nova-cert.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
353 service mysql restart
354 bin/systemctl restart mysql.service
355 /bin/systemctl restart mysql.service
356 /bin/systemctl status mysql.service
357 /bin/systemctl start mysql.service
358 /bin/systemctl start mysqld.service
359 sudo systemctl start mariadb.service
360 sudo systemctl status mariadb.service
361 vim /etc/nova/nova.conf
362 nova list
363 source admin-openrc.sh
364 nova list
365 nova
366 nova service-list
367 vim /etc/libvirt/libvirtd.conf
368 service libvirtd restart
369 /bin/systemctl restart libvirtd.service
370 systemctl status libvirtd.service
371 systemctl stop libvirtd.service
372 systemctl start libvirtd.service
373 vim /etc/libvirt/libvirtd.conf
374 vim /etc/libvirt/libvirt.conf
375 vim /etc/libvirt/qemu.conf
376 systemctl restart libvirtd.service
377 systemctl restart openstack-nova-api.service openstack-nova-cert.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
378 nova service-list
379 vim /etc/libvirt/qemu.conf
380 systemctl restart libvirtd.service
381 sudo nano /etc/polkit-1/localauthority/50-local.d/50-libvirt-remote-access.pkla
382 sudo nano /etc/polkit-1/localauthority/50-local.d/50-nova.pkla
383 vim /etc/libvirt/qemu.conf
384 vim /etc/libvirt/libvirtd.conf
385 systemctl restart libvirtd.service
386 systemctl restart openstack-nova-api.service openstack-nova-cert.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
387 nova service-list
388 systemctl status libvirtd.service
389 systemctl restart openstack-nova-compute.service
390 systemctl status openstack-nova-compute.service
391 nova service-list
392 vim /etc/neutron/neutron.conf
393 nova service-list
394 vim /etc/neutron/neutron.conf
395 vim /etc/nova/nova.conf
396 systemctl restart openstack-nova-api.service openstack-nova-cert.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
397 systemctl restart neutron-openvswitch-agent.service neutron-l3-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
398 nova boot --flavor m1.tiny --image cirros-0.3.4-x86_64 --security-group default --key-name demo-key demo-instance1
399 nova boot --flavor m1.tiny --image cirros-0.3.4-x86_64 --security-group default demo-instance1
400 keystone user-role-add --user neutron --tenant service --role admin
401 yum search python-keystoneclient
402 yum install python-keystoneclient.noarch
403 nova --debug boot --flavor m1.tiny --image cirros-0.3.4-x86_64 --nic net-id=dde5238c-2d92-4de1-b64c-33f944ff9398 --security-group default --key-name demo-key demo-instance1
404 neutron net-list
405 nova --debug boot --flavor m1.tiny --image cirros-0.3.4-x86_64 --nic net-id=433bede8-aa74-4e36-a770-09c2286bd201 --security-group default --key-name demo-key demo-instance1
406 tail -f /var/log/nova/nova-api.log
407 tail -f /var/log/neutron/server.log
408 vim /var/log/neutron/server.log
409 nova-manage service-list
410 nova-manage service list
411 vim /etc/nova/nova.conf
412 systemctl restart openstack-nova-api.service openstack-nova-cert.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service openstack-nova-compute.service
413 nova-manage service list
414 su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
415 su -s /bin/sh -c "nova-manage db sync" nova
416 systemctl restart openstack-nova-api.service openstack-nova-cert.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service openstack-nova-compute.service
417 systemctl restart neutron-openvswitch-agent.service neutron-l3-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
418 nova list
419 nova delete eecfdce7-5432-4b59-b3fc-41cf07bb0867
420 vim /etc/nova/nova.conf
421 vim /etc/neutron/neutron.conf
422 su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
423 systemctl restart neutron-openvswitch-agent.service neutron-l3-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
424 neutron agent-list
425 neutron service-list
426 neutron service list
427 neutron list
428 neutron net-list
429 vim /etc/neutron/neutron.conf
430 systemctl restart neutron-openvswitch-agent.service neutron-l3-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
431 su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
432 reboot
433 nova delete demo-instance1
434 source admin-openrc.sh
435 nova delete demo-instance1
436 vim /etc/nova/nova.conf
437 vim /etc/neutron/neutron.conf
438 su -s /bin/sh -c "nova-manage db sync" nova
439 locate openstack-nova-api.service
440 updatedb
441 locate openstack-nova-api.service
442 systemctl restart openstack-nova-api.service openstack-nova-cert.service openstack-nova-compute.service openstack-nova-conductor.service openstack-nova-consoleauth.service openstack-nova-console.service openstack-nova-novncproxy.service openstack-nova-scheduler.service
443 nova service list
444 nova service-list
445 systemctl disable firewalld
446 echo "export OS_IMAGE_API_VERSION=2" | tee -a admin-openrc.sh demo-openrc.sh
447 nova service-list
448 vim /etc/nova/nova.conf
449 vim /etc/neutron/neutron.conf
450 cp /etc/neutron/neutron.conf /etc/neutron/neutron.conf.mod
451 vim /etc/neutron/neutron.conf
452 exit
453 tail -f /var/log/{nova,neutron/*}
454 reboot
455 ifconfig br-ex down
456 ifconfig em3 down ; ifconfig em3 up
457 ifconfig
458 ifconfig
459 route -n
460 vim /etc/sysconfig/network-scripts/ifcfg-em1
461 vim /etc/sysconfig/network-scripts/ifcfg-em3
462 vim /etc/sysconfig/network-scripts/ifcfg-br-ex
463 systemctl restart network
464 ifconfig
465 vim /etc/sysconfig/network-scripts/ifcfg-em3
466 vim /etc/sysconfig/network-scripts/ifcfg-br-ex
467 reboot
468 ifconfig
469 ifconfig
470 vim /etc/sysconfig/network-scripts/ifcfg-em1
471 vim /etc/sysconfig/network-scripts/ifcfg-em3
472 vim /etc/sysconfig/network-scripts/ifcfg-em4
473 iptables -L
474 systemctl status firewalld
475 systemctl stop firewalld
476 iptables -L
477 systemctl stop firewalld
478 systemctl disable firewalld
479 iptables -L
480 lcoate linuxbridge_conf.ini
481 locate linuxbridge_conf.ini
482 updatedb
483 locate linuxbridge_conf.ini
484 sudo iptables -A INPUT -j ACCEPT
485 sudo iptables -A FORWARD -j ACCEPT
486 ovs-vsctl show
487 vim /etc/sysconfig/network-scripts/ifcfg-em3
488 vim /etc/sysconfig/network-scripts/ifcfg-br-ex
489 vim /etc/sysconfig/network-scripts/ifcfg-em3
490 vim /etc/sysconfig/network-scripts/ifcfg-br-ex
491 vim /etc/sysconfig/network-scripts/ifcfg-em3
492 ifconfig
493 /etc/init.d/network restart
494 systemctl statis openstack-nova-api.service openstack-nova-cert.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service libvirtd.service openstack-nova-compute.service openstack-nova-console.service
495 systemctl status openstack-nova-api.service openstack-nova-cert.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service libvirtd.service openstack-nova-compute.service openstack-nova-console.service
496 source admin-openrc.sh
497 nova service-list
498 nova endpoints | more
499 vim /etc/glance/glance-registry.conf
500 su -s /bin/sh -c "glance-manage db_sync" glance
501 su -s /bin/sh -c "nova-manage db sync" nova
502 nova service-list
503 service glance-registry restart
504 systemctl start openstack-glance-api.service openstack-glance-registry.service
505 systemctl status openstack-glance-api.service openstack-glance-registry.service
506 glance image-list
507 vim admin-openrc.sh
508 source admin-openrc.sh
509 nova endpoints
510 vim admin-openrc.sh
511 source admin-openrc.sh
512 nova endpoints
513 vim admin-openrc.sh
514 source admin-openrc.sh
515 vim /etc/nova/nova.conf
516 ifconfig
517 date
518 glance image-list
519 source admin-openrc.sh
520 glance image-list
521 free -m
522 mysql -u root -p
523 vim /etc/nova/nova.conf
524 yum install openstack-nova-compute sysfsutils
525 vim /etc/nova/nova.conf
526 systemctl restart openstack-nova-api.service openstack-nova-cert.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service libvirtd.service openstack-nova-compute.service
527 systemctl status openstack-nova-api.service openstack-nova-cert.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service libvirtd.service openstack-nova-compute.service
528 nova service-list
529 systemctl status openstack-nova-console.service
530 systemctl enable openstack-nova-console.service
531 systemctl status openstack-nova-console.service
532 systemctl start openstack-nova-console.service
533 systemctl status openstack-nova-console.service
534 nova service-list
535 nova endpoints
536 export OS_REGION_NAME=RegionOne
537 nova endpoints
538 nova –os-region-name RegionOne endpoints
539 vim /etc/nova/nova.conf
540 systemctl restart openstack-nova-api.service openstack-nova-cert.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service libvirtd.service openstack-nova-compute.service openstack-nova-console.service
541 nova service-list
542 vim /etc/nova/nova.conf
543 systemctl restart openstack-nova-api.service openstack-nova-cert.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service libvirtd.service openstack-nova-compute.service openstack-nova-console.service
544 vim /etc/nova/nova.conf
545 systemctl restart openstack-nova-api.service openstack-nova-cert.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service libvirtd.service openstack-nova-compute.service openstack-nova-console.service
546 grep -H -r "virt_type" /etc
547 grep -H -r "virt_type" -v /etc
548 grep -H -r "virt_type" /etc
549 nova image-list
550 mysql -u root -pPayoda2015
551 openstack user list
552 openstack role list
553 openstack service list
554 openstack endpoint list
555 openstack endpoints list
556 openstack endpoint list
557 vim /etc/neutron/neutron.conf
558 vim /etc/neutron/plugins/ml2/ml2_conf.ini/etc/neutron/plugins/ml2/ml2_conf.ini
559 vim /etc/neutron/plugins/ml2/ml2_conf.ini
560 vim /etc/neutron/neutron.conf
561 vim /etc/nova/nova.conf
562 vim colorscheme
563 vim ~/.vimrc
564 locate .vimrc
565 vim ~/.vimrc
566 vim /etc/nova/nova.conf
567 grep -v "^#" /etc/nova/nova.conf
568 grep -v "^#" /etc/nova/nova.conf | more
569 vim /etc/nova/nova.conf
570 grep -v "^#" /etc/nova/nova.conf | more
571 vim /etc/nova/nova.conf
572 grep -v "^#" /etc/nova/nova.conf | more
573 vim /etc/nova/nova.conf
574 grep -v "^#" /etc/nova/nova.conf | more
575 vim /etc/nova/nova.conf
576 grep -v "^#" /etc/nova/nova.conf | more
577 su -s /bin/sh -c "nova-manage db sync" nova
578 tail -f /var/log/httpd/error_log
579 tail -f /var/log/httpd/access_log
580 tail -f /var/log/httpd/error_log
581 reboot
582 source admin-openrc.sh
583 neutron net-create ext-net --router:external --provider:physical_network external --provider:network_type flat
584 neutron subnet-create ext-net 192.168.71.0/24 --name ext-subnet --allocation-pool start=192.168.71.50,end=192.168.71.150 --disable-dhcp --gateway 192.168.71.254
585 source demo-openrc.sh
586 neutron net-create demo-net
587 neutron subnet-create demo-net 192.168.72.0/16 --name demo-subnet --gateway 192.168.72.1
588 neutron subnet-delete demo-net 192.168.72.0/16 --name demo-subnet --gateway 192.168.72.1
589 neutron subnet-delete demo-net
590 neutron subnet-delete demo-subnet
591 neutron subnet-create demo-net 192.168.72.0/24 --name demo-subnet --gateway 192.168.72.1
592 neutron router-create demo-router
593 neutron router-interface-add demo-router demo-subnet
594 neutron router-gateway-set demo-router ext-net
595 nova keypair-list
596 nova keypair-delete demo-key
597 nova keypair-add demo-key
598 neutron net-list
599 nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
600 nova boot --flavor m1.tiny --image cirros-0.3.4-x86_64 --nic net-id=a3ef3dd2-82d5-471f-8cc3-5dad99b4abeb --security-group default --key-name demo-key demo-instance1
601 nova image-list
602 nova list
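The `nova boot` call above hardcodes the network UUID taken from `neutron net-list`. The id can instead be extracted with `awk`, since python-neutronclient prints an ASCII table with the id in the second column. A sketch of the parsing, run here against a captured (trimmed) sample table so the snippet is self-contained; the table layout and UUID are illustrative:

```shell
# On a live controller you would pipe the real command:
#   NET_ID=$(neutron net-list | awk '/ demo-net /{print $2}')
# Here we parse a captured sample of the table output instead.
NET_ID=$(awk '/ demo-net /{print $2}' <<'EOF'
+--------------------------------------+----------+-----------------+
| id                                   | name     | subnets         |
+--------------------------------------+----------+-----------------+
| a3ef3dd2-82d5-471f-8cc3-5dad99b4abeb | demo-net | 192.168.72.0/24 |
+--------------------------------------+----------+-----------------+
EOF
)
echo "$NET_ID"
```

The extracted value can then be passed as `--nic net-id="$NET_ID"`, which avoids copy-pasting the UUID by hand.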
603 vim /etc/nova/nova.conf
604 su -s /bin/sh -c "nova-manage db sync" nova
605 su -s /bin/sh -c "glance-manage db_sync" glance
606 ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
607 su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
608 systemctl restart openstack-nova-api.service openstack-nova-cert.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service libvirtd.service openstack-nova-compute.service openstack-nova-console.service
609 systemctl enable neutron-server.service
610 source admin-openrc.sh
611 neutron ext-list
612 sysctl -p
613 vim /etc/sysctl.conf
614 ovs-vsctl list
615 ovs-vsctl list-br
616 ovs-vsctl list-ports br-ex
617 ovs-vsctl list-ports br-int
618 ovs-vsctl list-ports br-tun
619 vim /etc/nova/nova.conf
620 yum search openstack-nova-network openstack-nova-api
621 yum remove openstack-nova-network
622 vim /etc/nova/nova.conf
623 su -s /bin/sh -c “nova-manage db sync” nova
624 reboot
625 source demo-openrc.sh
626 nova keypair-list
627 neutron net-list
628 nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
629 nova boot --flavor m1.tiny --image cirros-0.3.4-x86_64 --nic net-id=a3ef3dd2-82d5-471f-8cc3-5dad99b4abeb --security-group default --key-name demo-key demo-instance1
630 nova list
631 nova floating-ip-associate demo-instance1 192.168.71.51
632 neutron floatingip-create ext-net
633 nova floating-ip-associate demo-instance1 192.168.71.51
634 nova list
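The floating IP workflow above is two steps: `neutron floatingip-create ext-net` allocates an address (printed in a Field/Value property table), and `nova floating-ip-associate` attaches it to the instance. The allocated address can be captured instead of retyped; a sketch, parsing a captured (trimmed) sample of the property table so it runs standalone:

```shell
# Live usage would be:
#   FLOAT_IP=$(neutron floatingip-create ext-net | awk '/ floating_ip_address /{print $4}')
# Here the same awk runs against a captured sample table.
FLOAT_IP=$(awk '/ floating_ip_address /{print $4}' <<'EOF'
+---------------------+---------------+
| Field               | Value         |
+---------------------+---------------+
| floating_ip_address | 192.168.71.51 |
+---------------------+---------------+
EOF
)
echo "$FLOAT_IP"
# then: nova floating-ip-associate demo-instance1 "$FLOAT_IP"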
635 nova keypair-list
636 nova keypair
637 nova keypair-view
638 nova keypair-add demo1-key
639 source demo-openrc.sh
640 nova keypair-list
641 neutron net-list
642 nova boot --flavor m1.tiny --image cirros-0.3.4-x86_64 --nic net-id=a3ef3dd2-82d5-471f-8cc3-5dad99b4abeb --security-group default --key-name demo1-key demo-instance2
643 nova list
644 neutron floatingip-create ext-net
645 nova floating-ip-associate demo-instance2 192.168.71.52
646 nova list
647 tail -f /var/log/{nova,neutron/*}
648 ifconfig
649 exit
650 vim /etc/sysconfig/network-scripts/ifcfg-em1
651 vim /etc/sysconfig/network-scripts/ifcfg-em3
652 vim /etc/sysconfig/network-scripts/ifcfg-br-ex
653 exit
654 vim /etc/sysconfig/network-scripts/ifcfg-em4
655 vim /etc/my.cnf
656 ping www.google.com
657 netstat -nr\
658 netstat -nr
659 arch
660 reboot
661 grep "^[^#;]" /etc/nova/nova.conf
662 grep "^[^#;]" /etc/keystone/keystone.conf
663 vim /etc/httpd/conf/httpd.conf
664 grep "^[^#;]" /etc/glance/glance-api.conf
665 grep "^[^#;]" /etc/nova/nova.conf
666 grep "^[^#;]" /etc/neutron/neutron.conf
667 grep "^[^#;]" /etc/neutron/plugins/ml2/ml2_conf.ini
668 grep "^[^#;]" /etc/neutron/l3_agent.ini
669 grep "^[^#;]" /etc/neutron/dhcp_agent.ini
670 grep "^[^#;]" /etc/neutron/metadata_agent.ini
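The `grep "^[^#;]"` commands above print only the active lines of each config file, skipping `#` and `;` (ini-style) comments. The same check can be wrapped in a small function and looped over all the files at once; a sketch, demonstrated against a throwaway temp file so it runs anywhere (on the controller you would pass the nova/neutron config paths reviewed above):

```shell
# Print only active (non-comment) lines of each config file given.
show_active() {
    for f in "$@"; do
        echo "== $f =="
        grep "^[^#;]" "$f"
    done
}

# Demo on a throwaway file; real usage:
#   show_active /etc/nova/nova.conf /etc/neutron/neutron.conf
tmp=$(mktemp)
printf '# comment\n[DEFAULT]\nverbose = True\n; ini-style comment\n' > "$tmp"
show_active "$tmp"
rm -f "$tmp"
```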
671 history > history.txt
Create New Project
source admin-openrc.sh
openstack project create --description "Eventful-lab" eventfullab
openstack user create --password-prompt eventfullab
openstack role add --project eventfullab --user eventfullab user
source eventfullab-openrc.sh
neutron net-create eventfullab-net
vim eventfullab-openrc.sh
source eventfullab-openrc.sh
neutron net-create eventfullab-net
neutron subnet-create eventfullab-net 192.168.74.0/24 --name eventfullab-subnet --gateway 192.168.74.1
neutron router-interface-add eventfullab-router eventfullab-subnet
source eventfullab-openrc.sh
neutron router-create eventfullab-router
neutron router-interface-add eventfullab-router eventfullab-subnet
neutron router-gateway-set eventfullab-router ext-net
nova keypair-list
nova keypair-add eventful-key
neutron net-list
nova boot --flavor m1.tiny --image cirros-0.3.4-x86_64 --nic net-id=04226512-0ebc-4203-8048-7a5832a780cb --security-group default --key-name eventful-key eventful-instance1
nova list
nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
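The `eventfullab-openrc.sh` sourced in this section must scope the credentials to the new project before networks can be created under it, which is apparently why the file was edited and re-sourced before the second `neutron net-create` attempt. A sketch of the project-scoped file; all values are placeholders, and the password is whatever was entered at `openstack user create --password-prompt` time:

```shell
# Sketch of a project-scoped eventfullab-openrc.sh -- placeholder values only.
export OS_PROJECT_DOMAIN_ID=default
export OS_USER_DOMAIN_ID=default
export OS_PROJECT_NAME=eventfullab
export OS_TENANT_NAME=eventfullab
export OS_USERNAME=eventfullab
export OS_PASSWORD=EVENTFULLAB_PASS
export OS_AUTH_URL=http://controller:5000/v3
export OS_IMAGE_API_VERSION=2
```

With this sourced, the `neutron` and `nova` commands in this section operate inside the eventfullab project rather than as admin.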