At the end of Part 2 we had a working OpenStack environment composed of three nodes: one controller node, one network node and one compute node.
In this post we will access the Horizon dashboard to configure networking services for tenants’ VMs.
Accessing the Horizon dashboard
The Horizon dashboard, the console used to manage the infrastructure, can be accessed by pointing a browser at the hostname of the controller node. In my case:
During the installation process we didn’t set a password for the admin user, so one was automatically generated together with the answer file.
In root’s home directory you will find the file keystonerc_admin. This file contains the login information and, if sourced (with the command source or with . keystonerc_admin), it sets the environment variables needed to work with the CLI.
```shell
unset OS_SERVICE_TOKEN
export OS_USERNAME=admin
export OS_PASSWORD=kHFvoENRxSU2
export OS_AUTH_URL=http://10.0.0.10:5000/v2.0
export PS1='[\u@\h \W(keystone_admin)]\$ '
export OS_TENANT_NAME=admin
export OS_REGION_NAME=RegionOne
```
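To see what sourcing the file actually does, here is a minimal sketch that works on a throwaway copy in /tmp with the same illustrative values, so it is safe to run anywhere; on the controller you would simply run `. keystonerc_admin` from root’s home directory.

```shell
# Create a sample keystonerc (illustrative values, not real credentials)
cat > /tmp/keystonerc_admin <<'EOF'
export OS_USERNAME=admin
export OS_PASSWORD=kHFvoENRxSU2
export OS_AUTH_URL=http://10.0.0.10:5000/v2.0
export OS_TENANT_NAME=admin
EOF

# Source it: the OS_* variables are now available to the OpenStack CLI tools
. /tmp/keystonerc_admin
echo "Authenticating as $OS_USERNAME against $OS_AUTH_URL"
```

Once the variables are set, any CLI client run from that shell will authenticate as admin without further prompts.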
Copy the password and login with user admin.
Once you are logged in, you can change the password:
Do not forget to update the keystonerc_admin file with the new password.
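A quick way to keep the file in sync is to rewrite the OS_PASSWORD line with sed. The sketch below operates on a sample copy in /tmp with a placeholder password so it is safe to run; on the controller you would point sed at /root/keystonerc_admin instead.

```shell
# Work on a sample copy (placeholder values, not real credentials)
printf 'export OS_USERNAME=admin\nexport OS_PASSWORD=kHFvoENRxSU2\n' > /tmp/keystonerc_admin

# Replace the old password with the new one chosen in Horizon
sed -i 's/^export OS_PASSWORD=.*/export OS_PASSWORD=MyNewPassword/' /tmp/keystonerc_admin

grep '^export OS_PASSWORD=' /tmp/keystonerc_admin
```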
Let’s move to the Admin tab. Here, under System, you will find a lot of useful information about your OpenStack environment. For example, in the System Information tab you can verify the status of the services:
In this case, the service nova-compute (responsible for running and managing VMs) is up and running on the compute node.
Configuring the networks
Now we are going to configure Neutron, the network module that provides networking as a service in OpenStack environments.
To better understand what we are going to do, this is what the network layout will look like:
OpenStack uses different types of networks, and you should select the right one depending on your needs. Here is a short overview of the different network types:
- local: network that can only be configured on a single compute node.
- flat: network that does not provide any segmentation option. Basically a traditional layer 2 network.
- vlan: network that uses VLAN IDs to provide segmentation. Using this network requires that every physical switch in the infrastructure is configured to trunk the corresponding VLAN.
- GRE and VXLAN: both are overlay networks: they encapsulate traffic in tunnels identified by a unique ID. Unlike traditional VLANs, they do not require any configuration on the physical layer 2 switches.
We will be dealing with these shortly, so it is important to understand how they work and what their main differences and purposes are.
Now we are going to configure the private network used by the tenants’ VMs, the provider network used to reach the VMs from outside the OpenStack infrastructure, and the internal virtual router that connects these two networks.
In Part 2 we already configured the physical interface of the network node on this network: this interface has an IP address assigned (10.2.0.15/24) and has been bridged to br-ex.
All traffic from and to the tenants’ VMs will pass through this physical interface.
Now we need to define how the provider network is composed. Move to System/Networks and click ‘Create Network’. Choose VXLAN as the network type, 0 (zero) as the segmentation ID, and flag it as ‘External Network’.
The network is now defined; click on its name and, from the overview page, click ‘Create Subnet’. Insert the network address in CIDR notation and the gateway IP.
If you don’t have any DHCP server running on this network, Neutron can help: we can define a range and it will assign IP addresses to any port/device connected to this network:
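For reference, the same provider network can also be created from the CLI. This is a sketch based on the Neutron command-line client of that release; the network name `public`, the subnet 10.2.0.0/24, the gateway and the allocation pool are assumptions, so adapt them to your environment:

```shell
# Provider network: VXLAN type, segmentation ID 0, flagged as external
# (names and addresses below are illustrative)
neutron net-create public --provider:network_type vxlan \
  --provider:segmentation_id 0 --router:external=True

# Subnet with a gateway and an allocation pool Neutron can assign addresses from
neutron subnet-create public 10.2.0.0/24 --name public_subnet \
  --gateway 10.2.0.1 --allocation-pool start=10.2.0.100,end=10.2.0.200
```

These commands require a sourced keystonerc file, since the client reads the OS_* environment variables to authenticate.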
Tenant private network
This is the network where the tenants’ VMs will run, and it is completely virtual: the only way out of this network is through a virtual router (which will be configured in the next steps).
Move to Project/Network/Networks and click ‘Create Network’:
Choose an appropriate IP range. I used 192.168.1.0/24, and the gateway for this subnet will be 192.168.1.1. This IP address will soon be associated with the router that connects this private subnet to the provider (public) network.
If you do not plan to install a DHCP server on this network, configure the internal DHCP daemon.
This network is dedicated to VMs, so here you can use a wider range:
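The equivalent CLI sketch for the tenant network, using the subnet values chosen above (the names `private` and `private_subnet` and the pool boundaries are assumptions):

```shell
# Tenant network and its 192.168.1.0/24 subnet; the gateway address will
# later be taken by the virtual router (pool range is illustrative)
neutron net-create private
neutron subnet-create private 192.168.1.0/24 --name private_subnet \
  --gateway 192.168.1.1 --allocation-pool start=192.168.1.10,end=192.168.1.250
```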
The last step is to create the router that will connect these two networks.
Move to Project/Network/Routers, click ‘Create Router’ and connect the router to the provider network:
Click on the router name, move to the Interfaces tab and click ‘Add Interface’. We are going to add a second interface connected to the tenant network, so select the subnet and set the gateway address.
As you can see, the router now has two interfaces: one connected to the tenant network, with IP 192.168.1.1, and one connected to the provider network, with an IP assigned by DHCP:
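The router steps above can also be sketched with the CLI (the router name `router1` and the network/subnet names are assumptions carried over from the previous examples):

```shell
# Create the router and attach it to the provider network as its gateway
neutron router-create router1
neutron router-gateway-set router1 public

# Add the second interface, on the tenant subnet: the router takes the
# subnet's gateway address (192.168.1.1 in this layout)
neutron router-interface-add router1 private_subnet
```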
Now the infrastructure is ready to run VMs. In the next post I will show you how to deploy a VM from a cloud image, a basic installation of a Linux OS packed in a single bootable file.