In this post we will configure a VM that runs in the tenant’s network and is accessible from the provider’s network.
Some Linux distributions are already packaged as very small image files, ready to be deployed in an OpenStack environment.
One of the great advantages of using cloud images is that you don’t need to install the OS from scratch, and the cloud-init package comes preinstalled. This package supports SSH key pair and user data injection. Many of these images disable SSH login with a password, so later in this post we will import our client’s SSH public key into OpenStack to log in to the freshly deployed VM.
As the Linux OS we will use the Fedora 24 Cloud image.
Why Fedora? It has a very small footprint while still providing a complete OS, and it requires less CPU and RAM than CentOS. It is perfect for a lab environment.
If you would like to use a different OS, you can find a complete list of cloud images on the OpenStack official site.
Before starting, it is important to understand how virtual machine networking works, along with some terminology.
Virtual Machine networking
Basically, a virtual machine running in a dedicated tenant network can communicate with the outside world only through a public (provider) network.
The primary difference between those two networks revolves around who provisions them.
Provider networks are defined by administrators and can be dedicated to a specific tenant or shared between two or more tenants.
Tenant networks can be created by tenants and are used by their VMs (instances – in OpenStack terminology).
Provider networks are associated with a physical interface and can be provisioned with a segmentation technology such as flat, VLAN, VXLAN or GRE. Tenant networks, on the other hand, are private and rely on Neutron routers to reach the outside.
To access a VM in a tenant network you need to assign it a floating IP, an address reserved from a pool. During the network configuration in Part 2 we configured a DHCP range to reserve IPs for this pool.
Whenever a connection is opened to a tenant VM, such as an SSH connection, the network node uses NAT rules to route traffic to instances. Outgoing traffic (VM to provider network) is SNATed to an IP defined in the provider network, and incoming traffic (provider network to VM) is DNATed to an internal IP defined in the tenant network.
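Conceptually, the L3 agent just keeps a one-to-one mapping between floating and fixed IPs and rewrites packet addresses in each direction. The following Python sketch illustrates the idea; the tenant-side address 192.168.1.5 is made up for the example (the real translation is done by iptables rules inside the router namespace):

```python
# Sketch of the 1:1 NAT a Neutron L3 agent performs for a floating IP.
# 10.2.0.22 is the floating IP from this post; 192.168.1.5 is an
# invented fixed IP on the tenant network.
FLOATING_TO_FIXED = {"10.2.0.22": "192.168.1.5"}
FIXED_TO_FLOATING = {v: k for k, v in FLOATING_TO_FIXED.items()}

def dnat(packet):
    """Incoming traffic: rewrite the destination to the tenant IP."""
    dst = packet["dst"]
    if dst in FLOATING_TO_FIXED:
        packet = dict(packet, dst=FLOATING_TO_FIXED[dst])
    return packet

def snat(packet):
    """Outgoing traffic: rewrite the source to the floating IP."""
    src = packet["src"]
    if src in FIXED_TO_FLOATING:
        packet = dict(packet, src=FIXED_TO_FLOATING[src])
    return packet

# An SSH connection from the provider network hits the floating IP
# and is delivered to the VM's private address...
incoming = dnat({"src": "10.2.0.99", "dst": "10.2.0.22"})
print(incoming["dst"])

# ...and the reply leaves with the floating IP as its source.
outgoing = snat({"src": "192.168.1.5", "dst": "10.2.0.99"})
print(outgoing["src"])
```

The guest itself never sees the floating address, which is why, as noted below, the guest OS has no idea one was assigned.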
Difference between private and floating IP (from RDO pages)
A private IP address is assigned to an instance’s network-interface by the DHCP server. The address is visible from within the instance by using a command like “ip a”. The address is typically part of a private network and is used for communication between instances in the same broadcast domain via virtual switch (L2 agent on each compute node). It can also be accessible from instances in other private networks via virtual router (L3 agent).
A floating IP address is a service provided by Neutron. It’s not using any DHCP service or being set statically within the guest. As a matter of fact the guest’s operating system has no idea that it was assigned a floating IP address. The delivery of packets to the interface with the assigned floating address is the responsibility of Neutron’s L3 agent. Instances with an assigned floating IP address can be accessed from the public network by the floating IP.
Now we can start working on the first VM in this infrastructure! Let’s start with the flavor.
Creating the flavor
In OpenStack, virtual hardware templates are called “flavors”; they define the number of cores, disk size, amount of RAM, etc. for a VM.
The default installation provides some flavors already configured.
Just to better understand the whole process I created a new flavor with some basic parameters.
Under System/Flavors click ‘Create Flavor’:
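If you prefer the command line, the same flavor can be created with the OpenStack client. The name and sizes below are example values for a small lab flavor, not defaults; these commands need a cloud to talk to, so run them from a node with admin credentials loaded:

```shell
# Create a small lab flavor: 1 vCPU, 1 GB RAM, 10 GB disk.
# "m1.lab" is an example name; adjust the sizes to taste.
openstack flavor create --vcpus 1 --ram 1024 --disk 10 m1.lab

# Verify it appears next to the default flavors:
openstack flavor list
```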
Importing an image
The OpenStack Image Service is called Glance; it provides registration, discovery and delivery services for disk and server images. It can copy (or snapshot) a server image and store it immediately. Stored images can then be used as templates to get new servers up and running quickly.
VM images can be stored in various locations, including simple filesystems (like in this environment) or object-storage systems, such as Swift.
Now we are going to import the Fedora image from the internet.
Move to Project/Compute/Images and click ‘Create Image’. In the image location field, paste this link.
Let’s wait a couple of minutes until the image is downloaded and saved.
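The same import can be done from the command line. In this sketch `$IMAGE_URL` stands for the Fedora 24 Cloud image link used above, and the file and image names are just examples:

```shell
# Download the qcow2 cloud image ($IMAGE_URL is a placeholder for
# the link to the Fedora 24 Cloud image):
curl -L -o fedora-24-cloud.qcow2 "$IMAGE_URL"

# Register it in Glance ("fedora-24-cloud" is an example name):
openstack image create --disk-format qcow2 --container-format bare \
  --file fedora-24-cloud.qcow2 fedora-24-cloud
```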
Configuring security groups
One of the capabilities of Nova (the compute module) is to provide an embedded firewall and security service. It is possible to create rules that allow (or deny) certain types of traffic to or from a tenant’s VMs.
By default there are some limitations on the incoming traffic. We are in a lab environment, so we can remove them and allow all protocols.
Move to Project/Compute/Access &amp; Security, then to the Security Groups tab; edit the default security group and delete the existing ingress rules.
Now we need to create two new ingress rules, one for IPv4 and one for IPv6. Fill all the fields with the following:
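From the command line, the equivalent is to delete the default ingress rules and create two permissive ones; omitting `--protocol` lets the rule match any protocol (`<rule-id>` is a placeholder for the IDs shown by the list command):

```shell
# Inspect and remove the existing ingress rules of the default group:
openstack security group rule list default
openstack security group rule delete <rule-id>

# Allow all ingress traffic, one rule for IPv4 and one for IPv6:
openstack security group rule create --ingress --ethertype IPv4 default
openstack security group rule create --ingress --ethertype IPv6 default
```

Opening everything is fine for this lab, but in any real deployment you would only allow the specific ports you need.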
Importing an SSH key pair
By default the cloud-init package installed in cloud images disables SSH login with a password. Before creating a VM we need to import our SSH public key to be able to log in to the VM.
From the client you will use to connect to the VM, if you do not have it already, create an SSH public key with this command:
[email protected] ~ % ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/andrea/.ssh/id_rsa):
Created directory '/home/andrea/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/andrea/.ssh/id_rsa.
Your public key has been saved in /home/andrea/.ssh/id_rsa.pub.
Move to Project/Compute/Access &amp; Security, then to the Key Pairs tab, and click ‘Import Key Pair’. Your SSH public key is stored in .ssh/id_rsa.pub.
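The import can also be done with the CLI; “mykey” below is just an example name for the key pair:

```shell
# Import the public key generated above under the name "mykey":
openstack keypair create --public-key ~/.ssh/id_rsa.pub mykey

# Confirm it was registered:
openstack keypair list
```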
Deploying a VM
Finally, everything is ready to deploy a VM!
Create the instance
Move to Compute/Instances, click ‘Launch Instance’ and insert a name for the VM (this will also be the hostname):
Use the image imported into Glance as the source:
Assign a flavor:
Connect the VM to the tenant’s network:
Apply the default security group:
Assign your SSH key and click ‘Launch Instance’:
In a couple of minutes your new instance will be ready:
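The whole launch wizard collapses into a single CLI call. The flavor, image and key names match the example names used in this post; “tenant-net” and “fedora-vm” are placeholders for your tenant network and instance name:

```shell
# Boot the VM with the pieces prepared in the previous steps:
openstack server create \
  --flavor m1.lab \
  --image fedora-24-cloud \
  --network tenant-net \
  --security-group default \
  --key-name mykey \
  fedora-vm

# Watch the instance go from BUILD to ACTIVE:
openstack server list
```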
Assign a floating IP
Just one step left: assign a floating IP to access the deployed VM.
Select the VM and click ‘Associate Floating IP’ then click on ‘+’:
The pool is composed of all the IPs in the DHCP range configured for the provider network.
Select the pool:
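The same two steps (allocate, then attach) look like this on the CLI; “provider-net” and “fedora-vm” are placeholders for your provider network and instance name, and the floating IP shown is the one assigned in this post:

```shell
# Allocate a floating IP from the provider network pool:
openstack floating ip create provider-net

# Attach the allocated address to the instance:
openstack server add floating ip fedora-vm 10.2.0.22
```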
Now the VM can be reached through the IP 10.2.0.22.
Test the connectivity and login:
Ping the VM:
[email protected] ~ % ping -c 4 10.2.0.22
PING 10.2.0.22 (10.2.0.22) 56(84) bytes of data.
64 bytes from 10.2.0.22: icmp_seq=1 ttl=62 time=1.42 ms
64 bytes from 10.2.0.22: icmp_seq=2 ttl=62 time=1.98 ms
64 bytes from 10.2.0.22: icmp_seq=3 ttl=62 time=1.97 ms
64 bytes from 10.2.0.22: icmp_seq=4 ttl=62 time=1.97 ms

--- 10.2.0.22 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3003ms
rtt min/avg/max/mdev = 1.422/1.838/1.984/0.242 ms
The default user for the cloud image is ‘fedora’. Login with:
[email protected] ~ % ssh fedora@10.2.0.22
Last login: Thu Sep 8 19:21:35 2016 from 10.0.0.99
[[email protected] ~]$
The VM is now accessible from outside and it is fully working. Enjoy your new OpenStack lab!