Category Archives: Uncategorized

Why Containers for OpenStack Services?

An interesting problem to solve in OpenStack is the management of OpenStack’s services. Whether it’s at the time of provisioning or updating, the OpenStack services could listen on similar ports and require modification of common configuration files.

Because of this, the services could potentially conflict with one another if deployed on the same system. For example, the network service may attempt to listen on the same port as the identity service, or the compute service may edit a file that the network service expects to have different values. How do you deal with this problem, particularly when each OpenStack project tends to operate independently? It doesn’t seem likely that it would be easy to drive consensus among the various projects on which ports to listen on and which configuration files to modify, particularly with the speed at which OpenStack is moving.

For example, let’s suppose that one wants to deploy a network service. Assuming they are using a build-based (sometimes referred to as package-based) deployment method, they might perform something similar to the following.
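As a toy sketch of the failure mode (the file name and ports are invented, not real OpenStack packages): two "services" share one configuration file, and deploying the second silently clobbers the first's setting.

```shell
# Toy illustration of the shared-config conflict (names and ports invented)
conf=$(mktemp)                        # stands in for a shared config file
echo "bind_port=5000" > "$conf"       # the "identity service" wrote its port
# Deploying the "network service" rewrites the same key:
sed -i 's/^bind_port=.*/bind_port=9696/' "$conf"
grep bind_port "$conf"                # the identity service's setting is gone
```

On the next restart, the first service would come up with the wrong configuration, which is exactly the conflict described above.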




The result is a non-working network service and the potential for a non-working identity service if it is ever restarted. This problem is also found in image-based deployment; it’s simply encountered earlier in the workflow, during the image generation phase. After all, the images being deployed need to be generated in the first place. The fundamental problem is that understanding which services are deployed on a particular host and resolving the dependencies or making the necessary changes is not something the package or image generation tools understand.

One possible solution is to place each service on its own unique piece of hardware. This solves the problem of conflicts between the services’ configurations, but is not optimal: the overhead of an OpenStack service would not justify its own physical system until a particular scale is reached. Even then, locating the services close to compute nodes would also inhibit providing each service its own dedicated piece of hardware.

Another possible solution is to build the logic and understanding of the OpenStack services and their configuration into the tools. While this sounds like a small task, it is not. The possible combinations of services that could be combined on a single host do not lend themselves to easily creating, let alone maintaining, this logic.

OpenStack Architecture


Yet another possible solution is to utilize virtual machines. This solves the hardware problem and provides isolation, but it has some disadvantages. Virtual machines are heavyweight: whether it’s building new virtual machine images for a simple update or installing the configuration infrastructure necessary to update virtual machines (plus the overhead of start/stop operations, less rich interfaces for metadata, etc.), virtual machines are not ideal.




It may be possible to use Linux containers to solve this problem. Linux containers offer lightweight virtualization that provides (among other things) process and network isolation. The isolation provided by containers means that build-based or image-based deployment tools don’t need to maintain the logic of how services on a host can be deployed or updated without affecting one another.


I hope to provide more information soon on how projects like systemd might provide a mechanism for resolving dependencies between OpenStack services running in containers (maybe even using Docker), and how OSTree might lend a hand with some of the troubles of package management.

Building Docker Images on Fedora

This page captures my effort to learn about docker images by building a docker image for ovirt-engine from scratch using Fedora 19. At this point I get stuck after launching the image with ovirt installed in it. I’ll be troubleshooting and seeing how I can best package ovirt-engine into a single image or breaking into multiple pieces. Who knows, maybe I’ll even try to make it communicate over etcd?

I was able to create a new base image, publish it to a private docker registry, then create a Dockerfile to create a layered image for ovirt-engine, the open source virtualization management platform. I used Marek Goldmann’s great blog as a reference and leveraged the work of Matt Miller too.

Setup your System

On a Fedora 19 system install the necessary packages.

Install docker-io and docker-registry. Docker automates the deployment of containerized applications, while docker-registry provides the registry server for sharing docker images.
# yum install -y docker-io docker-registry --enablerepo=updates-testing

Install appliance-tools and libguestfs-tools. appliance-creator is one tool that can be used for creating a virtual machine image that we will then package up into a docker image.
# yum install -y appliance-tools libguestfs-tools

Enable and start the docker and docker-registry services.
# systemctl enable docker
# systemctl start docker
# systemctl enable docker-registry
# systemctl start docker-registry

You may also want to disable the tmpfs mount on /tmp if you are running in a VM and have limited space in /tmp.
# systemctl mask tmp.mount; reboot

Build a Base Image

In order to build a base image you need to create a virtual machine image, then pack it up into an archive, and import it into docker.

You can use your favorite kickstart file for your base docker image. You’ll want the kickstart to install the smallest possible footprint so your base image stays small. The following example kickstart is a good starting point.
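Purely as an illustration, a minimal kickstart in this spirit might look like the following (every value here is an assumption, not the actual container-small-19.ks referenced below):

```
# Illustrative minimal kickstart (all values are assumptions)
lang en_US.UTF-8
keyboard us
timezone --utc UTC
rootpw --plaintext changeme
bootloader --location=mbr
clearpart --all --initlabel
part / --size 1024 --fstype ext4
reboot

%packages --nobase
@core
%end
```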

appliance-creator can be used to automatically install a virtual machine using the kickstart file.
# appliance-creator -c mykickstart.ks -d -v -t /tmp \
-o /tmp/myimage --name "fedora-image" --release 19 \
--format qcow2
virt-tar-out creates a tar file from a virtual machine image.
# virt-tar-out -a /tmp/myimage/fedora-image/fedora-image-sda.qcow2 / - |
docker import - jlabocki f19

You can download the script and container-small-19.ks kickstart which will help you with automating the building of a basic container.

If you have issues creating a container you can continue on by pulling an existing image, like Matt’s fedora image, from the Docker index.
# docker pull mattdm/fedora

Publish the New Image to a Docker Registry

Docker provides a registry, a place to store your docker images (a web server that supports multiple storage back-ends and has hooks for authentication sources). The company behind docker provides an index, which is the docker-registry combined with a web front end and a collaborative environment.

Now that we have a docker image we can upload it to our private registry. First list the images to find the image ID, then tag and push the image:
# docker images
none latest e4a4f6d69590 29 hours ago 131.2 MB

# docker tag e4a4f6d69590 localhost.localdomain:5000/fedora-small
# docker push localhost.localdomain:5000/fedora-small

Create a New Dockerfile

Now let’s try to create a new image based on the base image. We will create a new directory and create a Dockerfile.
# mkdir ovirt; cd ovirt; vi Dockerfile

A Dockerfile accepts a number of instructions. We will use only a few in ours.

# Base on the fedora-small image pushed to the local registry
FROM localhost.localdomain:5000/fedora-small

# Install the JBoss Application Server 7
#RUN yum install -y jboss-as
RUN yum localinstall -y
RUN yum install -y ovirt-engine
RUN yum install -y ovirt-engine-setup-plugin-allinone
RUN yum install -y wget
#RUN wget -O /root/answerfile

# Run the JBoss AS after the container boots
# ENTRYPOINT /usr/bin/engine-setup --config=/root/answerfile

The FROM line indicates what base image should be used.
The RUN lines will be executed and committed on the image.
The ENTRYPOINT line specifies what should be executed when the image is launched. At this point I’ll leave the ENTRYPOINT commented out. We’ll just launch a shell and then try to execute the engine-setup command before we use an answerfile to install it automatically in a future image.

Now we will build our image.
# docker build .

Now we have a new image.
# docker images

We can tag this new image.
# docker tag 234ad73r7df localhost.localdomain:5000/ovirt-fedora-small

And we can push it to our registry as a new image.
# docker push localhost.localdomain:5000/ovirt-fedora-small

On another Fedora 19 system with docker installed (or on the same one), you can pull the docker image down and run it.
# docker pull youripaddress:5000/ovirt-fedora-small
# docker run -i -t localhost.localdomain:5000/ovirt-fedora-small /bin/bash

You can run `docker help run` to understand the options we just passed when running the image. You can also inspect the images and running containers to get lots of interesting information (from outside the container, not from within it). `docker ps -a` will list containers while `docker images` will list the images you have.

From within the container, let’s try to run the engine-setup command and see how far we get …
bash-4.2# engine-setup
[ INFO ] Stage: Initializing
[ INFO ] Stage: Environment setup
Configuration files: ['/etc/ovirt-engine-setup.conf.d/10-packaging-aio.conf', '/etc/ovirt-engine-setup.conf.d/10-packaging.conf']
Log file: /var/log/ovirt-engine/setup/ovirt-engine-setup-20131219122235.log
Version: otopi-1.1.2 (otopi-1.1.2-1.fc19)
[ INFO ] Stage: Environment packages setup
[ INFO ] Stage: Programs detection
[ INFO ] Stage: Environment setup
[ ERROR ] Failed to execute stage 'Environment setup': Command 'initctl' is required but missing
[ INFO ] Stage: Clean up
Log file is located at /var/log/ovirt-engine/setup/ovirt-engine-setup-20131219122235.log
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination
[ ERROR ] Execution of setup failed

It looks like ovirt-engine is looking for initctl, or at least that is the error it throws. Let’s see if we can fool the engine-setup command into thinking it exists.

# ln -s /usr/sbin/service /usr/bin/initctl
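The trick works because engine-setup only needs the command name to resolve on PATH; a scratch-directory version of the same shim idea (paths here are hypothetical, nothing touches /usr/bin):

```shell
tmp=$(mktemp -d)
# A stand-in "service" script, playing the role of /usr/sbin/service:
printf '#!/bin/sh\necho shim "$@"\n' > "$tmp/service"
chmod +x "$tmp/service"
# Symlink it under the missing name, as done above for initctl:
ln -s "$tmp/service" "$tmp/initctl"
PATH="$tmp:$PATH" initctl status   # resolves via the symlink
```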

Re-running engine-setup
bash-4.2# engine-setup
[ INFO ] Stage: Initializing
[ INFO ] Stage: Environment setup
Configuration files: ['/etc/ovirt-engine-setup.conf.d/10-packaging-aio.conf', '/etc/ovirt-engine-setup.conf.d/10-packaging.conf']
Log file: /var/log/ovirt-engine/setup/ovirt-engine-setup-20131219122459.log
Version: otopi-1.1.2 (otopi-1.1.2-1.fc19)
[ INFO ] Stage: Environment packages setup
[ INFO ] Stage: Programs detection
[ INFO ] Stage: Environment setup
[ INFO ] Stage: Environment customization
Disabling all-in-one plugin because hardware supporting virtualization could not be detected. Do you want to continue setup without AIO plugin? (Yes, No) [No]: Yes

--== PACKAGES ==--

[ INFO ] Checking for product updates...
[ INFO ] No product updates found



Host fully qualified DNS name of this server [502fbe26fc3c]:
[WARNING] Host name 502fbe26fc3c has no domain suffix
[WARNING] Failed to resolve 502fbe26fc3c using DNS, it can be resolved only locally


Where is the database located? (Local, Remote) [Local]:
Setup can configure the local postgresql server automatically for the engine to run. This may conflict with existing applications.
Would you like Setup to automatically configure postgresql, or prefer to perform that manually? (Automatic, Manual) [Automatic]:


Engine admin password:
Confirm engine admin password:
[ ERROR ] Failed to execute stage 'Environment customization': [Errno 2] No such file or directory: '/usr/share/cracklib/pw_dict.pwd'
[ INFO ] Stage: Clean up
Log file is located at /var/log/ovirt-engine/setup/ovirt-engine-setup-20131219122459.log
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination
[ ERROR ] Execution of setup failed

The log file referenced above shows that a file needed for password strength checking is missing. It turns out the cracklib password dictionary wasn’t actually gone, it was just compressed. Let’s uncompress it and see if we can re-run engine-setup.
bash-4.2# gzip -d /usr/share/cracklib/pw_dict.pwd.gz
bash-4.2# engine-setup
[ INFO ] Stage: Initializing
[ INFO ] Stage: Environment setup
Configuration files: ['/etc/ovirt-engine-setup.conf.d/10-packaging-aio.conf', '/etc/ovirt-engine-setup.conf.d/10-packaging.conf']
Log file: /var/log/ovirt-engine/setup/ovirt-engine-setup-20131219125142.log
Version: otopi-1.1.2 (otopi-1.1.2-1.fc19)
[ INFO ] Stage: Environment packages setup
[ INFO ] Stage: Programs detection
[ INFO ] Stage: Environment setup
[ INFO ] Stage: Environment customization
Disabling all-in-one plugin because hardware supporting virtualization could not be detected. Do you want to continue setup without AIO plugin? (Yes, No) [No]: Yes

--== PACKAGES ==--

[ INFO ] Checking for product updates...
[ INFO ] No product updates found



Host fully qualified DNS name of this server [502fbe26fc3c]:
[WARNING] Host name 502fbe26fc3c has no domain suffix
[WARNING] Failed to resolve 502fbe26fc3c using DNS, it can be resolved only locally


Where is the database located? (Local, Remote) [Local]:
Setup can configure the local postgresql server automatically for the engine to run. This may conflict with existing applications.
Would you like Setup to automatically configure postgresql, or prefer to perform that manually? (Automatic, Manual) [Automatic]:


Engine admin password:
Confirm engine admin password:
[WARNING] Password is weak: it is based on a dictionary word
Use weak password? (Yes, No) [No]: Yes
Application mode (Both, Virt, Gluster) [Both]:
Default storage type: (NFS, FC, ISCSI, POSIXFS) [NFS]:


Organization name for certificate [Test]:


Setup can configure the default page of the web server to present the application home page. This may conflict with existing applications.
Do you wish to set the application as the default page of the web server? (Yes, No) [Yes]:
Setup can configure apache to use SSL using a certificate issued from the internal CA.
Do you wish Setup to configure that, or prefer to perform that manually? (Automatic, Manual) [Automatic]:



[ INFO ] Stage: Setup validation
[ ERROR ] Failed to execute stage 'Setup validation': Database configuration was requested, however, postgresql service was not found. This may happen because postgresql database is not installed on system.
[ INFO ] Stage: Clean up
Log file is located at /var/log/ovirt-engine/setup/ovirt-engine-setup-20131219125142.log
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination
[ ERROR ] Execution of setup failed


At this point the engine-setup command is not able to complete successfully because of a dbus error when trying to initialize postgresql-server. I’ll continue to work on this to see if I can make progress in packaging ovirt-engine into a docker image.

OpenStack Summit Hong Kong Presentation and Demonstrations

Oleg and I will be presenting these slides and videos demonstrating CloudForms OpenStack support today at the Hong Kong Summit:

  • Adding OpenStack as a provider OGG MOV
  • Reporting on OpenStack OGG MOV
  • Chargeback for OpenStack OGG MOV
  • Self-Service of OpenStack Instances OGG MOV

Setting up RHELOSP and Ceilometer for use with CloudForms 3

This assumes a RHEL 6.4 @base installation of Red Hat Enterprise Linux OpenStack Platform (RHELOSP) and registration to a satellite which has access to both the RHELOSP channels and RHEL Server Optional. Much of the Ceilometer installation procedure came from this Fedora QA test case, but I made a few changes and added a few more details.

Setup a RHELOSP system and configure ceilometer.

# sudo rhn-channel --add -c rhel-x86_64-server-6-ost-3 -c rhel-x86_64-server-optional-6
# sudo yum update -y
# sudo reboot

sudo yum install -y "*ceilometer*"

The MongoDB store must also be installed and started:

sudo yum install mongodb-server
sudo sed -i '/--smallfiles/!s/OPTIONS=\"/OPTIONS=\"--smallfiles /' /etc/sysconfig/mongod
sudo service mongod start
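The `/--smallfiles/!` address makes the substitution a no-op once the flag is already present, so the command is safe to re-run; a quick check against a scratch file (the real target is /etc/sysconfig/mongod):

```shell
f=$(mktemp)
echo 'OPTIONS="--quiet"' > "$f"   # stand-in for the mongod OPTIONS line
sed -i '/--smallfiles/!s/OPTIONS=\"/OPTIONS=\"--smallfiles /' "$f"
sed -i '/--smallfiles/!s/OPTIONS=\"/OPTIONS=\"--smallfiles /' "$f"  # second run changes nothing
cat "$f"
```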

Create the appropriate users and roles:

SERVICE_TENANT=$(keystone tenant-list | grep services | awk '{print $2}')
ADMIN_ROLE=$(keystone role-list | awk '/ admin / {print $2}')
CEILOMETER_USER=$(keystone user-create --name=ceilometer \
--tenant_id $SERVICE_TENANT | awk '/ id / {print $4}')
RESELLER_ROLE=$(keystone role-create --name=ResellerAdmin | awk '/ id / {print $4}')
for role in $RESELLER_ROLE $ADMIN_ROLE ; do
keystone user-role-add --tenant_id $SERVICE_TENANT \
--user_id $CEILOMETER_USER --role_id $role
done
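Those awk filters work against keystone's pretty-printed tables, where the leading `|` is field 1 and the id column is field 2; a canned-output check (the id values are invented):

```shell
# Two keystone-style table rows with made-up ids:
sample='| 4f5a | ResellerAdmin |
| 9c2d | admin |'
# / admin / (with surrounding spaces) skips ResellerAdmin; $2 is the id column
printf '%s\n' "$sample" | awk '/ admin / {print $2}'
```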

Set the authtoken config appropriately in the ceilometer config file:

sudo openstack-config --set /etc/ceilometer/ceilometer.conf keystone_authtoken auth_host
sudo openstack-config --set /etc/ceilometer/ceilometer.conf keystone_authtoken auth_port 35357
sudo openstack-config --set /etc/ceilometer/ceilometer.conf keystone_authtoken auth_protocol http
sudo openstack-config --set /etc/ceilometer/ceilometer.conf keystone_authtoken admin_tenant_name services
sudo openstack-config --set /etc/ceilometer/ceilometer.conf keystone_authtoken admin_user ceilometer
sudo openstack-config --set /etc/ceilometer/ceilometer.conf keystone_authtoken admin_password $SERVICE_PASSWORD

Set the user credentials config appropriately in the ceilometer config file:

sudo openstack-config --set /etc/ceilometer/ceilometer.conf DEFAULT os_auth_url
sudo openstack-config --set /etc/ceilometer/ceilometer.conf DEFAULT os_tenant_name services
sudo openstack-config --set /etc/ceilometer/ceilometer.conf DEFAULT os_password $SERVICE_PASSWORD
sudo openstack-config --set /etc/ceilometer/ceilometer.conf DEFAULT os_username ceilometer

Then start the services:

for svc in compute central collector api ; do
sudo service openstack-ceilometer-$svc start
done

Finally, register an appropriate endpoint with the service catalog. Be sure to replace $EXTERNALIFACE with the IP address of your external interface.

keystone service-create --name=ceilometer \
--type=metering --description="Ceilometer Service"
CEILOMETER_SERVICE=$(keystone service-list | awk '/ceilometer/ {print $2}')
keystone endpoint-create \
--region RegionOne \
--service_id $CEILOMETER_SERVICE \
--publicurl "http://$EXTERNALIFACE:8777/" \
--adminurl "http://$EXTERNALIFACE:8777/" \
--internalurl "http://localhost:8777/"

# sudo iptables -A INPUT -p tcp -m multiport --dports 8777 -m comment --comment "001 ceilometer incoming" -j ACCEPT
# sudo service iptables save

# openstack-status
# for svc in compute central collector api ; do
sudo service openstack-ceilometer-$svc status
done

At this point you can verify ceilometer is working correctly by authenticating as a user that has instances running (such as admin).

# . ~/keystonerc_admin

Then list the samples for the cpu meter. I pipe this through wc -l below and just check that the count grows every few minutes, depending on the interval specified in /etc/ceilometer/pipeline.yaml (600 seconds by default).

ceilometer sample-list -m cpu |wc -l

Add the provider to CloudForms Management Engine and you will begin to see capacity and utilization data for your instances populate within a few minutes.

OpenStack Support Included in Arrival of CloudForms 3

CloudForms 3 has arrived! There are plenty of new features, including deeper integration with Amazon Web Services EC2 and enhanced service catalog definitions. Along with those, one major new capability is support of OpenStack as a cloud provider. This is a big step forward in bringing the same cloud management capabilities users have come to expect from CloudForms across VMware vSphere, AWS EC2, and Red Hat Enterprise Virtualization to OpenStack. Before diving into the capabilities CloudForms provides for OpenStack providers, it’s important to know that Red Hat is working on enabling OpenStack for enterprises in a number of ways. Here are three key areas:

Enabling Red Hat Enterprise Linux to be the most stable, secure, and best performing platform for OpenStack powered clouds.
This is being accomplished in Red Hat Enterprise Linux OpenStack Platform – a stable, reliable, and secure base, and the hardware and application support needed to run in demanding OpenStack environments.

Enabling instrumentation and APIs within OpenStack.
This occurs upstream within the OpenStack project itself. Red Hat works with the community on projects such as TripleO, an installation and operations tool for OpenStack. It has also led by initiating Tuskar – a stateful API and UI for managing the deployment of OpenStack, which is now a part of the TripleO project.

Supporting OpenStack within CloudForms Red Hat’s Hybrid Cloud Management Platform.
Most IT organizations are already virtualizing and building cloud-like capabilities on top of datacenter virtualization (self-service, chargeback, etc). These organizations recognize that building a private cloud using OpenStack will provide new advantages such as reducing costs, increasing scale, and fundamentally changing the way developers and operations teams work together. However, IT organizations don’t want to build yet another silo. They’d like to solve the fundamental problem of IT complexity while simultaneously building their next generation IT architecture. CloudForms allows organizations to operationally manage their existing platforms alongside their next generation IT architectures, including OpenStack.

With the OpenStack management background out of the way let’s look at some highlights of what CloudForms 3 brings to OpenStack management in more detail.

Manage New and Existing OpenStack Clouds

CloudForms 3 allows users to manage new and existing OpenStack Environments. As I mentioned in an earlier post infrastructure providers such as VMware and Red Hat Enterprise Virtualization (RHEV) have been separated from Cloud Providers such as Amazon Web Services and OpenStack within the user interface. Within the Cloud Providers screen it’s possible to add a new cloud provider.


After providing the credentials of an OpenStack keystone user CloudForms 3 will discover the Availability Zones, Flavors, Security Groups, Instances, and Images associated with the OpenStack user.


Each of these discovered properties of the OpenStack provider can be inspected further. With instances in particular, the CloudForms user can begin viewing in depth information about the instances running on top of OpenStack.


Users can dive into capacity and utilization data for their OpenStack instances.


Since CloudForms is also pulling events from the OpenStack message bus it is possible to correlate performance information on instances with events that are taking place.


All of this performance and utilization data is also available for reporting purposes in the CloudForms reporting engine.

Chargeback for Workloads on OpenStack

CloudForms 3 adds OpenStack to a growing list of providers for which chargeback reports can be centrally managed. Using the rate table and tagging functions that already exist in CloudForms, users can create rate tables and assign them to their OpenStack environments.



The tagging system continues to provide a flexible and dynamic approach to chargeback which is becoming even more critical as IT organizations build more dynamic platforms with higher rates of change. Chargeback reports can be limited to only show instances or can be combined with virtual machine chargeback.


Provision workloads via self-service catalogs to OpenStack clouds

Finally, CloudForms 3 provides access to instances on OpenStack providers via self-service in its service catalog. While self-service of images is a native feature of Horizon within Red Hat Enterprise Linux OpenStack Platform, the inclusion of self-service via CloudForms helps organizations looking to implement enterprise-class self-service that ties into their existing environments. CloudForms self-service capabilities are integrated with its automation engine, which brings capabilities such as the ability to:

  • Combine multiple instances or combined instances with virtual machines and other atomic services into a single service catalog bundle for ordering
  • Integrate with existing IT Operations Management solutions, such as CMDBs, CMS, monitoring, or eventing tools
  • Enforce quotas, workflow, and approval
  • Provide best fit placement of instances on particular OpenStack providers



CloudForms 3 is a big step forward for enterprises looking to manage their OpenStack private clouds through a cloud management platform that also supports their existing investments in datacenter virtualization and public clouds. If you are attending OpenStack Summit I hope you can join Oleg Barenboim, Senior Director of Software Engineering for CloudForms, and myself as we present on how CloudForms Unifies the management of OpenStack, Datacenter Virtualization, and Public Clouds.

Deploying OpenShift with CloudForms Presentation

Slides from my talk on Deploying OpenShift with CloudForms can be downloaded here.

OpenStack Packstack Installation with External Connectivity

Packstack makes installing OpenStack REALLY easy. By using the --allinone option you can have a working self-contained RDO installation in minutes (and most of those minutes are spent waiting for packages to install). However, the --allinone option really should be renamed --onlywithinone today, because while it makes the installation very simple it doesn’t allow instances spun up on the resulting OpenStack environment to be reachable from external systems. This can be a problem if you are trying to both bring up an OpenStack environment quickly and demonstrate integration with systems outside of OpenStack. With a lot of help and education from Perry Myers and Terry Wilson on Red Hat’s RDO team I was able to make a few modifications that allow a user to run the packstack --allinone installation and have external access to the instances launched on the host. While I’m not sure this is a best practice, here is how it works.

I started with a @base kickstart installation of Red Hat Enterprise Linux 6.4. First, I subscribed the system via subscription manager and subscribed to the rhel server repository. I also installed the latest RDO repository file for Grizzly and then updated the system and installed openvswitch. The update will install a new kernel.

# subscription-manager register
# subscription-manager list --available |egrep -i 'pool|name'
# subscription-manager attach --pool=YOURPOOLIDHERE
# rpm -ivh
# yum -y install openvswitch
# yum -y update

Before I rebooted, I set up a bridge named br-ex by placing the following in /etc/sysconfig/network-scripts/ifcfg-br-ex.
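For an OVS bridge that takes over eth0's DHCP lease, ifcfg-br-ex typically looks something like this (the OVS* option names come from the openvswitch initscripts; treat the exact values as assumptions, not the author's original file):

```
DEVICE=br-ex
DEVICETYPE=ovs
TYPE=OVSBridge
ONBOOT=yes
OVSBOOTPROTO=dhcp
OVSDHCPINTERFACES=eth0
```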


I also changed the setup of the eth0 interface by placing the following in /etc/sysconfig/network-scripts/ifcfg-eth0. This configuration makes eth0 belong to the bridge we previously set up.
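A sketch of an ifcfg-eth0 that makes the NIC an OVS port on br-ex (values are assumptions, not the author's original file):

```
DEVICE=eth0
DEVICETYPE=ovs
TYPE=OVSPort
OVS_BRIDGE=br-ex
ONBOOT=yes
```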


At this point I rebooted the system so the updated kernel could be used. When it comes back up you should have a bridged interface named br-ex which has the IP address that was previously associated with eth0. I had a static DHCP lease for eth0 prior to starting, so even though the interface was set to use DHCP as its bootproto it receives the same address consistently.

Now you need to install packstack.

# yum -y install openstack-packstack

Packstack’s installation accepts an argument named --quantum-l3-ext-bridge:

The name of the bridge that the Quantum L3 agent will
use for external traffic, or ‘provider’ if using
provider networks

We will set this to eth0 so that the eth0 interface is used for external traffic. Remember, eth0 will be a port on br-ex in openvswitch, so it will be able to talk to the outside world through it.

Before we run the packstack installer though, we need to make another change. Packstack’s --allinone installation uses some puppet templates to provide answers to the installation options. It’s possible to override an option if there is a command line switch for it, but packstack doesn’t accept arguments for everything. For example, if you want to change the floating IP range to fall in line with the network range your eth0 interface supports, you’ll need to edit a puppet template by hand.

Edit /usr/lib/python2.6/site-packages/packstack/puppet/modules/openstack/manifests/provision.pp and change $floating_range to a range suitable for the network eth0 is on. The $floating_range variable appears to be used by packstack to assign the floating IP address pool when --allinone is used.

One last modification before we run packstack, and thanks to Terry Wilson for pointing this out: we need to remove a firewall rule added during the packstack run, a NAT rule that will effectively block inbound traffic to a launched instance. Edit /usr/lib/python2.6/site-packages/packstack/puppet/templates/provision.pp and comment out the following lines.

firewall { '000 nat':
  chain    => 'POSTROUTING',
  jump     => 'MASQUERADE',
  source   => $::openstack::provision::floating_range,
  outiface => $::gateway_device,
  table    => 'nat',
  proto    => 'all',
}
The ability to configure these via packstack arguments should eventually make its way into packstack. See this Bugzilla for more information.

That’s it, now you can fire up packstack by running the following command.

packstack --allinone --quantum-l3-ext-bridge=eth0

When it completes it will tell you that you need to reboot for the new kernel to take effect, but you don’t need to, since we already rebooted into the updated kernel after running yum update with the RDO repository in place.

Your openvswitch configuration should look roughly like this when packstack finishes running.

# ovs-vsctl show
    Bridge br-int
        Port "tap46aaff1f-cd"
            tag: 1
            Interface "tap46aaff1f-cd"
                type: internal
        Port br-int
            Interface br-int
                type: internal
        Port "qvod54d32dc-0b"
            tag: 1
            Interface "qvod54d32dc-0b"
        Port "qr-0638766f-76"
            tag: 1
            Interface "qr-0638766f-76"
                type: internal
    Bridge br-ex
        Port br-ex
            Interface br-ex
                type: internal
        Port "qg-3f967843-48"
            Interface "qg-3f967843-48"
                type: internal
        Port "eth0"
            Interface "eth0"
    ovs_version: "1.11.0"

Before we start provisioning instances in Horizon let’s take care of one last step and add two security group rules to allow ssh and icmp to our instances.

# . ~/keystonerc_demo 
# nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
# nova secgroup-add-rule default tcp 22 22 0.0.0.0/0

Now you can log into Horizon with the demo user, whose credentials are stored in /root/keystonerc_demo, and provision an instance. Make sure you specify the private network for this instance. The private network is automatically created for the demo tenant by the packstack --allinone installation. You’ll also notice it uploaded an image named cirros into glance for you. Of course, this assumes you’ve already created a keypair.


Once the instance is launched we will then associate a floating IP address with it.


Now we can ssh to it from outside the OpenStack environment.

$ ssh cirros@
cirros@'s password: 
$ uptime
 00:52:20 up 14 min,  1 users,  load average: 0.00, 0.00, 0.00

Now we can get started with the fun stuff, like provisioning images from CloudForms onto RDO and using Foreman to automatically configure them!

Using the CloudForms Web Services API

The web services API provided by CloudForms Management Engine allows users to integrate external systems with CloudForms. For example, if you wanted an existing change control system to request services from a virtualization provider or public cloud you could call the CloudForms SOAP API to initiate the virtual machine provisioning request method. Keep in mind that automate methods within CloudForms can be used to do just about anything from opening a new incident in a change management system to checking the weather in your favorite city. In this post, however, I’ll provide a simple example of how the Savon SOAP client and Ruby can be used to make a request to CloudForms to launch a virtual machine.

First you’ll need to install a few ruby gems on your system if you don’t already have them. Note that openssl and pp ship with Ruby’s standard library, so they don’t need to be installed separately.

# gem install savon
# gem install httpclient
# gem install httpi
# vi myscript.rb

The first section of the script will specify the interpreter and import the gems we installed via require.


#!/usr/bin/env ruby

require 'savon'
require 'httpi'
require 'httpclient'
require 'openssl'
require 'pp'

The next section will define a ruby module named DCA which contains a class named Worker, a collection of methods and constants. The module provides a namespace so the class name doesn’t clash with an existing class of the same name if you include this code in a larger body of ruby. We will also define two methods. The build_automation_request method will handle executing the request against the CloudForms Management Engine Web Services API. It accepts a hash that it will pass to the web services API. The deploy_vm method will accept some arguments, provide others within its body, and then invoke build_automation_request with the body of the request. This includes things such as the template_name, vlan, vm_name, etc.

module DCA
  class Worker

    def build_automation_request(body_hash)
      # We will populate this with the request
    end

    def deploy_vm(template_name, vlan, ip, subnet, gateway, vm_name, domain_name, memory, cpus, add_disk, owner_email, customization_spec = 'linux')
      # We will build the request here, then pass it to build_automation_request
    end

  end
end

With the structure in place we can add the following to the build_automation_request method. Replace YOURUSER with your username, YOURPASSWORD with your password and CFMEIPADDRESS with the IP address of the CloudForms Management Engine running with the web services role enabled.

    def build_automation_request(body_hash)
      client = Savon.client(basic_auth: ["YOURUSER", "YOURPASSWORD"], ssl_verify_mode: :none, ssl_version: :TLSv1, wsdl: "https://CFMEIPADDRESS/vmdbws/wsdl")
      evm_response = client.call(:evm_provision_request_ex, message: body_hash)
    end
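Hardcoding credentials works for a quick test, but a slightly safer sketch (the CFME_* variable names here are illustrative, not part of the original script) pulls them from environment variables so they never land in the script itself:

```ruby
# Read connection details from the environment rather than hardcoding them.
# CFME_USER, CFME_PASSWORD and CFME_HOST are illustrative names; the fallback
# values mirror the placeholders used in the script above.
cfme_user     = ENV.fetch('CFME_USER')     { 'YOURUSER' }
cfme_password = ENV.fetch('CFME_PASSWORD') { 'YOURPASSWORD' }
cfme_host     = ENV.fetch('CFME_HOST')     { 'CFMEIPADDRESS' }

wsdl_url = "https://#{cfme_host}/vmdbws/wsdl"
```

These values can then be handed to Savon.client in place of the literal strings.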

Next we will populate the deploy_vm method. The contents of the method will build several arrays and then combine them into a hash which will be passed to the build_automation_request method we defined previously.

    def deploy_vm(template_name, vlan, ip, subnet, gateway, vm_name, domain_name, memory, cpus, add_disk, owner_email, customization_spec = 'linux')
      templateFields = []
      templateFields << "name=#{template_name}"
      templateFields << "request_type=template"
      vmFields = []
      vmFields << "vm_name=#{vm_name}"
      vmFields << "number_of_vms=1"
      vmFields << "vm_memory=#{memory}"
      vmFields << "number_of_cpus=#{cpus}"
#      The options below are useful for windows systems
#      options = []
#      options << "sysprep_custom_spec=#{customization_spec}"
#      options << "sysprep_spec_override=true"
#      options << "sysprep_domain_name=#{domain_name}"
      vmFields << "addr_mode=static"
      vmFields << "ip_addr=#{ip}"
      vmFields << "subnet_mask=#{subnet}"
      vmFields << "gateway=#{gateway}"
      vmFields << "vlan=#{vlan}"
      vmFields << "provision_type=PXE"
      requester = []
      #requester << "user_name=#{user_id}"
      requester << "owner_email=#{owner_email}"
      tags = []
      #options << "add_vdisk1=#{add_disk}"
      input = {
          'version'        => '1.1',
          'templateFields' => templateFields.join('|'),
          'vmFields'       => vmFields.join('|'),
          'requester'      => requester.join('|'),
          'tags'           => tags.join('|'),
          #'options'       => options.join('|'),
      }
      pp input
      response = build_automation_request(input)
      pp response
    end
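The join('|') calls are worth highlighting: the web service expects each group of fields as a single pipe-delimited string of key=value pairs, which is why deploy_vm collects them into arrays first. A tiny standalone sketch of the encoding:

```ruby
# Collect key=value pairs, then join them into the pipe-delimited string
# format the CloudForms web service expects.
template_fields = []
template_fields << "name=win2k8tmpl"
template_fields << "request_type=template"

joined = template_fields.join('|')
puts joined  # => name=win2k8tmpl|request_type=template
```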

Finally, we will create a new worker object by instantiating the Worker class, and we will call the deploy_vm method on that object, passing the arguments we wish to use.

w =
r = w.deploy_vm("win2k8tmpl", "VM Network", "", "", "",
                "VMNAME", "", 2048, 2, 15, "", "linux")
puts "Guess what! I built me a vm! #{r}"

That’s it. When this script is executed it should print out the output of your hash along with a bunch of output from the result of the request. If all goes well you should end up with a virtual machine running on your provider!

# chmod 755 myscript.rb
# ./myscript.rb
Guess what! I built me a vm!

Keep in mind that for any serious use you should add more robust error handling, and use something more secure than hardcoded credentials for authentication between remote systems.
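As a sketch of what that error handling might look like (the with_retries helper and its retry policy are illustrative, not part of the original script), the SOAP call could be wrapped so transient failures are retried before giving up:

```ruby
# Illustrative helper: run a block, retrying up to `attempts` times and
# re-raising the last error if every attempt fails.
def with_retries(attempts: 3)
  tries = 0
  begin
    yield
  rescue StandardError => e
    tries += 1
    retry if tries < attempts
    raise e
  end
end

# In the script above this would wrap the Savon call, for example:
#   with_retries { client.call(:evm_provision_request_ex, message: body_hash) }
#
# Simulated here with a block that fails twice before succeeding:
calls = 0
result = with_retries(attempts: 3) do
  calls += 1
  raise 'transient SOAP failure' if calls < 3
  'request submitted'
end
puts result  # => request submitted
```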

You can download the entire script here.

Building the Bridge Between Present and Future IT Architectures

Life isn’t easy for IT organizations today. They find themselves on the receiving end of demands for new capabilities that public cloud providers are delivering at increasing speed. While solutions in the private datacenter are beginning to deliver these same capabilities, IT organizations don’t want to build yet another silo. Red Hat’s Open Hybrid Cloud Architecture is helping IT organizations adopt next generation IT architectures to meet the increasing demand for public cloud capabilities while establishing a common framework for all their IT assets. This approach provides many benefits across all IT architectures. To name a few:

  • Discovery and Reporting: Detailed information about all workloads across all cloud and virtualization providers.
  • Self-Service: A single catalog which could provision services across hybrid and heterogeneous public and private clouds.
  • Best-Fit Placement: Helping identify which platform is best for which workload both at provision and run-time.

The engineers at Red Hat have been hard at work on the next release of CloudForms which is scheduled for General Availability later this year. I’ve been lucky enough to get my hands on a very early preview and wanted to share an update on two enhancements that are relevant to the topic of bridging present and future IT architectures. Before I dive into the enhancements let me get two pieces of background out of the way:

  1. Red Hat believes that the future IT architecture for Infrastructure as a Service (IaaS) is OpenStack. That shouldn’t come as a big surprise given that Red Hat was a major contributor to the OpenStack Grizzly release and has established a community for its distribution called RDO.
  2. There is a big difference between datacenter virtualization and clouds and knowing which workloads should run on which is important. For more information on this you can watch Andy Cathrow’s talk at Red Hat Summit.

Two of the enhancements coming in the next release of CloudForms are the clear distinction between datacenter virtualization and cloud providers and the addition of OpenStack as a supported cloud provider.

By clearly separating datacenter virtualization (or infrastructure providers, as it’s called in the user interface) from cloud providers, CloudForms can manage each appropriately and standardize operational concepts across Red Hat Enterprise Virtualization, VMware vSphere, Amazon EC2, and OpenStack.

Cloud Providers


Infrastructure (Datacenter Virtualization) Providers


Also, as you noticed in the previous screens CloudForms will support OpenStack as a cloud provider. This is critical to snapping in another piece of the puzzle of Red Hat’s Open Hybrid Cloud Architecture and providing all the operational management capabilities to OpenStack that IT organizations need.

OpenStack Cloud Provider


These two enhancements will be critical for organizations who want a single pane of glass to operationally manage their Open Hybrid Cloud.

Single Pane Operational Management of RHEV, vSphere, AWS EC2, and OpenStack


Stay tuned for more updates regarding the next release of CloudForms!

