Why Containers for OpenStack Services?

An interesting problem to solve in OpenStack is the management of OpenStack's own services. Whether at provisioning time or during an update, the OpenStack services may try to listen on the same ports and require changes to common configuration files.

Because of this, the services can conflict with one another if deployed on the same system. For example, the network service may attempt to listen on the same port as the identity service, or the compute service may edit a file that the network service expects to have different values. How do you deal with this problem, particularly when each OpenStack project tends to operate as an independent project? It doesn't seem likely that it would be easy to drive consensus between the various projects on which ports to listen on and which configuration files to modify, particularly with the speed at which OpenStack is moving.

For example, let's suppose that you want to deploy the network service. Using a build-based (sometimes referred to as package-based) deployment method, you might perform something similar to the following.

[Images: containersFTW02a, containersFTW02b, containersFTW02c]
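
Since those screenshots are not reproduced here, here is a rough sketch of the sort of build-based workflow being described: install the network service's packages on a host that already runs other OpenStack services, hand-edit a configuration file those services also touch, and then start the service and hope nothing else on the host has already claimed its port. The package, file, and service names below are illustrative assumptions, not the exact commands from the screenshots.

# yum install -y openstack-neutron
# vi /etc/neutron/neutron.conf
# service neutron-server start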

The result is a non-working network service and the potential for a non-working identity service if it is ever restarted. The same problem exists in image-based deployment; it is simply found earlier in the workflow, during the image generation phase. After all, the images that are being deployed need to be generated in the first place. The fundamental problem is that understanding which services are deployed on a particular host, resolving their dependencies, and making the necessary changes is not something the package or image generation tools understand.

One possible solution is to place each service on its own unique piece of hardware. This solves the problem of conflicts between the services' configurations, but it is not optimal: the overhead of a single OpenStack service does not justify a dedicated physical system until a particular scale is reached. Even then, the need to locate services close to compute nodes makes giving each service its own dedicated piece of hardware impractical.

Another possible solution is to build the logic and understanding of the OpenStack services and their configuration into the deployment tools. While this sounds like a small task, it is not. The number of possible combinations of services on a single host does not lend itself to easily creating, let alone maintaining, this logic.

[Image: OpenStack Architecture]

Yet another possible solution is to utilize virtual machines. This solves the hardware problem and provides isolation, but it has some disadvantages: virtual machines are heavyweight. Between building new virtual machine images for every simple update, installing the configuration infrastructure necessary to update running virtual machines, the overhead of start/stop operations, and the less rich interfaces for metadata, virtual machines are not ideal.

[Images: containersFTW04a, containersFTW04b, containersFTW04d]

It may be possible to use Linux containers to solve this problem. Linux containers offer lightweight virtualization that provides (among other things) process and network isolation. The isolation provided by containers means that build-based or image-based deployment tools don't need to maintain the logic of how the services on a host can be deployed or updated without affecting one another.

[Image: containersFTW06a]
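
As a very rough sketch of the idea (the image names below are hypothetical, not published images), each service runs in its own container with only the ports it needs exposed, so the services never see each other's configuration files, processes, or sockets:

# docker run -d --name keystone -p 5000:5000 -p 35357:35357 example/openstack-keystone
# docker run -d --name neutron -p 9696:9696 example/openstack-neutron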

I hope to provide more information soon on how projects like systemd might provide a mechanism for solving dependencies between OpenStack services running in containers, maybe even using Docker, and on how ostree might lend a hand with some of the troubles of package management too.

Building Docker Images on Fedora

This page captures my effort to learn about docker images by building a docker image for ovirt-engine from scratch using Fedora 19. At this point I get stuck after launching the image with ovirt installed in it. I'll be troubleshooting and seeing how I can best package ovirt-engine into a single image or break it into multiple pieces. Who knows, maybe I'll even try to make it communicate over etcd?

I was able to create a new base image, publish it to a private docker registry, then write a Dockerfile to build a layered image for ovirt-engine, the open source virtualization management platform. I used Marek Goldmann's great blog as a reference and leveraged the work of Matt Miller too.

Set Up Your System

On a Fedora 19 system install the necessary packages.

Install docker-io and docker-registry. Docker automates the deployment of containerized applications, while docker-registry provides the registry server used for sharing docker images.
# yum install -y docker-io docker-registry --enablerepo=updates-testing

Install appliance-tools and libguestfs-tools. appliance-creator (from appliance-tools) is one way to create the virtual machine image that we will then package up into a docker image, and libguestfs-tools provides virt-tar-out, which we will use to extract its contents.
# yum install -y appliance-tools libguestfs-tools

Enable and start the docker and docker-registry services.
# systemctl enable docker
# systemctl start docker
# systemctl enable docker-registry
# systemctl start docker-registry

If you are running in a VM and have limited space in /tmp (on Fedora 19, /tmp is a RAM-backed tmpfs by default), you may also want to mask the tmpfs mount so that /tmp lives on disk instead, since the steps below use /tmp for scratch space.
# systemctl mask tmp.mount; reboot

Build a Base Image

In order to build a base image you need to create a virtual machine image, then pack it up into an archive, and import it into docker.

You can use your favorite kickstart file for your base docker image. You will want the kickstart to install the smallest possible footprint so your base image stays small. The following example kickstart is a good starting point.
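
The kickstart I actually used (container-small-19.ks) is mentioned below; as a rough illustration only, a minimal kickstart might look something like this, written as a heredoc so it can be pasted directly. The package set, partition size, and password are assumptions you will want to adjust.

# cat > mykickstart.ks <<'EOF'
lang en_US.UTF-8
keyboard us
timezone --utc Etc/UTC
selinux --permissive
network --bootproto=dhcp
bootloader --timeout=1
zerombr
clearpart --all
part / --size 2048 --fstype ext4
# throwaway root password; change or lock it for anything real
rootpw changeme

%packages --nobase --excludedocs
@core
bash
yum
%end
EOF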

appliance-creator can be used to automatically install a virtual machine using the kickstart file.
# appliance-creator -c mykickstart.ks -d -v -t /tmp \
-o /tmp/myimage --name "fedora-image" --release 19 \
--format=qcow2;

virt-tar-out creates a tar file from a virtual machine image.
# virt-tar-out -a /tmp/myimage/fedora-image/fedora-image-sda.qcow2 / - |
docker import - jlabocki f19

You can download the buildcontainers.sh script and container-small-19.ks kickstart, which will help you automate building a basic container image.

If you have issues creating a container you can continue on by pulling an existing image, like Matt’s fedora image, from the Docker index.
# docker pull mattdm/fedora

Publish the New Image to a Docker Registry

Docker provides a registry, a place to store your docker images (a web server that supports multiple storage back-ends and has hooks for authentication sources). The company behind docker also provides an index, which is the docker-registry combined with a web front end and a collaborative environment.
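
Once the docker-registry service is running, you can sanity-check that it answers on its default port (5000) before pushing to it; this assumes the stock Fedora docker-registry configuration:

# curl http://localhost:5000/v1/_ping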

Now that we have a docker image we can upload it to our private registry. First list the images and tag the one you want to push, then push it.
# docker images
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
<none> latest e4a4f6d69590 29 hours ago 131.2 MB

# docker tag e4a4f6d69590 localhost.localdomain:5000/fedora-small
# docker push localhost.localdomain:5000/fedora-small

Create a New Dockerfile

Now let's try to create a new image based on the base image. We will create a new directory and create a Dockerfile.
# mkdir ovirt; cd ovirt; vi Dockerfile

A Dockerfile accepts a number of instructions. We will use only a few in ours.

# Base on the fedora-small image in our private registry
FROM localhost.localdomain:5000/fedora-small

# Install the JBoss Application Server 7
#RUN yum install -y jboss-as
RUN yum localinstall -y http://ovirt.org/releases/ovirt-release-fedora.noarch.rpm
RUN yum install -y ovirt-engine
RUN yum install -y ovirt-engine-setup-plugin-allinone
RUN yum install -y wget
#RUN wget http://10.16.132.12/pub/answerfile -O /root/answerfile

# Run engine-setup after the container boots (left commented out for now)
# ENTRYPOINT /usr/bin/engine-setup --config=/root/answerfile

The FROM line indicates what base image should be used.
The RUN lines will be executed and committed to the image.
The ENTRYPOINT line specifies what should be executed when the image is launched. For now I'll leave the ENTRYPOINT commented out. We'll just launch a shell and try to execute the engine-setup command manually before using an answerfile to install it automatically in a future image.

Now we will build our image.
# docker build .

Now we have a new image.
# docker images

We can tag this new image.
# docker tag 234ad73r7df localhost.localdomain:5000/ovirt-fedora-small

And we can push it to our registry as a new image.
# docker push localhost.localdomain:5000/ovirt-fedora-small

On another Fedora 19 system with docker installed (or on the same one), you can pull the docker image down and run it.
# docker pull youripaddress:5000/ovirt-fedora-small
....
# docker run -i -t localhost.localdomain:5000/ovirt-fedora-small /bin/bash

You can run `docker help run` to understand the options that we just gave to run the image. You can also inspect the images and running containers to get lots of interesting information about them (from outside the container, not from within it). `docker ps` will list the running containers (add -a to include stopped ones), while `docker images` will list the images you have.
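
For example, to dump the metadata docker keeps about a container (the ID below is a placeholder for the one shown by `docker ps`):

# docker inspect <container_id>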

From within the container, let’s try to run the engine-setup command and see how far we get …
bash-4.2# engine-setup
[ INFO ] Stage: Initializing
[ INFO ] Stage: Environment setup
Configuration files: ['/etc/ovirt-engine-setup.conf.d/10-packaging-aio.conf', '/etc/ovirt-engine-setup.conf.d/10-packaging.conf']
Log file: /var/log/ovirt-engine/setup/ovirt-engine-setup-20131219122235.log
Version: otopi-1.1.2 (otopi-1.1.2-1.fc19)
[ INFO ] Stage: Environment packages setup
[ INFO ] Stage: Programs detection
[ INFO ] Stage: Environment setup
[ ERROR ] Failed to execute stage 'Environment setup': Command 'initctl' is required but missing
[ INFO ] Stage: Clean up
Log file is located at /var/log/ovirt-engine/setup/ovirt-engine-setup-20131219122235.log
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination
[ ERROR ] Execution of setup failed

It looks like the ovirt-engine is looking for initctl, or at least that is the error it is throwing. Let’s see if we can fool the engine-setup command into thinking it exists.

# ln -s /usr/sbin/service /usr/bin/initctl

Re-running engine-setup
bash-4.2# engine-setup
[ INFO ] Stage: Initializing
[ INFO ] Stage: Environment setup
Configuration files: ['/etc/ovirt-engine-setup.conf.d/10-packaging-aio.conf', '/etc/ovirt-engine-setup.conf.d/10-packaging.conf']
Log file: /var/log/ovirt-engine/setup/ovirt-engine-setup-20131219122459.log
Version: otopi-1.1.2 (otopi-1.1.2-1.fc19)
[ INFO ] Stage: Environment packages setup
[ INFO ] Stage: Programs detection
[ INFO ] Stage: Environment setup
[ INFO ] Stage: Environment customization
Disabling all-in-one plugin because hardware supporting virtualization could not be detected. Do you want to continue setup without AIO plugin? (Yes, No) [No]: Yes

--== PACKAGES ==--

[ INFO ] Checking for product updates...
[ INFO ] No product updates found

--== ALL IN ONE CONFIGURATION ==--

--== NETWORK CONFIGURATION ==--

Host fully qualified DNS name of this server [502fbe26fc3c]:
[WARNING] Host name 502fbe26fc3c has no domain suffix
[WARNING] Failed to resolve 502fbe26fc3c using DNS, it can be resolved only locally

--== DATABASE CONFIGURATION ==--

Where is the database located? (Local, Remote) [Local]:
Setup can configure the local postgresql server automatically for the engine to run. This may conflict with existing applications.
Would you like Setup to automatically configure postgresql, or prefer to perform that manually? (Automatic, Manual) [Automatic]:

--== OVIRT ENGINE CONFIGURATION ==--

Engine admin password:
Confirm engine admin password:
[ ERROR ] Failed to execute stage 'Environment customization': [Errno 2] No such file or directory: '/usr/share/cracklib/pw_dict.pwd'
[ INFO ] Stage: Clean up
Log file is located at /var/log/ovirt-engine/setup/ovirt-engine-setup-20131219122459.log
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination
[ ERROR ] Execution of setup failed

The setup log (ovirt-engine-setup-20131219122459.log) shows that setup could not find the cracklib password dictionary. It turns out the dictionary file wasn't really missing, it was just compressed. Let's uncompress it and see if we can re-run engine-setup.
bash-4.2# gzip -d /usr/share/cracklib/pw_dict.pwd.gz
bash-4.2# engine-setup
[ INFO ] Stage: Initializing
[ INFO ] Stage: Environment setup
Configuration files: ['/etc/ovirt-engine-setup.conf.d/10-packaging-aio.conf', '/etc/ovirt-engine-setup.conf.d/10-packaging.conf']
Log file: /var/log/ovirt-engine/setup/ovirt-engine-setup-20131219125142.log
Version: otopi-1.1.2 (otopi-1.1.2-1.fc19)
[ INFO ] Stage: Environment packages setup
[ INFO ] Stage: Programs detection
[ INFO ] Stage: Environment setup
[ INFO ] Stage: Environment customization
Disabling all-in-one plugin because hardware supporting virtualization could not be detected. Do you want to continue setup without AIO plugin? (Yes, No) [No]: Yes

--== PACKAGES ==--

[ INFO ] Checking for product updates...
[ INFO ] No product updates found

--== ALL IN ONE CONFIGURATION ==--

--== NETWORK CONFIGURATION ==--

Host fully qualified DNS name of this server [502fbe26fc3c]:
[WARNING] Host name 502fbe26fc3c has no domain suffix
[WARNING] Failed to resolve 502fbe26fc3c using DNS, it can be resolved only locally

--== DATABASE CONFIGURATION ==--

Where is the database located? (Local, Remote) [Local]:
Setup can configure the local postgresql server automatically for the engine to run. This may conflict with existing applications.
Would you like Setup to automatically configure postgresql, or prefer to perform that manually? (Automatic, Manual) [Automatic]:

--== OVIRT ENGINE CONFIGURATION ==--

Engine admin password:
Confirm engine admin password:
[WARNING] Password is weak: it is based on a dictionary word
Use weak password? (Yes, No) [No]: Yes
Application mode (Both, Virt, Gluster) [Both]:
Default storage type: (NFS, FC, ISCSI, POSIXFS) [NFS]:

--== PKI CONFIGURATION ==--

Organization name for certificate [Test]:

--== APACHE CONFIGURATION ==--

Setup can configure the default page of the web server to present the application home page. This may conflict with existing applications.
Do you wish to set the application as the default page of the web server? (Yes, No) [Yes]:
Setup can configure apache to use SSL using a certificate issued from the internal CA.
Do you wish Setup to configure that, or prefer to perform that manually? (Automatic, Manual) [Automatic]:

--== SYSTEM CONFIGURATION ==--

--== END OF CONFIGURATION ==--

[ INFO ] Stage: Setup validation
[ ERROR ] Failed to execute stage 'Setup validation': Database configuration was requested, however, postgresql service was not found. This may happen because postgresql database is not installed on system.
[ INFO ] Stage: Clean up
Log file is located at /var/log/ovirt-engine/setup/ovirt-engine-setup-20131219125142.log
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination
[ ERROR ] Execution of setup failed

Conclusion

At this point the engine-setup command is not able to complete successfully because of a dbus error when trying to initialize postgresql-server. I'll continue to work on this to see if I can make progress in packaging ovirt-engine into a docker image.

OpenStack Summit Hong Kong Presentation and Demonstrations

Oleg and I will be presenting these slides and videos demonstrating CloudForms OpenStack support today at the Hong Kong Summit:

  • Adding OpenStack as a provider (OGG, MOV)
  • Reporting on OpenStack (OGG, MOV)
  • Chargeback for OpenStack (OGG, MOV)
  • Self-Service of OpenStack Instances (OGG, MOV)

Setting up RHELOSP and Ceilometer for use with CloudForms 3

This assumes a RHEL 6.4 @base installation of Red Hat Enterprise Linux OpenStack Platform (RHELOSP) and registration to a satellite that has access to both the RHELOSP channels and RHEL Server Optional. Much of the Ceilometer installation instructions came from this Fedora QA test case, but I made a few changes and added a few more details.

Set up a RHELOSP system and configure Ceilometer.


# sudo rhn-channel --add -c rhel-x86_64-server-6-ost-3 -c rhel-x86_64-server-optional-6
# sudo yum update -y
# sudo reboot


sudo yum install '*ceilometer*'

The MongoDB store must also be installed and started:


sudo yum install mongodb-server
sudo sed -i '/--smallfiles/!s/OPTIONS=\"/OPTIONS=\"--smallfiles /' /etc/sysconfig/mongod
sudo service mongod start

Create the appropriate users and roles:


SERVICE_TENANT=$(keystone tenant-list | grep services | awk '{print $2}')
ADMIN_ROLE=$(keystone role-list | grep ' admin ' | awk '{print $2}')
SERVICE_PASSWORD=servicepass
CEILOMETER_USER=$(keystone user-create --name=ceilometer \
--pass="$SERVICE_PASSWORD" \
--tenant_id $SERVICE_TENANT \
--email=ceilometer@example.com | awk '/ id / {print $4}')
RESELLER_ROLE=$(keystone role-create --name=ResellerAdmin | awk '/ id / {print $4}')
ADMIN_ROLE=$(keystone role-list | awk '/ admin / {print $2}')
for role in $RESELLER_ROLE $ADMIN_ROLE ; do
keystone user-role-add --tenant_id $SERVICE_TENANT \
--user_id $CEILOMETER_USER --role_id $role
done

Set the authtoken config appropriately in the ceilometer config file:

sudo openstack-config --set /etc/ceilometer/ceilometer.conf keystone_authtoken auth_host 127.0.0.1
sudo openstack-config --set /etc/ceilometer/ceilometer.conf keystone_authtoken auth_port 35357
sudo openstack-config --set /etc/ceilometer/ceilometer.conf keystone_authtoken auth_protocol http
sudo openstack-config --set /etc/ceilometer/ceilometer.conf keystone_authtoken admin_tenant_name services
sudo openstack-config --set /etc/ceilometer/ceilometer.conf keystone_authtoken admin_user ceilometer
sudo openstack-config --set /etc/ceilometer/ceilometer.conf keystone_authtoken admin_password $SERVICE_PASSWORD

Set the user credentials config appropriately in the ceilometer config file:


sudo openstack-config --set /etc/ceilometer/ceilometer.conf DEFAULT os_auth_url http://127.0.0.1:35357/v2.0
sudo openstack-config --set /etc/ceilometer/ceilometer.conf DEFAULT os_tenant_name services
sudo openstack-config --set /etc/ceilometer/ceilometer.conf DEFAULT os_password $SERVICE_PASSWORD
sudo openstack-config --set /etc/ceilometer/ceilometer.conf DEFAULT os_username ceilometer

Then start the services:


for svc in compute central collector api ; do
sudo service openstack-ceilometer-$svc start
done

Finally, register an appropriate endpoint with the service catalog. Be sure to replace $EXTERNALIFACE with the IP address of your external interface.
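
For example, you could set it in the shell before running the commands below (the address here is just a placeholder):

EXTERNALIFACE=192.0.2.10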


keystone service-create --name=ceilometer \
--type=metering --description="Ceilometer Service"
CEILOMETER_SERVICE=$(keystone service-list | awk '/ceilometer/ {print $2}')
keystone endpoint-create \
--region RegionOne \
--service_id $CEILOMETER_SERVICE \
--publicurl "http://$EXTERNALIFACE:8777/" \
--adminurl "http://$EXTERNALIFACE:8777/" \
--internalurl "http://localhost:8777/"


# sudo iptables -A INPUT -p tcp -m multiport --dports 8777 -m comment --comment "001 ceilometer incoming" -j ACCEPT
# sudo service iptables save


# openstack-status
# for svc in compute central collector api ; do
sudo service openstack-ceilometer-$svc status
done

At this point you can verify ceilometer is working correctly by authenticating as a user that has instances running (such as admin).


# . ~/keystonerc_admin

Then list the samples for the cpu meter. Below I pipe the output to wc -l and just check that the count changes every few minutes, depending on the interval specified in /etc/ceilometer/pipeline.yaml (600 seconds by default).


ceilometer sample-list -m cpu |wc -l
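
If you don't want to wait ten minutes between samples while testing, one option (a sketch that assumes the default pipeline.yaml still contains "interval: 600") is to shorten the interval and restart the ceilometer services:

sudo sed -i 's/interval: 600/interval: 60/' /etc/ceilometer/pipeline.yaml
for svc in compute central collector api ; do
sudo service openstack-ceilometer-$svc restart
done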

Add the provider to CloudForms Management Engine and you will begin seeing capacity and utilization data for your instances populate within a few minutes.

OpenStack Support Included in Arrival of CloudForms 3

CloudForms 3 has arrived! There are plenty of new features, including deeper integration with Amazon Web Services EC2 and enhanced service catalog definitions. Along with those, one major new capability is support for OpenStack as a cloud provider. This is a big step forward in bringing the same cloud management capabilities users have come to expect from CloudForms across VMware vSphere, AWS EC2, and Red Hat Enterprise Virtualization to OpenStack. Before diving directly into the capabilities CloudForms provides for OpenStack providers, it's important to know that Red Hat is working on enabling OpenStack for enterprises in a number of ways. Here are three key areas:

Enabling Red Hat Enterprise Linux to be the most stable, secure, and best-performing platform for OpenStack-powered clouds.
This is being accomplished with Red Hat Enterprise Linux OpenStack Platform, which combines a stable, reliable, and secure base with the hardware and application support needed to run in demanding OpenStack environments.

Enabling instrumentation and APIs within OpenStack.
This occurs upstream within the OpenStack project itself. Red Hat works with the community on projects such as TripleO, an installation and operations tool for OpenStack. It has also led by initiating Tuskar – a stateful API and UI for managing the deployment of OpenStack, which is now a part of the TripleO project.

Supporting OpenStack within CloudForms, Red Hat's Hybrid Cloud Management Platform.
Most IT organizations are already virtualizing and building cloud-like capabilities on top of datacenter virtualization (self-service, chargeback, etc.). These organizations recognize that building a private cloud using OpenStack will provide new advantages such as reducing costs, increasing scale, and fundamentally changing the way developers and operations teams work together. However, IT organizations don't want to build yet another silo. They'd like to solve the fundamental problem of IT complexity while simultaneously building their next-generation IT architecture. CloudForms allows organizations to operationally manage their existing platforms alongside their next-generation IT architectures, including OpenStack.

With the OpenStack management background out of the way, let's look at some highlights of what CloudForms 3 brings to OpenStack management in more detail.

Manage New and Existing OpenStack Clouds

CloudForms 3 allows users to manage new and existing OpenStack environments. As I mentioned in an earlier post, infrastructure providers such as VMware and Red Hat Enterprise Virtualization (RHEV) have been separated from cloud providers such as Amazon Web Services and OpenStack within the user interface. Within the Cloud Providers screen it's possible to add a new cloud provider.

[Image: addprovider001]

After providing the credentials of an OpenStack keystone user, CloudForms 3 will discover the Availability Zones, Flavors, Security Groups, Instances, and Images associated with that OpenStack user.

[Image: addprovider002]

Each of these discovered properties of the OpenStack provider can be inspected further. With instances in particular, the CloudForms user can begin viewing in-depth information about the instances running on top of OpenStack.

[Image: addprovider002]

Users can dive into capacity and utilization data for their OpenStack instances.

[Image: addprovider002]

Since CloudForms is also pulling events from the OpenStack message bus, it is possible to correlate performance information on instances with events that are taking place.

[Image: addprovider002]

All of this performance and utilization data is also available for reporting purposes in the CloudForms reporting engine.

Chargeback for Workloads on OpenStack

CloudForms 3 adds OpenStack to a growing list of providers for which chargeback reports can be centrally managed. Using the rate table and tagging functions that already exist in CloudForms, users can create rate tables and assign them to their OpenStack environments.

[Image: chargeback002]

The tagging system continues to provide a flexible and dynamic approach to chargeback, which is becoming even more critical as IT organizations build more dynamic platforms with higher rates of change. Chargeback reports can be limited to only show instances or can be combined with virtual machine chargeback.

[Image: chargeback002]

Provision Workloads via Self-Service Catalogs to OpenStack Clouds

Finally, CloudForms 3 provides access to instances on OpenStack providers via self-service in its service catalog. While self-service of images is a native feature of Horizon within Red Hat Enterprise Linux OpenStack Platform, the inclusion of self-service via CloudForms helps organizations looking to implement enterprise-class self-service that ties into their existing environments. CloudForms self-service capabilities are integrated with its automation engine, which brings capabilities such as the ability to:

  • Combine multiple instances, or combine instances with virtual machines and other atomic services, into a single service catalog bundle for ordering
  • Integrate with existing IT Operations Management solutions, such as CMDBs, CMS, monitoring, or eventing tools
  • Enforce quotas, workflow, and approval
  • Provide best fit placement of instances on particular OpenStack providers

[Images: selfservice001, selfservice002]

CloudForms 3 is a big step forward for enterprises looking to manage their OpenStack private clouds through a cloud management platform that also supports their existing investments in datacenter virtualization and public clouds. If you are attending OpenStack Summit, I hope you can join Oleg Barenboim, Senior Director of Software Engineering for CloudForms, and me as we present on how CloudForms unifies the management of OpenStack, datacenter virtualization, and public clouds.

Deploying OpenShift with CloudForms Presentation

Slides from my talk on Deploying OpenShift with CloudForms can be downloaded here.
