Using a Remote ImageFactory with CloudForms

In Part 2 of Hands on with ManageIQ EVM I explored how ManageIQ and CloudForms could potentially be integrated in the future. One of my suggestions was to allow imagefactory to run within the cloud resource provider (vSphere, RHEV, OpenStack, etc.). This would simplify the architecture and reduce the infrastructure required, since Cloud Engine would no longer need to be hosted on physical hardware. Requiring less infrastructure is important for a number of scenarios beyond just the workflow I explained in the earlier post. One scenario in particular is when you want to provide demonstration environments of CloudForms to a large group of people – for example, while training students on CloudForms.

Removing the physical hardware requirement for CloudForms Cloud Engine can be done in two ways. The first is by using nested virtualization. This is not yet available in Red Hat Enterprise Linux, but it is available upstream in Fedora. The second is by running imagefactory remotely on a physical system and the rest of the CloudForms Cloud Engine components within a virtual machine. In this post I’ll explore using a physical system to host imagefactory and the modifications necessary to a CloudForms Cloud Engine environment to make it happen.
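
If you want to experiment with the nested virtualization route on Fedora, checking and enabling it is straightforward. This is a minimal sketch for an Intel CPU (substitute kvm_amd for kvm_intel on AMD hardware, and treat fedora-host as a placeholder); it isn’t specific to CloudForms:

fedora-host# cat /sys/module/kvm_intel/parameters/nested          # "Y" or "1" means nested guests are allowed
fedora-host# echo "options kvm_intel nested=1" > /etc/modprobe.d/kvm-nested.conf
fedora-host# modprobe -r kvm_intel && modprobe kvm_intel          # reload the module (no VMs can be running)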

How It Works

The diagram below illustrates the decoupling of imagefactory from conductor. Keep in mind, this is using CloudForms 1.1 on Red Hat Enterprise Linux 6.3.

Using a remote imagefactory with CloudForms

1. The student executes a build action in their Cloud Engine. Each student has their own Cloud Engine, which runs on a virtual machine.

2. Conductor communicates with imagefactory on the physical cloud engine and instructs it to build the image. A single physical host acts as a shared imagefactory for all of the students’ virtual machines hosting Cloud Engine.

3. Imagefactory builds the image based on the content from virtual machines hosting CloudForms Cloud Engine.

4. Imagefactory stores the built images in the image warehouse (IWHD).

5. When the student wants to push that image to the provider, in this case RHEV, they execute the push action in Cloud Engine conductor.

6. Conductor communicates with imagefactory on the physical cloud engine and instructs it to push the image to the RHEV provider.

7. Imagefactory pulls the image from the image warehouse (IWHD).

8. Imagefactory pushes the image to the provider.

9.  The student launches an application blueprint which contains the image.

10. Conductor communicates with deltacloud (dcloud) requesting that it launch the image on the provider.

11. Deltacloud (dcloud) communicates with the provider requesting that a virtual machine be created based on the template.

Configuration

Here are the steps you can follow to enable a single virtual machine hosting Cloud Engine to build images using a physical system’s imagefactory. These steps can be repeated and automated to stand up a large number of virtual Cloud Engines that all use a single imagefactory on a physical host. I don’t see any reason why you couldn’t use the RHEL host that acts as a hypervisor for RHEV, or the RHEL host that serves the export storage domain, for this purpose. In fact, that might improve performance. Anyway, here are the details.

1. Install CloudForms Cloud Engine on both the virtual-cloud-engine and physical-cloud-engine hosts.

2. Configure Cloud Engine on both the virtual-cloud-engine and the physical-cloud-engine.

virtual-cloud-engine# aeolus-configure
physical-cloud-engine# aeolus-configure

3. On the virtual-cloud-engine, configure RHEV as a provider.

virtual-cloud-engine# aeolus-configure -p rhevm
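
Keep in mind that aeolus-configure -p rhevm pulls the RHEV connection details (API URL, credentials, NFS export domain) from a node file rather than prompting for them. On CloudForms 1.x that file is, if I remember correctly, /etc/aeolus-configure/nodes/rhevm_configure, so double-check the path on your installation and fill it in before running the command above:

virtual-cloud-engine# vi /etc/aeolus-configure/nodes/rhevm_configure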

4. Copy the oauth information from the physical-cloud-engine to the virtual-cloud-engine.

virtual-cloud-engine# scp root@physical-cloud-engine:/etc/aeolus-conductor/oauth.json /etc/aeolus-conductor/oauth.json

5. Copy the settings for conductor from the physical-cloud-engine to the virtual-cloud-engine.

virtual-cloud-engine# scp root@physical-cloud-engine:/etc/aeolus-conductor/settings.yml /etc/aeolus-conductor/settings.yml

6.  Replace localhost with the IP address of physical-cloud-engine in the iwhd and imagefactory stanzas of /etc/aeolus-conductor/settings.yml on the virtual-cloud-engine.
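
After making the edit, a quick grep is an easy way to confirm that both the iwhd and imagefactory URLs now point at physical-cloud-engine rather than localhost. The exact stanza and key names vary between releases, so treat this purely as a sanity check:

virtual-cloud-engine# grep -A 3 -iE 'iwhd|factory' /etc/aeolus-conductor/settings.yml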

7. Copy the rhevm.json file from the virtual-cloud-engine to the physical-cloud-engine.

physical-cloud-engine# scp root@virtual-cloud-engine:/etc/imagefactory/rhevm.json /etc/imagefactory/rhevm.json

8. Manually mount the RHEVM export domain listed in the rhevm.json file on the physical-cloud-engine.

physical-cloud-engine# mount nfs.server:/rhev/export /mnt/rhevm-nfs
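
If the mount point doesn’t already exist on the physical host, create it first; showmount is also handy for confirming that the export domain is actually visible from that host. The fstab entry is optional but keeps the mount in place across reboots. In all three commands, nfs.server:/rhev/export should match whatever your rhevm.json references:

physical-cloud-engine# mkdir -p /mnt/rhevm-nfs
physical-cloud-engine# showmount -e nfs.server
physical-cloud-engine# echo "nfs.server:/rhev/export /mnt/rhevm-nfs nfs defaults 0 0" >> /etc/fstab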

9. After this is done, restart all of the aeolus services on both physical-cloud-engine and virtual-cloud-engine to make sure they are using the right configurations.

physical-cloud-engine# aeolus-restart-services
virtual-cloud-engine# aeolus-restart-services

Once this is complete, you should be able to build images on the remote imagefactory instance.
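
As a quick sanity check that builds really are landing on the shared warehouse, you can list the contents of iwhd’s images bucket directly on the physical host. This assumes iwhd is listening on its default port of 9090 and that the bucket name matches a stock aeolus-configure setup; the call returns a plain XML listing of the stored image objects:

physical-cloud-engine# curl http://localhost:9090/images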

Multiple Cloud Engines sharing a single imagefactory

It should be noted that running a single imagefactory to support multiple Cloud Engines is not officially supported, and is probably not tested. In my experience, however, it seems to work. I hope to have time to post more details in the future on the performance of a single imagefactory shared between multiple Cloud Engines performing concurrent build and push operations.

2 thoughts on “Using a Remote ImageFactory with CloudForms”

  1. This looks really good. On running imagefactory on the RHEV hypervisor host, though: I may be out of date, but I remember there being an issue with the RHEV hypervisor and Cloud Engine coexisting in an all-in-one configuration because of conflicting use of KVM. Perhaps that necessitates an additional physical system in the RHEV provider for imagefactory? Even if so, it would still look like a much more efficient architecture.
