
Hybrid Service Models: IaaS and PaaS Self-Service Converge

More and more organizations are beginning to embrace both Infrastructure as a Service (IaaS) and Platform as a Service (PaaS). These organizations have already begun asking why PaaS and IaaS must be managed through different frameworks. It only seems natural that IT organizations' customers should be able to select both IaaS and PaaS elements during their self-service workflow. Likewise, operations teams within IT organizations prefer to apply the same policy, control, and automation across both IaaS and PaaS elements. In doing so, operations teams could optimize workload placement both inside and outside their datacenter and reduce duplication of effort. Customers aren't the only ones coming to this realization; analysts have also been talking about the convergence of IaaS and PaaS as a natural evolutionary step in cloud computing.

Converged IaaS and PaaS

This convergence of IaaS and PaaS is something I referred to as a Hybrid Service Model in a previous post, but you may often hear it referred to as Converged IaaS and PaaS. An IT organization that does not embrace the convergence of IaaS and PaaS faces many drawbacks. Some of the more notable ones include the following.

  • Developers: Slower delivery of services
    • Developers accessing two self-service portals that have no knowledge of each other's capabilities leads to slower development and a greater risk of human error, because workload provisioning and management are less automated.
  • Operations: Less efficient use of resources
    • Operations teams managing IaaS and PaaS with two separate management facilities will be unable to maximize resource utilization.
  • Management: Loss of business value
    • IT managers will be unable to capitalize on efficiencies without an understanding of both the IaaS and PaaS models.

For these reasons and many more, it's imperative that organizations make decisions today that will lead them to the realization of a Hybrid Service Model. Two approaches to realizing a Hybrid Service Model are emerging in the industry. The first is to build a completely closed or semi-open solution; a good example would be a vendor offering a PaaS as long as it runs on top of a single virtualization provider (conveniently sold by them). The second is one in which a technology company follows the tenets of an Open Hybrid Cloud to provide a fully open solution for enabling a Hybrid Service Model. I won't go into all the reasons the second approach is better (you can read more about that here and here), but I will mention that Red Hat is committed to the Open Hybrid Cloud approach to enabling a Hybrid Service Model.

With all the background information out of the way, I'd like to show you a glimpse of what will be possible due to the Open Hybrid Cloud approach at Red Hat. Red Hat is building the foundation to offer customers Hybrid Service Models alongside Hybrid Deployment Scenarios. This is possible for many reasons, but in this scenario it is primarily because of the open APIs available in OpenShift, Red Hat's PaaS, and because of the extensibility of CloudForms, Red Hat's hybrid cloud management solution. The next release of CloudForms will include a Management Engine component, based on the acquisition of ManageIQ EVM that occurred in December. Using the CloudForms Management Engine, it is possible to provide self-service of applications in a PaaS along with self-service of infrastructure in IaaS from a single catalog. Here is what a possible workflow would look like.

Higher-resolution viewing in QuickTime format is available here.

Self-Service OpenShift Enterprise Deployments with ManageIQ ECM

In the previous post I examined how Red Hat Network (RHN) Satellite could be integrated with ManageIQ Enterprise Cloud Management (ECM). With this integration in place, Satellite could provide ECM with the content required to install an operating system into a virtual machine and close the loop on ongoing systems management. This was just a first look, and there is a lot of work to be done to enable discovery of RHN Satellite and out-of-the-box best-practice automation via ECM. That said, the combination of ECM and RHN Satellite provides a solid foundation for proceeding to use cases higher in the stack.

With this in mind, I decided to attempt automating a self-service deployment of OpenShift using ManageIQ ECM, RHN Satellite, and Puppet.

Lucky for me, much of the heavy lifting had already been done by Krishna Raman and others, who developed Puppet modules for installing OpenShift Origin. There were several hurdles to overcome with the existing Puppet modules for my use case:

  1. They were built for Fedora and OpenShift Origin, while I am using RHEL 6 with OpenShift Enterprise. Because of this, they defaulted to newer rubygems that weren't available in OpenShift Enterprise yet. It took a little time to reverse engineer the Puppet modules to understand exactly what they were doing and tweak them for OpenShift Enterprise.
  2. The OpenShift Origin module leverages some other Puppet modules (stdlib, for example), so the puppet module tool (PMT) was needed, and it did not ship in core Puppet until the 2.7 series. Of course, the only version of Puppet available in EPEL for RHEL 6 was puppet-2.6. I pulled an internal build of puppet-2.7 to get around this, but it still required some packages from EPEL to resolve dependencies. A rough sketch of the module tool usage appears after this list.
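A minimal sketch of that step, assuming a Puppet 2.7 build that includes the module tool is already installed; the module name format the tool accepts varies slightly between releases:

# install stdlib and its dependencies with the puppet module tool;
# older releases may expect the puppetlabs/stdlib form instead
puppet module install puppetlabs-stdlib
# confirm the module shows up in the module path
puppet module list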

Other than that, I was able to reuse much of what already existed and deploy OpenShift Enterprise via ManageIQ ECM. How does it work? Very much like the Satellite use case, but with the added step of deploying Puppet and a puppet master onto the deployed virtual machine and executing the Puppet modules.

Workflow of OpenShift Enterprise deployment via ECM

If you are curious how the Puppet modules work, here is a diagram that illustrates the flow of the OpenShift Puppet module.

Anatomy of OpenShift Puppet Module

Here is a screencast of the self-service deployment in action.

There are a lot of areas that can be improved in the future. Here are four that were top of mind after this exercise.

First, runtime parameters should be able to be passed to the deployment of virtual machines. These parameters should ultimately be part of a service that could be composed into a deployment. One idea would be to expose Puppet classes as services that could be added to a deployment. For example, layering a service of openshift_broker onto a virtual machine would instantiate the openshift_broker class on that machine upon deployment, and the parameters required for openshift_broker would then be exposed to the user for customization.
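As a rough illustration of the idea, here is what instantiating such a class with user-supplied values might look like at deploy time; the parameter names below are hypothetical and would come from the service definition:

# hypothetical: the "openshift_broker" service selected in the catalog maps to a
# parameterized Puppet class, and the requester's values are passed straight through
puppet apply -e "class { 'openshift_broker': domain => 'example.com', broker_hostname => 'broker.example.com' }"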

Second, gears within OpenShift (the execution areas for applications) should be able to be monitored from ECM much like virtual machines are today. The oo-stats tool provides some insight into what is running in an OpenShift environment, but more granular details could be exposed in the future. Statistics such as I/O, throughput, sessions, and more would allow ECM to further manage OpenShift in enterprise deployments, in highly dynamic environments, or where elasticity of the PaaS substrate itself is a design requirement.

Third, building an upstream library of automation actions for ManageIQ ECM, so that exercises like this could be saved and reused, would be valuable. While I only focused on a simple VM deployment in this scenario, in the future I plan to use ECM's tagging and Event, Condition, Action construct to register Brokers and Nodes with a central puppet master (possibly via Foreman). The thought is that once a system is automatically tagged by ECM as a "Broker" or "Node", ECM could take an action to register it with the puppet master, which would then configure the system appropriately. All of these automation actions are exportable, but no central library currently exists to promote sharing them.

Fourth, and possibly most exciting, would be the ability to request applications from OpenShift via ECM alongside requests for virtual machines. This ability would lead to the realization of a hybrid service model. As far as I'm aware, this is not provided by any other vendor in the industry. Many analysts are coming around to the fact that the line between IaaS and PaaS will soon blur. The ability to select a PaaS-friendly application (Python, for example) and traditional infrastructure components (a relational database, for example) from a single catalog would provide a simplified user experience and create new opportunities for operations to drive even higher utilization at lower cost.

I hope you found this information useful. As always, if you have feedback, please leave a comment!

Using RHN Satellite with ManageIQ ECM

Many organizations use Red Hat Network (RHN) Satellite to manage their Red Hat Enterprise Linux systems. RHN Satellite has a long and successful history of providing update, configuration, and subscription management for RHEL in the physical and virtualized datacenter. As these organizations move to a cloud model, they require other functions in addition to systems management. Capabilities such as discovery, chargeback, compliance, monitoring, and policy enforcement are important aspects of cloud models. ManageIQ’s Enterprise Cloud Management, recently acquired by Red Hat, provides these capabilities to customers.

One of the benefits of an Open Hybrid Cloud is that organizations can leverage their existing investments and gain the benefits of cloud across them. How, then, can organizations gain the benefits of cloud computing while leveraging their existing investment in systems management? In this post, I'll examine how Red Hat Network Satellite can be utilized with ManageIQ ECM to demonstrate the evolutionary approach that an Open Hybrid Cloud provides.

Here is an overview of the workflow.

RHN Satellite and ManageIQ ECM Workflow

  1. The operations user needs to transfer the kickstart files into customization templates in ManageIQ ECM. This is literally copying and pasting the kickstart files. It's important to change the "reboot" option to "poweroff" in the kickstart file; if this isn't done, the VM will be rebooted and continually loop into an installation via PXE. Also, in the %post section of the kickstart you need to include "wget --no-check-certificate <%= evm[:callback_url_on_post_install] %>". This allows ECM to know that the system has finished building and to boot the VM after it has shut off. A minimal excerpt is shown after this list.
  2. The user requests virtual machine(s) from ECM.
  3. ECM creates an entry in the PXE environment and creates a new virtual machine from the template selected by the user.
  4. The virtual machine boots from the network and the PXE server loads the appropriate kickstart file.
  5. The virtual machine’s operating system is installed from the content in RHN Satellite.
  6. The virtual machine is registered to RHN Satellite for ongoing management.
  7. The user (or operations users) can now manage the operating system via RHN Satellite.
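Here is a minimal, illustrative excerpt of the changes described in step 1; everything else in the customization template is simply the kickstart content copied from RHN Satellite:

# use poweroff instead of reboot so the VM does not loop back into a PXE install
poweroff

%post
# tell ECM the build has finished; ECM substitutes the ERB tag with the real callback URL
wget --no-check-certificate <%= evm[:callback_url_on_post_install] %>
%end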

Here is a screencast of this workflow in action.

There are a lot of areas that can be improved upon.

  1. Utilize the RHN Satellite XMLRPC API to delete the system from RHN Satellite (a rough sketch follows this list).
  2. Allow for automatic discovery of kickstarts in RHN Satellite from ECM.
  3. Unify the hostnames deployed to RHEVM with their matching DNS entries, so they appear the same in RHN Satellite.
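For the first item, here is a rough sketch of what a call to the Satellite XMLRPC API could look like using curl; the hostname, credentials, and system ID are placeholders, and the session-key parsing is simplified and may need adjusting for your server's response formatting:

# log in and capture the session key returned by auth.login
KEY=$(curl -sk https://satellite.example.com/rpc/api -H 'Content-Type: text/xml' \
  -d '<methodCall><methodName>auth.login</methodName><params><param><value><string>admin</string></value></param><param><value><string>password</string></value></param></params></methodCall>' \
  | sed -n 's:.*<string>\(.*\)</string>.*:\1:p')

# system.deleteSystems takes the session key and an array of system IDs
curl -sk https://satellite.example.com/rpc/api -H 'Content-Type: text/xml' \
  -d "<methodCall><methodName>system.deleteSystems</methodName><params><param><value><string>$KEY</string></value></param><param><value><array><data><value><int>1000010001</int></value></data></array></value></param></params></methodCall>"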

Automating OpenStack deployments with Packstack

If you'd like a method to consistently deploy OpenStack in an automated fashion, I'd recommend checking out Packstack, an OpenStack installation automation tool that utilizes Puppet.

[root@rhc-05 ~]# packstack
Welcome to Installer setup utility
Should Packstack install Glance image service [y|n]  [y] : 
Should Packstack install Cinder volume service [y|n]  [y] : 
Should Packstack install Nova compute service [y|n]  [y] : 
Should Packstack install Horizon dashboard [y|n]  [y] : 
Should Packstack install Swift object storage [y|n]  [n] : y
Should Packstack install Openstack client tools [y|n]  [y] : 
Enter list of NTP server(s). Leave plain if packstack should not install ntpd on instances.: ns1.bos.redhat.com
Enter the path to your ssh Public key to install on servers  [/root/.ssh/id_rsa.pub] : 
Enter the IP address of the MySQL server  [10.16.46.104] : 
Enter the password for the MySQL admin user :
Enter the IP address of the QPID service  [10.16.46.104] : 
Enter the IP address of the Keystone server  [10.16.46.104] : 
Enter the IP address of the Glance server  [10.16.46.104] : 
Enter the IP address of the Cinder server  [10.16.46.104] : 
Enter the IP address of the Nova API service  [10.16.46.104] : 
Enter the IP address of the Nova Cert service  [10.16.46.104] : 
Enter the IP address of the Nova VNC proxy  [10.16.46.104] : 
Enter a comma separated list of IP addresses on which to install the Nova Compute services  [10.16.46.104] : 10.16.46.104, 10.16.46.106
Enter the Private interface for Flat DHCP on the Nova compute servers  [eth1] : 
Enter the IP address of the Nova Network service  [10.16.46.104] : 
Enter the Public interface on the Nova network server  [eth0] : 
Enter the Private interface for Flat DHCP on the Nova network server  [eth1] : 
Enter the IP Range for Flat DHCP  [192.168.32.0/22] : 
Enter the IP Range for Floating IP's  [10.3.4.0/22] : 
Enter the IP address of the Nova Scheduler service  [10.16.46.104] : 
Enter the IP address of the client server  [10.16.46.104] : 
Enter the IP address of the Horizon server  [10.16.46.104] : 
Enter the IP address of the Swift proxy service  [10.16.46.104] : 
Enter the Swift Storage servers e.g. host/dev,host/dev  [10.16.46.104] : 
Enter the number of swift storage zones, MUST be no bigger than the number of storage devices configured  [1] : 
Enter the number of swift storage replicas, MUST be no bigger than the number of storage zones configured  [1] : 
Enter FileSystem type for storage nodes [xfs|ext4]  [ext4] : 
Should packstack install EPEL on each server [y|n]  [n] : 
Enter a comma separated list of URLs to any additional yum repositories to install:    
To subscribe each server to Red Hat enter a username here: james.labocki
To subscribe each server to Red Hat enter your password here :

Installer will be installed using the following configuration:
==============================================================
os-glance-install:             y
os-cinder-install:             y
os-nova-install:               y
os-horizon-install:            y
os-swift-install:              y
os-client-install:             y
ntp-severs:                    ns1.bos.redhat.com
ssh-public-key:                /root/.ssh/id_rsa.pub
mysql-host:                    10.16.46.104
mysql-pw:                      ********
qpid-host:                     10.16.46.104
keystone-host:                 10.16.46.104
glance-host:                   10.16.46.104
cinder-host:                   10.16.46.104
novaapi-host:                  10.16.46.104
novacert-host:                 10.16.46.104
novavncproxy-hosts:            10.16.46.104
novacompute-hosts:             10.16.46.104, 10.16.46.106
novacompute-privif:            eth1
novanetwork-host:              10.16.46.104
novanetwork-pubif:             eth0
novanetwork-privif:            eth1
novanetwork-fixed-range:       192.168.32.0/22
novanetwork-floating-range:    10.3.4.0/22
novasched-host:                10.16.46.104
osclient-host:                 10.16.46.104
os-horizon-host:               10.16.46.104
os-swift-proxy:                10.16.46.104
os-swift-storage:              10.16.46.104
os-swift-storage-zones:        1
os-swift-storage-replicas:     1
os-swift-storage-fstype:       ext4
use-epel:                      n
additional-repo:               
rh-username:                   james.labocki
rh-password:                   ********
Proceed with the configuration listed above? (yes|no): yes

Installing:
Clean Up...                                              [ DONE ]
OS support check...                                      [ DONE ]
Running Pre install scripts...                           [ DONE ]
Installing time synchronization via NTP...               [ DONE ]
Setting Up ssh keys...                                   [ DONE ]
Create MySQL Manifest...                                 [ DONE ]
Creating QPID Manifest...                                [ DONE ]
Creating Keystone Manifest...                            [ DONE ]
Adding Glance Keystone Manifest entries...               [ DONE ]
Creating Galnce Manifest...                              [ DONE ]
Adding Cinder Keystone Manifest entries...               [ DONE ]
Checking if the Cinder server has a cinder-volumes vg... [ DONE ]
Creating Cinder Manifest...                              [ DONE ]
Adding Nova API Manifest entries...                      [ DONE ]
Adding Nova Keystone Manifest entries...                 [ DONE ]
Adding Nova Cert Manifest entries...                     [ DONE ]
Adding Nova Compute Manifest entries...                  [ DONE ]
Adding Nova Network Manifest entries...                  [ DONE ]
Adding Nova Scheduler Manifest entries...                [ DONE ]
Adding Nova VNC Proxy Manifest entries...                [ DONE ]
Adding Nova Common Manifest entries...                   [ DONE ]
Creating OS Client Manifest...                           [ DONE ]
Creating OS Horizon Manifest...                          [ DONE ]
Preparing Servers...                                     [ DONE ]
Running Post install scripts...                          [ DONE ]
Installing Puppet...                                     [ DONE ]
Copying Puppet modules/manifests...                      [ DONE ]
Applying Puppet manifests...
Applying /var/tmp/packstack/20130205-0955/manifests/10.16.46.104_prescript.pp
Testing if puppet apply is finished : 10.16.46.104_prescript.pp OK
Testing if puppet apply is finished : 10.16.46.104_mysql.pp OK
Testing if puppet apply is finished : 10.16.46.104_qpid.pp OK
Applying /var/tmp/packstack/20130205-0955/manifests/10.16.46.104_keystone.pp
Applying /var/tmp/packstack/20130205-0955/manifests/10.16.46.104_glance.pp
Applying /var/tmp/packstack/20130205-0955/manifests/10.16.46.104_cinder.pp
Testing if puppet apply is finished : 10.16.46.104_keystone.pp OK
Testing if puppet apply is finished : 10.16.46.104_cinder.pp OK
Testing if puppet apply is finished : 10.16.46.104_glance.pp OK
Applying /var/tmp/packstack/20130205-0955/manifests/10.16.46.104_api_nova.pp
Testing if puppet apply is finished : 10.16.46.104_api_nova.pp OK
Applying /var/tmp/packstack/20130205-0955/manifests/10.16.46.104_nova.pp
Applying /var/tmp/packstack/20130205-0955/manifests/10.16.46.104_osclient.pp
Applying /var/tmp/packstack/20130205-0955/manifests/10.16.46.104_horizon.pp
Testing if puppet apply is finished : 10.16.46.104_nova.pp OK
Testing if puppet apply is finished : 10.16.46.104_osclient.pp OK
Testing if puppet apply is finished : 10.16.46.104_horizon.pp OK
Applying /var/tmp/packstack/20130205-0955/manifests/10.16.46.104_postscript.pp
Testing if puppet apply is finished : 10.16.46.104_postscript.pp
Testing if puppet apply is finished : 10.16.46.104_postscript.pp OK
                            [ DONE ]

 **** Installation completed successfully ******

     (Please allow Installer a few moments to start up.....)

Additional information:
 * Time synchronization installation was skipped. Please note that unsynchronized time on server instances might be problem for some OpenStack components.
 * To use the command line tools source the file /root/keystonerc_admin created on 10.16.46.104
 * To use the console, browse to http://10.16.46.104/dashboard
 * The installation log file is available at: /var/tmp/packstack/20130205-0955/openstack-setup.log

You can also generate an answer file and use it on other systems:

[root@rhc-06 ~]# packstack --gen-answer-file=answerfile
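The generated answer file can then be edited and passed back to packstack for a non-interactive installation, for example:

[root@rhc-06 ~]# packstack --answer-file=answerfile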

Be careful: if your system is subscribed via Red Hat Network's classic entitlement, Packstack will register the system via subscription-manager (certificate-based entitlement). This can cause issues if you have already subscribed the system and added the OpenStack channels via RHN.
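If you are unsure how a system is currently registered, a quick check like this before running packstack can save some troubleshooting (RHN Classic leaves a systemid file behind, while subscription-manager reports certificate-based registrations):

# present only if the system was registered via RHN Classic
ls /etc/sysconfig/rhn/systemid
# shows the certificate-based identity, or an error if the system is not registered
subscription-manager identity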

Using a Remote ImageFactory with CloudForms

In Part 2 of Hands on with ManageIQ EVM I explored how ManageIQ and CloudForms could potentially be integrated in the future. One of my suggestions was to allow imagefactory to run within the cloud resource provider (vSphere, RHEV, OpenStack, etc.). This would simplify the architecture and reduce the physical infrastructure required to host Cloud Engine. Requiring less infrastructure is important for a number of scenarios beyond the workflow I explained in the earlier post. One scenario in particular is providing demonstration environments of CloudForms to a large group of people, for example while training students on CloudForms.

Removing the physical hardware requirement for CloudForms Cloud Engine can be done in two ways. The first is by using nested virtualization. This is not yet available in Red Hat Enterprise Linux, but it is available in the upstream, Fedora. The second is by running imagefactory remotely on a physical system and the rest of the components of CloudForms Cloud Engine within a virtual machine. In this post I'll explore utilizing a physical system to host imagefactory and the modifications necessary to a CloudForms Cloud Engine environment to make it happen.

How It Works

The diagram below illustrates the decoupling of imagefactory from conductor. Keep in mind, this is using CloudForms 1.1 on Red Hat Enterprise Linux 6.3.

Using a remote imagefactory with CloudForms

1. The student executes a build action in their Cloud Engine. Each student has his/her own Cloud Engine and it is built on a virtual machine.

2. Conductor communicates with imagefactory on the physical cloud engine and instructs it to build the image. There is a single physical host acting as a shared imagefactory for every virtual machine hosting Cloud Engine for the students.

3. Imagefactory builds the image based on the content from virtual machines hosting CloudForms Cloud Engine.

4. Imagefactory stores the built images in the image warehouse (IWHD).

5. When the student wants to push that image to the provider, in this case RHEV, they execute the push action in Cloud Engine conductor.

6. Conductor communicates with imagefactory on the physical cloud engine and instructs it to push the image to the RHEV provider.

7. Imagefactory pulls the image from the warehouse (IWHD) and

8. pushes it to the provider.

9.  The student launches an application blueprint which contains the image.

10. Conductor communicates with deltacloud (dcloud) requesting that it launch the image on the provider.

11. Deltacloud (dcloud) communicates with the provider requesting that a virtual machine be created based on the template.

Configuration

Here are the steps you can follow to enable a single virtual machine hosting Cloud Engine to build images using a physical system's imagefactory. These steps can be repeated and automated to stand up a large number of virtual Cloud Engines that share a single imagefactory on a physical host. I don't see any reason why you couldn't use the RHEL host that acts as a hypervisor for RHEV or the RHEL host that serves the export storage domain; in fact, that might improve performance. Anyway, here are the details.

1. Install CloudForms Cloud Engine on both the virtual-cloud-engine and physical-cloud-engine host.

2. Configure Cloud Engine on both the virtual-cloud-engine and the physical-cloud-engine.

virtual-cloud-engine# aeolus-configure
physical-cloud-engine# aeolus-configure

3. On the virtual-cloud-engine configure RHEV as a provider.

virtual-cloud-engine# aeolus-configure -p rhevm

4. Copy the oauth information from the physical-cloud-engine to the virtual-cloud-engine.

virtual-cloud-engine# scp root@physical-cloud-engine:/etc/aeolus-conductor/oauth.json /etc/aeolus-conductor/oauth.json

5. Copy the settings for conductor from the physical-cloud-engine to the virtual-cloud-engine.

virtual-cloud-engine# scp root@physical-cloud-engine:/etc/aeolus-conductor/settings.yml /etc/aeolus-conductor/settings.yml

6.  Replace localhost with the IP address of physical-cloud-engine in the iwhd and imagefactory stanzas of /etc/aeolus-conductor/settings.yml on the virtual-cloud-engine.
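One quick way to do this is shown below; the IP address is a placeholder, and if other stanzas in settings.yml also reference localhost you should edit the iwhd and imagefactory entries by hand rather than replacing globally:

virtual-cloud-engine# grep -n localhost /etc/aeolus-conductor/settings.yml
virtual-cloud-engine# sed -i 's/localhost/192.168.1.50/g' /etc/aeolus-conductor/settings.yml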

7. Copy the rhevm.json file from the virtual-cloud-engine to the physical-cloud-engine.

physical-cloud-engine# scp root@virtual-cloud-engine:/etc/imagefactory/rhevm.json /etc/imagefactory/rhevm.json

8. Manually mount the RHEVM export domain listed in the rhevm.json file on the physical-cloud-engine.

physical-cloud-engine# mount nfs.server:/rhev/export /mnt/rhevm-nfs

9. After this is done, restart all the aeolus-services on both physical-cloud-engine and virtual-cloud-engine to make sure they are using the right configurations.

physical-cloud-engine# aeolus-restart-services
virtual-cloud-engine# aeolus-restart-services

Once this is complete, you should be able to build images on the remote imagefactory instance.

Multiple Cloud Engines sharing a single imagefactory

It should be noted that running a single imagefactory to support multiple Cloud Engines is not officially supported, and is probably not tested. In my experience, however, it seems to work. I hope to have time to post more details in the future on the performance of a single imagefactory shared between multiple Cloud Engines performing concurrent build and push operations.

Elasticity in the Open Hybrid Cloud

Several months ago in my post on Open Hybrid PaaS I mentioned that OpenShift, Red Hat's PaaS, can autoscale gears to provide elasticity to applications. OpenShift scales gears on something it calls a node, which is essentially a virtual machine with OpenShift installed on it. One thing OpenShift doesn't focus on is scaling the underlying nodes. This is understandable, because a PaaS doesn't necessarily understand the underlying infrastructure, nor does it necessarily want to.

It's important that nodes can be autoscaled in a PaaS. I'd take this one step further and submit that it's important that operating systems can be autoscaled at the IaaS layer. This is partly because many PaaS solutions will be built atop an operating system. Even more importantly, Red Hat is all about enabling an Open Hybrid Cloud, and one of the benefits an Open Hybrid Cloud aims to deliver is cloud efficiency across an organization's entire datacenter, not just a part of it. If you need to statically deploy operating systems, you fail to achieve the efficiency of cloud across all of your resources. You also can't re-purpose or shift physical resources if you can't autoscale operating systems.

Requirements for a Project

The background above presents the basis for some requirements for an operating system auto-scaling project.

  1. It needs to support deploying across multiple virtualization technologies, whether a virtualization provider, an IaaS private cloud, or a public cloud.
  2. It needs to support deploying to physical hardware.
  3. It cannot be tied to any single vendor, PaaS, or application.
  4. It needs to be able to configure the operating systems deployed upon launch for handing over to an application.
  5. It should be licensed to promote reuse and contribution.

Workflow

Here is an idea for a project that could solve such a problem, which I call “The Governor”.

Example Workflow

To explain the workflow:

  1. The application realizes it needs more resources. Monitoring the application to determine whether it needs more resources is not within the scope of The Governor. This is by design, as there are countless applications and each has different requirements for scalability and elasticity. For this reason, The Governor lets the application determine when to request more resources. When the application makes this determination, it makes a call to The Governor's API.
  2. The call to the API involves the use of a certificate for authentication. This ensures that only applications that have been registered in the registry can interact with The Governor to request resources. If the certificate-based authentication succeeds (the application is registered in The Governor), the workflow proceeds. If not, the application's request is rejected. A hypothetical example of such a request appears after this list.
  3. Upon receiving an authenticated request for more resources, the certificate (which is unique) is run through the rules engine to determine the rules the application must abide by when scaling. This includes decision points such as which providers the application can scale on, how many instances it can consume, and so on. If scaling is not permitted by the rules (the maximum number of instances has been reached, for example), a response is sent back to the application informing it that the request has been declined.
  4. Once the rules engine determines the appropriate action it calls the orchestrator which initiates the action.
  5. The orchestrator calls either the cloud broker, which can launch instances on a variety of virtualization managers and cloud providers, private or public, or a metal-as-a-service (MaaS) tool, which can provision an operating system on bare metal.
  6. and 7. The cloud broker or MaaS launches or provisions the appropriate operating system and configures it per the application's requirements.
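To make steps 1 and 2 concrete, here is a purely hypothetical example of what an application's scale-up request to The Governor might look like, authenticating with its client certificate; the URL, file paths, and JSON shape are all illustrative, since the project is only an idea at this point:

# hypothetical request from the application to The Governor's API using its client certificate
curl --cert /etc/governor/client.crt --key /etc/governor/client.key \
     --cacert /etc/governor/ca.crt \
     -H 'Content-Type: application/json' \
     -d '{"action": "scale_up", "count": 1}' \
     https://governor.example.com/api/v1/requests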

Future Details

There are more details which need to be further developed:

  • How certificates are generated and applications are registered.
  • How application registration details, such as the images that need to be launched and the configurations that need to be applied to them, are expressed.
  • How the configured instance is handed back to the application for use by the application.

Where to grow a community?

It matters where this project will ultimately live and grow. A project such as this one would need broad, vibrant community support in order to gain the adoption needed to become a standard means of describing elasticity at the infrastructure layer. For this reason, its home should be a community with a large number of active participants and friendly licensing that promotes contribution.


White Paper: Red Hat CloudForms – Delivering Managed Flexibility For The Cloud

Businesses continually seek to increase flexibility and agility in order to gain competitive advantage and reduce operating cost. The rise of public cloud providers offers one method by which businesses can achieve lower operating costs while gaining competitive advantage. Public clouds provide these advantages by allowing for self-service, on-demand access of compute resources. While the use of public clouds has increased flexibility and agility while reducing costs, it has also presented new challenges in the areas of portability, governance, security, and cost.

These challenges are a result of business users lacking the incentive to adequately govern, secure, and budget for applications deployed in the cloud. As a result, IT organizations have looked to replicate public cloud models in order to convince business users to utilize internal private clouds. An internal cloud avoids the challenges of portability, governance, security, and cost that are associated with public clouds. While the increased cost and inflexibility of private clouds may be justified, the challenges public clouds face make it clear that a solution that allows businesses to seamlessly move applications between cloud providers (both public and private) is critical. A hybrid cloud built with heterogeneous technologies allows business users to benefit from flexibility and agility, while IT maintains governance and control.

With Red Hat CloudForms, businesses no longer have to choose between providing flexibility and agility to end users through the use of cloud computing or maintaining governance and control of their IT assets. Red Hat CloudForms is an open hybrid cloud management platform that delivers the flexibility and agility businesses want with the control and governance that IT requires. Organizations can build a hybrid cloud that encompasses all of their infrastructure using CloudForms and manage cloud applications without vendor lock-in.

CloudForms implements a layer of abstraction on top of cloud resources: private cloud providers, public cloud providers, and virtualization providers. That abstraction is expressed as the ability to partition and organize cloud resources as seemingly independent clouds to which users can deploy and manage AppForms (CloudForms cloud applications). CloudForms achieves these benefits by allowing users to:

• build clouds for controlled agility
• utilize a cloud-centric deployment and management model
• enable policy-based self-service for end users

This document explains how CloudForms meets the challenges that come from letting users serve themselves, while maintaining control of where workloads are executed and ensuring the life cycle is properly managed. IT organizations are able to help their customers better utilize the cloud or virtualization provider that best meets the customer’s needs while solving the challenges of portability, governance, security, and cost.

Read the White Paper
