Category Archives: IaaS

OPENSTACK DEPLOYMENT AND AUTOMATION

Here is the presentation that Thomas, Chris, and I gave today at the OpenStack Meetup in Sunnyvale. Thanks to the folks at Walmart Labs for hosting us!


A Technical Overview of Red Hat Cloud Infrastructure (RHCI)

I’m often asked for a more in-depth overview of Red Hat Cloud Infrastructure (RHCI), Red Hat’s fully open source and integrated Infrastructure-as-a-Service offering. To that end I decided to write a brief technical introduction to RHCI to help those interested better understand what a typical deployment looks like, how the components interact, what Red Hat has been working on to integrate the offering, and some common use cases that RHCI solves. RHCI gives organizations access to infrastructure and management to fit their needs, whether it’s managed datacenter virtualization, a scale-up virtualization-based cloud, or a scale-out OpenStack-based cloud. Organizations can choose what they need to run and re-allocate their resources accordingly.

001_overview

RHCI users can deploy either Red Hat Enterprise Virtualization (RHEV) or Red Hat Enterprise Linux OpenStack Platform (RHEL-OSP) on physical systems, creating either a datacenter virtualization-based private cloud with RHEV or a private Infrastructure-as-a-Service cloud with RHEL-OSP.

RHEV comprises a hypervisor component, referred to as RHEV-H, and a manager, referred to as RHEV-M. Hypervisors leverage shared storage and common networks to provide common enterprise virtualization features such as high availability, live migration, etc.

RHEL-OSP is Red Hat’s OpenStack distribution, delivering massively scalable infrastructure through the following projects (descriptions taken directly from the projects themselves) on one of the largest ecosystems of certified hardware and software vendors for OpenStack:

Nova: Implements services and associated libraries to provide massively scalable, on demand, self service access to compute resources, including bare metal, virtual machines, and containers.

Swift: Provides Object Storage.

Glance: Provides a service where users can upload and discover data assets that are meant to be used with other services, like images for Nova and templates for Heat.

Keystone: Facilitates API client authentication, service discovery, distributed multi-tenant authorization, and auditing.

Horizon: Provides an extensible, unified web-based user interface for all integrated OpenStack services.

Neutron: Implements services and associated libraries to provide on-demand, scalable, and technology-agnostic network abstraction.

Cinder: Implements services and libraries to provide on-demand, self-service access to Block Storage resources via abstraction and automation on top of other block storage devices.

Ceilometer: Reliably collects measurements of the utilization of the physical and virtual resources comprising deployed clouds, persists this data for subsequent retrieval and analysis, and triggers actions when defined criteria are met.

Heat: Orchestrates composite cloud applications using a declarative template format through an OpenStack-native ReST API.

Trove: Provides scalable and reliable Cloud Database-as-a-Service functionality for both relational and non-relational database engines.

Ironic: Provides an OpenStack service and associated Python libraries for managing and provisioning physical machines in a security-aware and fault-tolerant manner.

Sahara: Provides a scalable data processing stack and associated management interfaces.
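To make Heat's declarative model concrete, a minimal template in the HOT format might look like the following (a sketch; the resource name, image, and flavor values are illustrative assumptions):

```yaml
heat_template_version: 2013-05-23

description: Minimal example that boots a single Nova server

resources:
  web_server:
    type: OS::Nova::Server
    properties:
      image: cirros        # image name assumed to exist in Glance
      flavor: m1.tiny      # any available Nova flavor
```

Heat resolves the declared resources into API calls against the other OpenStack services, which is what allows composite applications to be described rather than scripted.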

Red Hat CloudForms, a Cloud Management Platform based on the upstream ManageIQ project, provides hybrid cloud management of OpenStack, RHEV, Microsoft Hyper-V, VMware vSphere, and Amazon Web Services. This includes the ability to provide rich self-service with workflow and approval, discovery of systems, policy definition, capacity and utilization forecasting, and chargeback, among other capabilities. CloudForms is deployed as a virtual appliance and requires no agents on the systems it manages. CloudForms has a region and zone concept that allows for complex and federated deployments across large environments and geographies.

Red Hat Satellite is a systems management solution for managing the lifecycle of RHEV, RHEL-OSP, and CloudForms as well as any tenant workloads that are running on RHEV or RHEL-OSP. It can be deployed on bare metal or, as pictured in this diagram, as a virtual machine running on either RHEV or RHEL-OSP. Satellite supports a federated model through a concept called capsules.
002_cloudmanagement

CloudForms is a Cloud Management Platform that is deployed as a virtual appliance and supports a federated deployment. It is fully open source just as every component in RHCI is and is based on the ManageIQ project.

One of the key technical benefits CloudForms provides is unified management of multiple providers. CloudForms splits providers into two types. First, there are infrastructure providers such as RHEV, vSphere, and Microsoft Hyper-V. CloudForms discovers and provides uniform information about these systems’ hosts, clusters, virtual machines, and virtual machine contents in a single interface. Second, there are cloud providers such as RHEL-OSP and Amazon Web Services. CloudForms provides discovery and uniform information for these providers about virtual machines, images, and flavors, similar to the infrastructure providers. All of this is done by leveraging the standard APIs provided by RHEV-M, SCVMM, vCenter, AWS, and OpenStack.
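As a conceptual sketch (not the actual CloudForms data model; the provider names and fields here are illustrative assumptions), the split between the two provider types might be expressed like this:

```python
# Conceptual sketch of the infrastructure-vs-cloud provider split described
# above. Provider names and inventory fields are illustrative assumptions.

INFRA_PROVIDERS = {"RHEV", "vSphere", "Hyper-V"}   # expose hosts and clusters
CLOUD_PROVIDERS = {"RHEL-OSP", "AWS"}              # expose images and flavors

def provider_kind(name):
    """Classify a provider the way the text describes: infrastructure vs cloud."""
    if name in INFRA_PROVIDERS:
        return "infrastructure"
    if name in CLOUD_PROVIDERS:
        return "cloud"
    raise ValueError("unknown provider: %s" % name)

def inventory_fields(name):
    """Return the uniform inventory a management layer would collect."""
    common = ["virtual_machines"]
    if provider_kind(name) == "infrastructure":
        return common + ["hosts", "clusters"]
    return common + ["images", "flavors"]
```

The point of the sketch is that a single interface can normalize both kinds of providers while still tracking the inventory objects unique to each.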

003_lifecyclemanagement

Red Hat Satellite provides common systems management among all aspects of RHCI.

Red Hat Satellite provides content management, allowing users to synchronize content such as RPM packages for RHEV, RHEL-OSP, and CloudForms from Red Hat’s Content Delivery Network to an on-premises Satellite, reducing bandwidth consumption and providing an on-premises control point for content management across complex environments. Satellite also allows for configuration management via Puppet to ensure compliance and enforcement of proper configuration. Finally, Red Hat Satellite allows users to account for usage of assets through entitlement reporting and controls. Satellite provides these capabilities to RHEV, RHEL-OSP, and CloudForms, allowing administrators of RHCI to maintain their environment more effectively and efficiently. Equally important, Satellite also extends to the tenants of RHEV and RHEL-OSP to allow for systems management of Red Hat Enterprise Linux (RHEL)-based tenants. Satellite is based on the upstream projects Foreman, Katello, Pulp, and Candlepin.

004_lifecyclemanagementtenant

The combination of CloudForms and Satellite is very powerful for automating not only the infrastructure, but within the operating system as well. Let’s look at an example of how CloudForms can be utilized with Satellite to provide automation of deployment and lifecycle management for tenants.

The automation engine in CloudForms is invoked when a user orders a catalog item from the CloudForms self-service catalog. CloudForms communicates with the appropriate infrastructure provider (in this case RHEV or RHEL-OSP, pictured) to ensure that the infrastructure resources are created. At the same time it ensures the appropriate records are created in Satellite so that the proper content and configuration will be applied to the system. Once the infrastructure resources (such as a virtual machine) are created, they are connected to Satellite, where they receive the appropriate content and configuration. Once this is completed, the service in CloudForms is updated to reflect the state of the user’s request, giving them access to a fully compliant system with no manual interaction during configuration. Ongoing updates of the virtual machine resources can be performed by the end user or the administrator of the Satellite, depending on customer needs.
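The flow above can be sketched in miniature. This is a hedged illustration: the function names and field values are invented for the example and do not correspond to CloudForms or Satellite APIs:

```python
# Simplified sketch of the catalog-ordering flow described above. Each
# function stands in for an API call the automation engine would make;
# all names and values are illustrative.

def provision_vm(provider, name):
    # 1. Ask the infrastructure provider (RHEV or RHEL-OSP) for resources.
    return {"provider": provider, "name": name, "state": "created"}

def register_with_satellite(vm, content_view, puppet_classes):
    # 2. Create host records so the right content and configuration apply.
    vm["content_view"] = content_view
    vm["puppet_classes"] = puppet_classes
    return vm

def order_catalog_item(provider, name):
    # 3. End-to-end: provision, register, configure, then update the service.
    vm = provision_vm(provider, name)
    vm = register_with_satellite(vm, "rhel-6-prod", ["ntp", "ssh"])
    vm["state"] = "configured"
    return {"service_state": "finished", "vm": vm}
```

The useful property is that the Satellite registration happens alongside provisioning, so the machine comes up already pointed at its content and configuration source.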

005_servicelifecyclemanagement

This is another way of looking at how the functional areas of the workflow are divided in RHCI. Items such as the service catalog, quota enforcement, approvals, and workflow are handled in CloudForms, the cloud management platform. Still, infrastructure-specific mechanisms such as Heat templates, virtual machine templates, PXE, or even ISO-based deployment are utilized by the cloud management platform whenever possible. Finally, systems management provides further customization within the operating system itself that is not covered by infrastructure-specific provisioning systems. With this approach, users can separate operating system configuration from the infrastructure platform, increasing portability. Likewise, operational decisions are decoupled from the infrastructure platform and placed in the cloud management platform, allowing for greater flexibility and increased modularity.

006_sharedidentity

Common management is a big benefit that RHCI brings to organizations, but it doesn’t stop there. RHCI is bringing together the benefits of shared services to reduce the complexity for organizations. Identity is one of the services that can be made common across RHCI through the use of Identity Management (IDM) that is included in RHEL. All components of RHCI can be configured to talk to IDM which in turn can be used to authenticate and authorize users. Alternatively, and perhaps more frequently, a trust is established between IDM and Active Directory to allow for authentication via Active Directory. By providing a common identity store between the components of RHCI, administrators can ensure compliance through the use of access controls and audit.

007_sharednetwork

Similar to the benefits of shared identity, RHCI is bringing together a common network fabric for both traditional datacenter virtualization and Infrastructure-as-a-Service (IaaS) models. As part of the latest release of RHEV, users can now discover Neutron networks and begin exposing them to guest virtual machines (in tech preview). By building a common network fabric, organizations can simplify their architecture; they no longer need to learn two different methods for creating and maintaining virtual networks.

008_sharedstorage

Finally, image storage can now be shared between RHEV and RHEL-OSP. This means that templates and images stored in Glance can be used by RHEV. This reduces the amount of storage required to maintain the images and allows administrators to update images in one store instead of two, increasing operational efficiency.

009_capabilities

One often misunderstood area is which capabilities are provided by which components of RHCI. RHEV and OpenStack provide similar capabilities with different paradigms, focused on compute, network, and storage virtualization. Many of the capabilities often associated with a private cloud are features found in the combination of Satellite and CloudForms. These include capabilities provided by CloudForms such as discovery, chargeback, monitoring, analytics, quota enforcement, capacity planning, and governance. They also include capabilities for managing inside the guest operating system in areas such as content management, software distribution, configuration management, and governance.

010_deploymentscenarios

Often organizations are not certain about the best way to view OpenStack in relation to their datacenter virtualization solution. There are two common approaches that are considered. Within one approach, datacenter virtualization is placed underneath OpenStack. This approach has several negative aspects. First, it places OpenStack, which is intended for scale out, over an architecture that is designed for scale up in RHEV, vSphere, Hyper-V, etc. This gives organizations limited scalability and, in general, an expensive infrastructure for running a scale out IaaS private cloud. Second, layering OpenStack, a Cloud Infrastructure Platform, on top of yet another infrastructure management solution makes hybrid cloud management very difficult because Cloud Management Platforms, such as CloudForms, are not designed to relate OpenStack to a virtualization manager and then to underlying hypervisors. Conversely, by using a Cloud Management Platform as the aggregator between infrastructure platforms of OpenStack, RHEV, vSphere, and others, it is possible to achieve a working approach to hybrid cloud management and use OpenStack in the massively scalable way it is designed to be used.

011_vmware_rhev

RHCI is meant to complement existing investments in datacenter virtualization. For example, users often utilize CloudForms and Satellite to gain efficiencies within their vSphere environment while simultaneously increasing the cloud-like capabilities of their virtualization footprints through self-service and automation. Once users are comfortable with the self-service aspects of CloudForms, it is simple to supplement vSphere with lower cost or specialized virtualization providers like RHEV or Hyper-V.

This can be done by leveraging the virt-v2v tools (shown as option 1 in the diagram above) that perform binary conversion of images in an automated fashion from vSphere to other platforms. Another approach is to standardize environment builds within Satellite (shown as option 2 in the diagram above) to allow for portability during creation of a new workload. Both of these methods are supported based on an organization’s specific requirements.

012_vmware_openstack

For scale-out applications running on an existing datacenter virtualization solution such as VMware vSphere, RHCI can provide organizations with the tools to identify (discover) and move (automated v2v conversion) workloads to Red Hat Enterprise Linux OpenStack Platform, where they can take advantage of massive scalability and reduced infrastructure costs. This again can be done through binary conversion (option 1) using CloudForms or through standardization of environments (option 2) using Red Hat Satellite.

013_management_integrations

So far I have focused primarily on the integrations between the components of Red Hat Cloud Infrastructure to illustrate how Red Hat is bringing together a comprehensive Infrastructure-as-a-Service solution, but RHCI also integrates with many existing technologies within the management domain. With integrations spanning configuration management solutions such as Puppet, Chef, and Ansible, many popular Configuration Management Databases (CMDBs), as well as networking providers and IPAM systems, CloudForms and Satellite are extremely extensible and can fit into existing environments.

014_infra_integrations

And of course, Red Hat Enterprise Linux forms the basis of both Red Hat Enterprise Virtualization and Red Hat Enterprise Linux OpenStack Platform, leading to one of the largest ecosystems of certified compute, network, and storage partners in the industry.

RHCI is a complete and fully open source Infrastructure-as-a-Service private cloud. It has industry-leading integration between a datacenter virtualization-based and an OpenStack-based private cloud in the areas of networking, storage, and identity. A common management framework makes for efficient operations and unparalleled automation that can also span other providers. Finally, because its operating system, systems management, and cloud management platform are all based on upstream communities, it has a large ecosystem of hardware and software partners for both infrastructure and management.

I hope this post helped you gain a better understanding of RHCI at a more technical level. Feel free to comment, and be sure to follow me on Twitter at @jameslabocki.


Deploying OpenShift with CloudForms Presentation

Slides from my talk on Deploying OpenShift with CloudForms can be downloaded here.

OpenStack Packstack Installation with External Connectivity

Packstack makes installing OpenStack REALLY easy. By using the --allinone option you can have a working, self-contained RDO installation in minutes (and most of those minutes are spent waiting for packages to install). However, the --allinone option really should be renamed --onlywithinone today, because while it makes the installation very simple, it doesn’t allow instances spun up in the resulting OpenStack environment to be reached from external systems. This can be a problem if you are trying to both bring up an OpenStack environment quickly and demonstrate integration with systems outside of OpenStack. With a lot of help and education from Perry Myers and Terry Wilson on Red Hat’s RDO team I was able to make a few modifications to the packstack installation that allow a user to run packstack with --allinone and have external access to the instances launched on the host. While I’m not sure this is the best practice for setup, here is how it works.

I started with a @base kickstart installation of Red Hat Enterprise Linux 6.4. First, I registered the system via subscription-manager and attached it to the RHEL server repository. I also installed the latest RDO repository file for Grizzly, then updated the system and installed openvswitch. The update will install a new kernel.

# subscription-manager register
...
# subscription-manager list --available |egrep -i 'pool|name'
...
# subscription-manager attach --pool=YOURPOOLIDHERE
...
# rpm -ivh http://rdo.fedorapeople.org/openstack/openstack-grizzly/rdo-release-grizzly.rpm
...
# yum -y install openvswitch
...
# yum -y update

Before rebooting, I set up a bridge named br-ex by placing the following in /etc/sysconfig/network-scripts/ifcfg-br-ex.

DEVICE=br-ex
OVSBOOTPROTO=dhcp
OVSDHCPINTERFACES=eth0
NM_CONTROLLED=no
ONBOOT=yes
TYPE=OVSBridge
DEVICETYPE=ovs

I also changed the setup of the eth0 interface by placing the following in /etc/sysconfig/network-scripts/ifcfg-eth0. This configuration makes eth0 a port on the bridge we previously set up.

DEVICE="eth0"
HWADDR="..."
TYPE=OVSPort
DEVICETYPE=ovs
OVS_BRIDGE=br-ex
UUID="..."
ONBOOT=yes
NM_CONTROLLED=no

At this point I rebooted the system so the updated kernel could be used. When it comes back up you should have a bridged interface named br-ex which has the IP address that was associated with eth0. I had a static DHCP lease for eth0 prior to starting, so even though the interface was set to use DHCP as its bootproto, it receives the same address consistently.

Now you need to install packstack.

# yum -y install openstack-packstack

Packstack’s installation accepts an argument named --quantum-l3-ext-bridge:

--quantum-l3-ext-bridge=QUANTUM_L3_EXT_BRIDGE
                        The name of the bridge that the Quantum L3 agent will
                        use for external traffic, or 'provider' if using
                        provider networks

We will set this to eth0 so that the eth0 interface is used for external traffic. Remember, eth0 will be a port on br-ex in openvswitch, so it will be able to talk to the outside world through it.

Before we run the packstack installer though, we need to make another change. Packstack’s –allinone installation uses some puppet templates to provide answers to the installation options. It’s possible to override the options if there is a command line switch, but packstack doesn’t accept arguments for everything. For example, if you want to change the floating IP range to fall in line with the network range your eth0 interface supports then you’ll need to edit a puppet template by hand.

Edit /usr/lib/python2.6/site-packages/packstack/puppet/modules/openstack/manifests/provision.pp and change $floating_range to a range that is suitable for the network eth0 is on. The floating range variable appears to be used for assigning the floating IP address pool ranges by packstack when –allinone is used.
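For example, if eth0 sits on a 10.16.132.0/24 network (an illustrative value; substitute the range that matches your own network), the edited variable might look like:

```
# In provision.pp -- use a range routable on eth0's network
$floating_range = '10.16.132.0/24'
```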

One last modification before we run packstack (thanks to Terry Wilson for pointing this out): we need to remove a firewall rule that is added during the packstack run. It adds a NAT rule that effectively blocks inbound traffic to launched instances. Edit /usr/lib/python2.6/site-packages/packstack/puppet/templates/provision.pp and comment out the following lines.

firewall { '000 nat':
  chain    => 'POSTROUTING',
  jump     => 'MASQUERADE',
  source   => $::openstack::provision::floating_range,
  outiface => $::gateway_device,
  table    => 'nat',
  proto    => 'all',
}

The ability to configure these via packstack arguments should eventually make its way into packstack. See this Bugzilla for more information.

That’s it, now you can fire up packstack by running the following command.

packstack --allinone --quantum-l3-ext-bridge=eth0

When it completes it will tell you that you need to reboot for the new kernel to take effect, but you don’t need to, since we already updated and rebooted into the new kernel before running packstack.

Your openvswitch configuration should look roughly like this when packstack finishes running.

# ovs-vsctl show
08ad9137-5eae-4367-8c3e-52f8b87e5415
    Bridge br-int
        Port "tap46aaff1f-cd"
            tag: 1
            Interface "tap46aaff1f-cd"
                type: internal
        Port br-int
            Interface br-int
                type: internal
        Port "qvod54d32dc-0b"
            tag: 1
            Interface "qvod54d32dc-0b"
        Port "qr-0638766f-76"
            tag: 1
            Interface "qr-0638766f-76"
                type: internal
    Bridge br-ex
        Port br-ex
            Interface br-ex
                type: internal
        Port "qg-3f967843-48"
            Interface "qg-3f967843-48"
                type: internal
        Port "eth0"
            Interface "eth0"
    ovs_version: "1.11.0"

Before we start provisioning instances in Horizon let’s take care of one last step and add two security group rules to allow ssh and icmp to our instances.

# . ~/keystonerc_demo 
# nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
# nova secgroup-add-rule default tcp 22 22 0.0.0.0/0

Now you can log into Horizon with the demo user, whose credentials are stored in /root/keystonerc_demo, and provision an instance. Make sure you specify the private network for this instance. The private network is automatically created for the demo tenant by the packstack --allinone installation. You’ll also notice it uploaded an image named cirros into Glance for you. Of course, this assumes you’ve already created a keypair.

Screen Shot 2013-08-23 at 10.36.44 PM

Screen Shot 2013-08-23 at 10.36.55 PM

Once the instance is launched we will then associate a floating IP address with it.

Screen Shot 2013-08-23 at 10.44.37 PM

Screen Shot 2013-08-23 at 10.44.50 PM

Screen Shot 2013-08-23 at 10.45.00 PM

Now we can ssh to it from outside the OpenStack environment.

$ ssh cirros@10.16.132.4
cirros@10.16.132.4's password: 
$ uptime
 00:52:20 up 14 min,  1 users,  load average: 0.00, 0.00, 0.00

Now we can get started with the fun stuff, like provisioning images from CloudForms onto RDO and using Foreman to automatically configure them!

Building the Bridge Between Present and Future IT Architectures

Life isn’t easy for IT organizations today. They find themselves on the receiving end of demands for new capabilities that public cloud providers are delivering at increasing speed. While solutions that deliver these same capabilities within the private datacenter are beginning to appear, IT organizations don’t want to build yet another silo. Red Hat’s Open Hybrid Cloud Architecture is helping IT organizations adopt next-generation IT architectures to meet the increasing demand for public cloud capabilities while helping them establish a common framework for all their IT assets. This approach provides many benefits across all IT architectures. To name a few:

  • Discovery and Reporting: Detailed information about all workloads across all cloud and virtualization providers.
  • Self-Service: A single catalog that can provision services across hybrid and heterogeneous public and private clouds.
  • Best-Fit Placement: Helping identify which platform is best for which workload both at provision and run-time.

The engineers at Red Hat have been hard at work on the next release of CloudForms which is scheduled for General Availability later this year. I’ve been lucky enough to get my hands on a very early preview and wanted to share an update on two enhancements that are relevant to the topic of bridging present and future IT architectures. Before I dive into the enhancements let me get two pieces of background out of the way:

  1. Red Hat believes that the future IT architecture for Infrastructure as a Service (IaaS) is OpenStack. That shouldn’t come as a big surprise given that Red Hat was a major contributor to the OpenStack Grizzly release and has established a community for its distribution called RDO.
  2. There is a big difference between datacenter virtualization and clouds and knowing which workloads should run on which is important. For more information on this you can watch Andy Cathrow’s talk at Red Hat Summit.

Two of the enhancements coming in the next release of CloudForms are the clear distinction between datacenter virtualization and cloud providers and the addition of OpenStack as a supported cloud provider.

By clearly separating and understanding the differences between datacenter virtualization (or infrastructure providers, as they are called in the user interface) and cloud providers, CloudForms understands exactly how to operationally manage and standardize operational concepts across Red Hat Enterprise Virtualization, VMware vSphere, Amazon EC2, and OpenStack.

Cloud Providers

CloudProviders

Infrastructure (Datacenter Virtualization) Providers

InfraProviders

Also, as you noticed in the previous screens, CloudForms will support OpenStack as a cloud provider. This is critical to snapping another piece of Red Hat’s Open Hybrid Cloud Architecture into place and providing the operational management capabilities for OpenStack that IT organizations need.

OpenStack Cloud Provider

OpenStackProvider

These two enhancements will be critical for organizations who want a single pane of glass to operationally manage their Open Hybrid Cloud.

Single Pane Operational Management of RHEV, vSphere, AWS EC2, and OpenStack

SinglePane

Stay tuned for more updates regarding the next release of CloudForms!

Accelerating Service Delivery While Avoiding Silos

In a prior post on Red Hat’s Open Hybrid Cloud Architecture I discussed how IT consumers, having experienced the power of the public cloud, are pressing enterprise IT to deliver new capabilities. One of these capabilities is accelerated service delivery: the ability to more quickly develop and release new applications that meet a business need. In this post I’d like to examine how the Open Hybrid Cloud Architecture provides the means to satisfy this capability and how it differs from other approaches.

There are 1000 vendors who can provide accelerated service delivery, why not just buy a product?
Many vendors will try to sell a single product as being able to accelerate service delivery. The problem with this approach is that accelerating service delivery goes far beyond a single product, because no single product can provide all the components of application development that an IT consumer could want. Think about all the languages, frameworks, and technologies, from Java, .NET, and node.js to Hadoop, Cassandra, and Mongo to <insert your favorite technology name here>. The availability of all of these from a single product, vendor, or operating system in an optimized manner is highly unlikely. An approach that tries to accelerate service delivery within a single product or technology creates yet another silo and doesn’t solve the fundamental problem of accelerating service delivery across all of an IT organization’s assets.

How can Enterprise IT provide accelerated service delivery capabilities while avoiding a silo?
By leveraging an architecture that is flexible and where each component is aware of its neighbors, organizations can accelerate service delivery without building a silo. Even better, having a component within your architecture that has a comprehensive understanding of every other component means virtually endless possibility for workload deployment and management. Want to deploy your workload as a VM using PXE on Red Hat Enterprise Virtualization, a template within VMware vSphere, instances on OpenStack using Heat, or a gear in OpenShift? You can only do that if you understand each one of those technologies. Don’t build your logic for operations management into a single layer – keep it abstracted to ensure you can plug in whichever implementation of IaaS and PaaS best meets your needs. Does your application maintain too much state locally or scale vertically? Then it belongs on a traditional virtualization platform like VMware or RHEV. Is it a stateless scale-out application? Then you can deploy it on OpenStack. Are the languages and other dependencies available within a PaaS? Then it belongs in OpenShift. However, just deploying to each of those platforms is not enough. What about deploying one part of your workload as gears in OpenShift and another part as instances on OpenStack at the same time? You must be able to deploy to ALL platforms within the same workload definition! The Open Hybrid Cloud Architecture provides the foundation for such flexibility in deployment and management of workloads in the cloud.
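The placement questions in the paragraph above amount to a simple decision procedure. Here is a sketch, with workload attributes invented for illustration (this is not a CloudForms algorithm):

```python
# Sketch of the best-fit placement reasoning described above. The rule
# order and the workload attribute names are illustrative assumptions.

def best_fit(workload):
    """Pick a platform using the questions posed in the text."""
    if workload.get("runtime_in_paas"):
        # Languages and dependencies available in the PaaS -> OpenShift.
        return "OpenShift"
    if workload.get("stateless") and workload.get("scale_out"):
        # Stateless scale-out applications -> OpenStack.
        return "OpenStack"
    # Stateful or vertically scaling workloads -> datacenter virtualization.
    return "datacenter virtualization (RHEV/vSphere)"
```

Keeping this decision in the management layer, rather than hard-coding it into any one platform, is what preserves the flexibility the paragraph argues for.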

Can you provide an example?
Let’s look at an example of a developer who would like to build a new application for the finance team within his organization. The developer would like to use Ruby for a web front end and .NET within an IIS application server for some other functions. This developer expects the same capabilities he gets using Google App Engine: he wants to push code and have it running in seconds. The user requests a catalog item from CloudForms which provides two components. The first is a Ruby application running in the OpenShift PaaS. The second is a virtual machine running on either Red Hat Enterprise Virtualization, VMware vSphere, or Red Hat OpenStack. The service designer who built this catalog bundle recognized that Ruby applications can run in OpenShift, and because OpenShift hosts applications more efficiently than running each in its own virtual machine, the designer ensured that this component runs in the PaaS layer. OpenShift also automates the software development process, giving the end user of the designed service greater velocity in development. Since the IIS application server wasn’t available within the PaaS layer, the service designer used a virtual machine at the datacenter virtualization layer (vSphere) to provide this capability.

Step by Step
diagram01

1. The user requests the catalog item. CloudForms could optionally provide workflow (approval, quota, etc) and best fit placement at this point.

2. CloudForms provisions the ruby application in OpenShift Enterprise. The Ruby application is running as a gear.

3. CloudForms orchestrates the adding of an action hook into the OpenShift deployment. This can be done using any configuration management utility. I used puppet and The Foreman in my demo video below.

4. The user begins developing their ruby application. They clone the repository and then commit and push the changes.

5. The action hook within OpenShift is triggered by the deploy stage of the OpenShift lifecycle and calls the CloudForms API, requesting that a virtual machine be created.

6. CloudForms provisions the virtual machine.

This is really just the beginning of the process, but hopefully you can see where it’s going. CloudForms can perform the deployment and tear-down of the virtual machines each time a developer updates their application in OpenShift. It can even tie into other continuous integration systems to deploy application code into the IIS application server. This rapid delivery of the environment takes place across both the PaaS and the IaaS. It also doesn’t try to invent a new “standard description” across all the different types of models; instead it understands the models and methods of automation within each component of the architecture and orchestrates them. While the virtual machines running at the IaaS layer don’t provide the same level of density as the PaaS, CloudForms and OpenShift can be combined to provide similar operational efficiency and extend OpenShift’s accelerated service delivery across an IT organization’s entire base of assets.
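To illustrate step 5: the deploy-stage action hook essentially builds an authenticated request to CloudForms asking for a backing VM. Below is a sketch of the payload construction only; the endpoint path and field names are hypothetical, not the real CloudForms API (the demo's actual hook is a script performing an equivalent authenticated HTTP call):

```python
# Sketch of what an OpenShift deploy-stage action hook might assemble before
# calling CloudForms. The URL path and payload fields are hypothetical.

def build_provision_request(app_name, template, base_url):
    url = "%s/api/provision_requests" % base_url   # hypothetical endpoint
    payload = {
        "template": template,                 # VM template to clone
        "vm_name": "%s-backend" % app_name,   # tie the VM to the gear's app
        "requester": "openshift-action-hook",
    }
    return url, payload
```

Because the hook fires on every deploy, CloudForms sees each code push and can rebuild or tear down the IaaS-side resources in step with the PaaS-side application.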

I still don’t believe you, can you show me?
Want to see it in action? Check out this short video demonstration in either Ogg or Quicktime format.

You can download the action hook here.

You can download the OpenOffice Draw Diagram here.

This is cool, what would be even cooler?
If the client tools could be intercepted by CloudForms, it could provide a lot of operational management capabilities to OpenShift. For example, when `rhc app create` is run, CloudForms could apply approvals, workflow, and quotas to the OpenShift applications. Or perhaps a future command such as `rhc app promote` could utilize the approvals and automation engine inside CloudForms to provide controlled promotion of applications through a change control process.
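No such interception exists today, but a crude client-side shim hints at the idea: wrap the `rhc` command, and consult CloudForms before forwarding `app create` to the real client. The approval check here is stubbed with an environment variable so the sketch is self-contained; a real shim would call a CloudForms API endpoint instead.

```shell
#!/bin/bash
# Hypothetical shim that intercepts "rhc app create" for approval.
# The approval source and forwarding behavior are illustrative only.

approved() {
  # Placeholder for an API call such as:
  #   curl -s -u "$USER:$PASS" "https://cfme.example.com/api/approvals?app=$1"
  [ "${CFME_APPROVED:-yes}" = "yes" ]
}

rhc_wrapped() {
  if [ "$1" = "app" ] && [ "$2" = "create" ]; then
    if approved "$3"; then
      echo "approved: rhc $*"
      # exec rhc "$@"   # hand off to the real client when approved
    else
      echo "denied: request queued for approval in CloudForms" >&2
      return 1
    fi
  else
    echo "pass-through: rhc $*"
    # exec rhc "$@"    # all other subcommands pass straight through
  fi
}

rhc_wrapped app create finance-app ruby-1.9
```

A production version would live in front of the real `rhc` binary on the developer's PATH, so the developer workflow is unchanged while CloudForms gains a policy enforcement point.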

Auto Scaling OpenShift Enterprise Infrastructure with CloudForms Management Engine

OpenShift Enterprise, Red Hat’s Platform as a Service (PaaS), handles the management of application stacks so developers can focus on writing code. The result is faster delivery of services to organizations. OpenShift Enterprise runs on infrastructure, and that infrastructure needs to be both provisioned and managed. While provisioning OpenShift Enterprise is relatively straightforward, managing the lifecycle of an OpenShift Enterprise deployment requires the same considerations as other enterprise applications, such as updates and configuration management. Moreover, while OpenShift Enterprise can scale applications running within the PaaS based on demand, the OpenShift Enterprise infrastructure itself is static and unaware of the underlying infrastructure. This is by design: the mission of the PaaS is to automate the management of application stacks, and tightly coupling the PaaS with the compute resources at the physical and virtual layers would limit flexibility. While this architectural decision is justified given the wide array of computing platforms that OpenShift Enterprise can be deployed upon (any that Red Hat Enterprise Linux can run upon), many organizations would like to not only dynamically scale their applications running in the PaaS, but also dynamically scale the infrastructure supporting the PaaS itself. Organizations interested in scaling infrastructure in support of OpenShift Enterprise need look no further than CloudForms, Red Hat’s Open Hybrid Cloud Management Framework. CloudForms provides the capabilities to provision, manage, and scale OpenShift Enterprise’s infrastructure automatically based on policy.

For reference, the two previous posts I authored covered deploying the OpenShift Enterprise Infrastructure via CloudForms and deploying OpenShift Enterprise Applications (along with IaaS elements such as Virtual Machines) via CloudForms. Below are two screenshots of what this looks like for background.

image01

Operations User Deploying OpenShift Enterprise Infrastructure via CloudForms

image02

Self-Service User Deploying OpenShift Application via CloudForms

Let’s examine how these two automations can be combined to provide auto scaling of infrastructure to meet the demands of a PaaS. Today, most IT organizations monitor applications and respond to notifications after the event has already taken place, particularly when it comes to demand upon a particular application or service. There are a number of reasons for this approach, one of which is the legacy of “build to spec” systems found in both historical and current application architectures. As organizations transition to developing new applications on a PaaS, however, they are presented with an opportunity to reevaluate the static and often oversubscribed nature of their IT infrastructure. In short, while applications designed in the past were not [often] built to scale dynamically based on demand, the majority of new applications are, and this trend is accelerating. In line with this trend, the infrastructure underlying these new applications must support dynamic scaling, or much of the business value of dynamic scalability will not be realized. You could say that an organization’s dynamic scalability is bounded by its least scalable layer. This also holds true for organizations that intend to run solely on a public cloud and consume resources at the IaaS layer.

Here is an example of how scalability of a PaaS would currently be handled in many IT organizations.

diagram03

The operations user is alerted by a monitoring tool that the PaaS has run out of capacity to host new or scale existing applications.

diagram04

The operations user utilizes the IaaS manager to provision new resources (Virtual Machines) for the PaaS.

diagram05

The operations user manually configures the new resources for consumption by the PaaS.

Utilizing CloudForms to deploy, manage, and automatically scale OpenShift Enterprise removes the risk of manual configuration by the operations user while dynamically reclaiming unused capacity within the infrastructure. It also reduces the cost and complexity of maintaining a separate monitoring solution and IaaS manager. This translates to lower costs, greater uptime, and the ability to serve more end users. Here is how the process changes.

diagram06

Through notification from the PaaS platform, or by monitoring the infrastructure for specific conditions, CloudForms detects that the PaaS infrastructure is reaching its capacity. Thresholds can be defined using a wide array of metrics already available within CloudForms, such as aggregate memory utilized, disk usage, or CPU utilization.
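To make the threshold idea concrete, here is a sketch of the kind of condition CloudForms could evaluate: average memory utilization across the OpenShift node VMs compared against a scale-out threshold. The 80% figure and the sample metrics are invented for illustration; in CloudForms the equivalent logic would be expressed as a policy condition over its collected metrics rather than as a script.

```shell
#!/bin/bash
# Illustrative capacity check: does average utilization across the node
# VMs meet or exceed the scale-out threshold?

THRESHOLD="${THRESHOLD:-80}"

# Succeeds (exit 0) when the average of the given per-node utilization
# percentages meets or exceeds the threshold.
needs_scale_out() {
  local total=0 count=0 m
  for m in "$@"; do
    total=$((total + m))
    count=$((count + 1))
  done
  [ $((total / count)) -ge "$THRESHOLD" ]
}

# Sample per-node memory utilization figures (percent).
if needs_scale_out 85 90 78; then
  echo "scale-out: provision a new OpenShift node"
else
  echo "capacity OK"
fi
```

Whichever metric is chosen, the important property is that the condition fires before the PaaS is actually out of capacity, so the new node is ready by the time it is needed.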

diagram07

CloudForms examines conditions defined by the organization to determine whether the PaaS should receive more resources. In this case, it allows the PaaS to have more resources and provisions a new virtual machine to act as an OpenShift node. At this point CloudForms could require approval of the scaling event before moving forward. The operations user or a third-party system can receive an alert or event, but this is informational only and does not require the admin to perform any manual actions.

diagram08

Upon deploying the new virtual machine, CloudForms configures it appropriately. This could mean installing the VM from a provisioning system, or cloning a pre-defined template and registering it with a configuration management system, such as one based on Puppet or Chef, that configures the system.
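As a rough sketch, the configuration step could amount to a short command sequence CloudForms drives against the fresh VM. The Puppet master hostname and the node service name below are assumptions, not taken from a reference deployment; the `run` helper prints each command instead of executing it so the sketch is safe to try anywhere.

```shell
#!/bin/bash
# Illustrative post-provision sequence for turning a cloned template
# into an OpenShift node. Hostnames and service names are placeholders.
set -e

run() {
  # Print each command rather than executing it; replace the echo
  # with "$@" to execute for real.
  echo "+ $*"
}

run yum install -y puppet                                          # config management agent
run puppet agent --server foreman.example.com --onetime --no-daemonize  # apply node profile
run service openshift-node restart                                 # pick up new configuration
```

Once the configuration management run completes, the node registers with the OpenShift broker and the PaaS can immediately place new gears on it.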

Want to see a prototype in action? Check out the screencast I’ve recorded.

This same problem (the ability to dynamically scale a platform) exists between the IaaS and physical layers. If the IaaS layer runs out of resources, it is often unaware of the physical resources available for it to consume. This problem affects fewer organizations, because dynamically re-purposing physical hardware has a smaller and perhaps more specialized set of use cases (think HPC, grid, and deterministic workloads). Even so, it should be noted that CloudForms is able to provide a similar level of policy-based automation to physical hardware to extend the capacity of the IaaS layer if required.