OpenStack Packstack Installation with External Connectivity

Packstack makes installing OpenStack really easy. With the --allinone option you can have a working, self-contained RDO installation in minutes (and most of those minutes are spent waiting for packages to install). However, the --allinone option might as well be renamed --onlywithinone today: while it makes the installation very simple, it doesn't allow instances spun up in the resulting OpenStack environment to be reached from external systems. This is a problem if you are trying to both bring up an OpenStack environment quickly and demonstrate integration with systems outside of OpenStack. With a lot of help and education from Perry Myers and Terry Wilson on Red Hat's RDO team, I was able to make a few modifications so that a packstack --allinone installation allows external access to the instances launched on the host. I'm not sure this is a best practice, but here is how it works.

I started with a @base kickstart installation of Red Hat Enterprise Linux 6.4. First, I registered the system with subscription-manager and attached it to the RHEL Server repository. I then installed the latest RDO repository file for Grizzly, installed openvswitch, and updated the system. The update installs a new kernel.

# subscription-manager register
...
# subscription-manager list --available |egrep -i 'pool|name'
...
# subscription-manager attach --pool=YOURPOOLIDHERE
...
# rpm -ivh http://rdo.fedorapeople.org/openstack/openstack-grizzly/rdo-release-grizzly.rpm
...
# yum -y install openvswitch
...
# yum -y update

Before rebooting, I set up a bridge named br-ex by placing the following in /etc/sysconfig/network-scripts/ifcfg-br-ex.

DEVICE=br-ex
OVSBOOTPROTO=dhcp
OVSDHCPINTERFACES=eth0
NM_CONTROLLED=no
ONBOOT=yes
TYPE=OVSBridge
DEVICETYPE=ovs

I also changed the eth0 interface configuration by placing the following in /etc/sysconfig/network-scripts/ifcfg-eth0. This makes eth0 a port on the bridge we just set up.

DEVICE="eth0"
HWADDR="..."
TYPE=OVSPort
DEVICETYPE=ovs
OVS_BRIDGE=br-ex
UUID="..."
ONBOOT=yes
NM_CONTROLLED=no

At this point I rebooted the system so the updated kernel could be used. When the system comes back up, you should have a bridged interface named br-ex holding the IP address that was previously associated with eth0. I had a static DHCP lease for eth0 before starting, so even though the interface uses DHCP as its bootproto, it consistently receives the same address.

Now you need to install packstack.

# yum -y install openstack-packstack

Packstack's installer accepts an argument named --quantum-l3-ext-bridge:

--quantum-l3-ext-bridge=QUANTUM_L3_EXT_BRIDGE
The name of the bridge that the Quantum L3 agent will
use for external traffic, or ‘provider’ if using
provider networks

We will set this to eth0 so that the eth0 interface is used for external traffic. Remember, eth0 will be a port on br-ex in openvswitch, so it will be able to talk to the outside world through it.

Before we run the packstack installer, though, we need to make another change. Packstack's --allinone installation uses puppet templates to provide answers to the installation options. Options can be overridden when a command line switch exists, but packstack doesn't accept arguments for everything. For example, if you want the floating IP range to fall within the network range your eth0 interface is on, you'll need to edit a puppet template by hand.

Edit /usr/lib/python2.6/site-packages/packstack/puppet/modules/openstack/manifests/provision.pp and change $floating_range to a range suitable for the network eth0 is on. Packstack appears to use this variable to assign the floating IP address pool when --allinone is used.
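As a quick sanity check before editing the template, you can verify that a candidate floating range actually falls within eth0's network. This is a small sketch using Ruby's standard IPAddr library; the addresses below are examples, not values from this installation:

```ruby
require 'ipaddr'

# Hypothetical values: eth0 sits on 10.16.132.0/24 and we want to carve
# out 10.16.132.64/26 as the floating IP pool.
eth0_network   = IPAddr.new('10.16.132.0/24')
floating_range = IPAddr.new('10.16.132.64/26')

# include? is true when the whole candidate range lies inside eth0's network.
puts eth0_network.include?(floating_range)   # => true
```

If this prints false, instances given floating IPs from that range won't be reachable through eth0 without additional routing.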

One last modification before we run packstack, and thanks to Terry Wilson for pointing this out: we need to comment out a firewall rule added during the packstack run. It creates a NAT rule that effectively blocks inbound traffic to launched instances. Edit /usr/lib/python2.6/site-packages/packstack/puppet/templates/provision.pp and comment out the following lines.

firewall { '000 nat':
  chain  => 'POSTROUTING',
  jump   => 'MASQUERADE',
  source => $::openstack::provision::floating_range,
  outiface => $::gateway_device,
  table => 'nat',
  proto => 'all',
}

The ability to configure these via packstack arguments should eventually make its way into packstack. See this Bugzilla for more information.

That’s it, now you can fire up packstack by running the following command.

packstack --allinone --quantum-l3-ext-bridge=eth0

When it completes, it will tell you to reboot so a new kernel can take effect, but you don't need to: we already rebooted onto the updated kernel after running yum update with the RDO repository in place.

Your openvswitch configuration should look roughly like this when packstack finishes running.

# ovs-vsctl show
08ad9137-5eae-4367-8c3e-52f8b87e5415
    Bridge br-int
        Port "tap46aaff1f-cd"
            tag: 1
            Interface "tap46aaff1f-cd"
                type: internal
        Port br-int
            Interface br-int
                type: internal
        Port "qvod54d32dc-0b"
            tag: 1
            Interface "qvod54d32dc-0b"
        Port "qr-0638766f-76"
            tag: 1
            Interface "qr-0638766f-76"
                type: internal
    Bridge br-ex
        Port br-ex
            Interface br-ex
                type: internal
        Port "qg-3f967843-48"
            Interface "qg-3f967843-48"
                type: internal
        Port "eth0"
            Interface "eth0"
    ovs_version: "1.11.0"

Before we start provisioning instances in Horizon, let's take care of one last step and add two security group rules to allow SSH and ICMP to our instances.

# . ~/keystonerc_demo 
# nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
# nova secgroup-add-rule default tcp 22 22 0.0.0.0/0

Now you can log into Horizon as the demo user, whose credentials are stored in /root/keystonerc_demo, and provision an instance. Make sure you specify the private network for this instance; the packstack --allinone installation creates it automatically for the demo tenant. You'll also notice packstack uploaded an image named cirros into glance for you. Of course, this assumes you've already created a keypair.

Once the instance is launched, we associate a floating IP address with it.

Now we can ssh to it from outside the host.

$ ssh cirros@10.16.132.4
cirros@10.16.132.4's password: 
$ uptime
 00:52:20 up 14 min,  1 users,  load average: 0.00, 0.00, 0.00

Now we can get started with the fun stuff, like provisioning images from CloudForms onto RDO and using Foreman to automatically configure them!

Using the CloudForms Web Services API

The web services API provided by CloudForms Management Engine allows users to integrate external systems with CloudForms. For example, if you wanted an existing change control system to request services from a virtualization provider or public cloud, you could call the CloudForms SOAP API to initiate the virtual machine provisioning request method. Keep in mind that automate methods within CloudForms can be used to do just about anything, from opening a new incident in a change management system to checking the weather in your favorite city. In this post, however, I'll provide a simple example of how the Savon SOAP client and Ruby can be used to make a request to CloudForms to launch a virtual machine.

First you’ll need to install a few ruby gems on your system if you don’t already have them.

# gem install savon
....
# gem install httpclient
....
# gem install openssl
....
# gem install httpi
....
# gem install pp
....
# vi myscript.rb

The first section of the script will specify the interpreter and import the gems we installed via require.

#!/usr/bin/ruby

require 'savon'
require 'httpi'
require 'httpclient'
require 'openssl'
require 'pp'

The next section defines a Ruby module named DCA which contains a class named Worker. The module provides a namespace so the class name doesn't clash with an existing class of a similar name if you include this code in a larger body of Ruby. We will also define two methods. The build_automation_request method handles executing the request against the CloudForms Management Engine Web Services API; it accepts a hash that it passes to the web services API. The deploy_vm method accepts some arguments, provides others within its body, and then invokes build_automation_request with the body of the request. This includes things such as the template_name, vlan, vm_name, etc.


module DCA
  class Worker

    def build_automation_request(body_hash)
       # We will populate this with the request
    end

    def deploy_vm(template_name, vlan, ip, subnet, gateway, vm_name, domain_name, memory, cpus, add_disk, owner_email, customization_spec = 'linux')
       # We will build the request here, then pass it to build_automation_request

    end

  end
end

With the structure in place we can add the following to the build_automation_request method. Replace YOURUSER with your username, YOURPASSWORD with your password and CFMEIPADDRESS with the IP address of the CloudForms Management Engine running with the web services role enabled.


    def build_automation_request(body_hash)
      client = Savon.client(basic_auth: ["YOURUSER", "YOURPASSWORD"], ssl_verify_mode: :none, ssl_version: :TLSv1, wsdl: "https://CFMEIPADDRESS/vmdbws/wsdl")
      evm_response = client.call(:vm_provision_request, message: body_hash)
    end

Next we will populate the deploy_vm method. The body of the method builds several arrays and then combines them into a hash, which is passed to the build_automation_request method we created previously.


    def deploy_vm(template_name, vlan, ip, subnet, gateway, vm_name, domain_name, memory, cpus, add_disk, owner_email, customization_spec = 'linux')
      templateFields = []
      templateFields << "name=#{template_name}"
      templateFields << "request_type=template"
      vmFields = []
      vmFields << "vm_name=#{vm_name}"
      vmFields << "number_of_vms=1"
      vmFields << "vm_memory=#{memory}"
      vmFields << "number_of_cpus=#{cpus}"
#      The options below are useful for windows systems
#      options = []
#      options << "sysprep_custom_spec=#{customization_spec}"
#      options << "sysprep_spec_override=true"
#      options << "sysprep_domain_name=#{domain_name}"
      vmFields << "addr_mode=static"
      vmFields << "ip_addr=#{ip}"
      vmFields << "subnet_mask=#{subnet}"
      vmFields << "gateway=#{gateway}"
      vmFields << "vlan=#{vlan}"
      vmFields << "provision_type=PXE"
      requester = []
      #requester << "user_name=#{user_id}"
      requester << "owner_email=#{owner_email}"
      tags = []
      #options << "add_vdisk1=#{add_disk}"
      input =  {
          'version'        =>        '1.1',
          'templateFields'        =>        templateFields.join('|'),
          'vmFields'        =>        vmFields.join('|'),
          'requester'        =>        requester.join('|'),
          'tags'        =>        tags.join('|'),
          #'options'        =>        options
      }
      pp input
      response = build_automation_request(input)
      pp response
    end
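To make the wire format concrete: CFME expects each field list as a single pipe-delimited string, which is what the join('|') calls produce. A standalone snippet with example values:

```ruby
# Example values only; these mirror the vmFields array built in deploy_vm.
vm_fields = []
vm_fields << "vm_name=testvm01"
vm_fields << "number_of_vms=1"
vm_fields << "vm_memory=2048"

joined = vm_fields.join('|')
puts joined   # => vm_name=testvm01|number_of_vms=1|vm_memory=2048
```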

Finally, we create a new worker object by instantiating the Worker class and call its deploy_vm method with the arguments we wish to use.

w = DCA::Worker.new
r = w.deploy_vm("win2k8tmpl", "VM Network", "172.28.158.125", "255.255.255.128", "172.28.158.1",
                "VMNAME", "sys.mycustomer.net", 2048, 2, 15, "yourname@yourdomain.com", "linux")
puts "Guess what! I built me a vm! #{r}"

That's it. When this script is executed it should pretty-print your input hash along with the response from the request. If all goes well you should end up with a virtual machine running on your provider!

# chmod 755 myscript.rb
# ./myscript.rb
.... 
Guess what! I built me a vm! ...

Keep in mind you should use more robust error handling in any serious use, and something more secure than basic auth for authentication between remote systems.
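For instance, a small retry helper could guard against transient network failures when calling the API. This is a generic sketch; `call_with_retries` is a name made up for illustration and is not part of Savon or CFME:

```ruby
# Retry a block up to `attempts` times before giving up and
# re-raising the last error.
def call_with_retries(attempts: 3)
  tries = 0
  begin
    yield
  rescue StandardError => e
    tries += 1
    retry if tries < attempts
    raise e
  end
end

# In myscript.rb you might wrap the SOAP call like this:
# response = call_with_retries { client.call(:vm_provision_request, message: body_hash) }
```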

You can download the entire script here.

Building the Bridge Between Present and Future IT Architectures

Life isn't easy for IT organizations today. They find themselves on the receiving end of demands for new capabilities that public cloud providers are delivering at increasing speed. While solutions are beginning to deliver these same capabilities in the private datacenter, IT organizations don't want to build yet another silo. Red Hat's Open Hybrid Cloud Architecture is helping IT organizations adopt next generation IT architectures to meet the increasing demand for public cloud capabilities while establishing a common framework for all their IT assets. This approach provides many benefits across all IT architectures. To name a few:

  • Discovery and Reporting: Detailed information about all workloads across all cloud and virtualization providers.
  • Self-Service: A single catalog that can provision services across hybrid and heterogeneous public and private clouds.
  • Best-Fit Placement: Helping identify which platform is best for which workload, both at provision time and at run time.

The engineers at Red Hat have been hard at work on the next release of CloudForms which is scheduled for General Availability later this year. I’ve been lucky enough to get my hands on a very early preview and wanted to share an update on two enhancements that are relevant to the topic of bridging present and future IT architectures. Before I dive into the enhancements let me get two pieces of background out of the way:

  1. Red Hat believes that the future IT architecture for Infrastructure as a Service (IaaS) is OpenStack. That shouldn't come as a big surprise given that Red Hat was a major contributor to the OpenStack Grizzly release and has established a community for its distribution called RDO.
  2. There is a big difference between datacenter virtualization and clouds and knowing which workloads should run on which is important. For more information on this you can watch Andy Cathrow’s talk at Red Hat Summit.

Two of the enhancements coming in the next release of CloudForms are the clear distinction between datacenter virtualization and cloud providers and the addition of OpenStack as a supported cloud provider.

By clearly separating datacenter virtualization (called infrastructure providers in the user interface) from cloud providers, CloudForms understands exactly how to manage and standardize operational concepts across Red Hat Enterprise Virtualization, VMware vSphere, Amazon EC2, and OpenStack.

Cloud Providers

Infrastructure (Datacenter Virtualization) Providers

Also, as you noticed in the previous screens CloudForms will support OpenStack as a cloud provider. This is critical to snapping in another piece of the puzzle of Red Hat’s Open Hybrid Cloud Architecture and providing all the operational management capabilities to OpenStack that IT organizations need.

OpenStack Cloud Provider

These two enhancements will be critical for organizations who want a single pane of glass to operationally manage their Open Hybrid Cloud.

Single Pane Operational Management of RHEV, vSphere, AWS EC2, and OpenStack

Stay tuned for more updates regarding the next release of CloudForms!

Accelerating Service Delivery While Avoiding Silos

In a prior post on Red Hat's Open Hybrid Cloud Architecture I discussed how IT consumers, having experienced the power of the public cloud, are pressing Enterprise IT to deliver new capabilities. One of these capabilities is accelerated service delivery: the ability to more quickly develop and release new applications that meet a business need. In this post I'd like to examine how the Open Hybrid Cloud Architecture provides the means to satisfy this capability and how it differs from other approaches.

There are 1000 vendors who can provide accelerated service delivery; why not just buy a product?
Many vendors will try to sell a single product as being able to accelerate service delivery. The problem with this approach is that accelerating service delivery goes far beyond a single product, because no single product can provide all the components of application development that an IT consumer could want. Think about all the languages, frameworks, and technologies, from Java, .NET, and node.js to Hadoop, Cassandra, and Mongo, to <insert your favorite technology name here>. The availability of all of these from a single product, vendor, or operating system in an optimized manner is highly unlikely. An approach that tries to accelerate service delivery within a single product or technology creates yet another silo and doesn't solve the fundamental problem of accelerating service delivery across all of an IT organization's assets.

How can Enterprise IT provide accelerated service delivery capabilities while avoiding a silo?
By leveraging an architecture that is flexible and where each component is aware of its neighbors, organizations can accelerate service delivery without building a silo. Even better, having a component within your architecture that has a comprehensive understanding of every other component means virtually endless possibilities for workload deployment and management. Want to deploy your workload as a VM using PXE on Red Hat Enterprise Virtualization, a template within VMware vSphere, instances on OpenStack using Heat, or a gear in OpenShift? You can only do that if you understand each one of those technologies. Don't build your logic for operations management into a single layer: keep it abstracted to ensure you can plug in whichever implementation of IaaS and PaaS best meets your needs. Does your application maintain too much state locally or scale vertically? Then it belongs on a traditional virtualization platform like VMware or RHEV. Is it a stateless scale-out application? Then you can deploy it on OpenStack. Are the languages and other dependencies available within a PaaS? Then it belongs in OpenShift. However, just deploying to each of those platforms is not enough. What about deploying one part of your workload as gears in OpenShift and another part as instances on OpenStack at the same time? You must be able to deploy to ALL platforms within the same workload definition! The Open Hybrid Cloud Architecture provides the foundation for such flexibility in deployment and management of workloads in the cloud.

Can you provide an example?
Let's look at an example of a developer who would like to build a new application for the finance team within his organization. The developer would like to use Ruby as a web front end and .NET within an IIS application server to perform some other functions. This developer expects the same capabilities he gets using Google App Engine: he wants to push code and have it running in seconds. The user requests a catalog item from CloudForms which provides him with two components. The first is a Ruby application running in the OpenShift PaaS. The second is a virtual machine running on either Red Hat Enterprise Virtualization, VMware vSphere, or Red Hat OpenStack. The service designer who built this catalog bundle recognized that Ruby applications can run in OpenShift, and because OpenShift hosts applications more efficiently than running each application in its own virtual machine, the designer ensured that this component runs in the PaaS layer. OpenShift also automates the software development process, which gives the end user of the designed service greater velocity in development. Since the IIS application server wasn't available within the PaaS layer, the service designer used a virtual machine at the datacenter virtualization layer (vSphere) to provide this capability.

Step by Step

1. The user requests the catalog item. CloudForms could optionally provide workflow (approval, quota, etc) and best fit placement at this point.

2. CloudForms provisions the ruby application in OpenShift Enterprise. The Ruby application is running as a gear.

3. CloudForms orchestrates adding an action hook into the OpenShift deployment. This can be done with any configuration management utility; I used puppet and The Foreman in my demo video below.

4. The user begins developing their ruby application. They clone the repository and then commit and push the changes.

5. The action hook within OpenShift is triggered by the deploy stage of the OpenShift lifecycle and calls the CloudForms API, requesting that a virtual machine be created.

6. CloudForms provisions the virtual machine.

This is really just the beginning of the process, but hopefully you can see where it's going. CloudForms can perform the deployment and teardown of the virtual machines each time a developer updates their application in OpenShift. It can even tie into other continuous integration systems to deploy application code into the IIS application server. This rapid delivery of the environment takes place across both the PaaS and IaaS. It also doesn't try to invent a new "standard description" across all the different types of models; instead it understands the models and methods of automation within each component of the architecture and orchestrates them. While the virtual machines running at the IaaS layer don't provide the same level of density as the PaaS, CloudForms and OpenShift can be combined to provide similar operational efficiency and extend OpenShift's accelerated service delivery across an IT organization's entire base of assets.
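To give a feel for what the action hook does (an illustrative sketch, not the downloadable hook itself; the field names and helper here are placeholders I invented), the deploy hook essentially builds a provisioning request and POSTs it to CloudForms:

```ruby
require 'uri'

# Build the form-encoded body a deploy hook might send to CloudForms.
# Field names are placeholders, not the real CFME API parameters.
def provision_request_body(template:, vm_name:)
  URI.encode_www_form('template' => template, 'vm_name' => vm_name)
end

body = provision_request_body(template: 'rhel64tmpl', vm_name: 'app-backend-01')
puts body   # => template=rhel64tmpl&vm_name=app-backend-01

# A hook placed in .openshift/action_hooks/deploy would then send this
# with Net::HTTP over HTTPS, authenticating as a CFME service user.
```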

I still don’t believe you, can you show me?
Want to see it in action? Check out this short video demonstration in either Ogg or Quicktime format.

You can download the action hook here.

You can download the OpenOffice Draw Diagram here.

This is cool, what would be even cooler?
If the client tools could be intercepted by CloudForms, it could provide a lot of operational management capabilities to OpenShift. For example, when `rhc app create` is run, CloudForms could apply approvals, workflow, and quota to the OpenShift application. Or perhaps a future command such as `rhc app promote` could utilize the approvals and automation engine inside CloudForms to provide controlled promotion of applications through a change control process.

Red Hat’s Open Hybrid Cloud Architecture

IT consumers have traditionally satisfied their requirements for services through their internal IT departments. The type of service consumed has evolved over time; most recently, consumption is dominated by virtual machines. More advanced internal IT departments may offer more service-oriented consumption in the form of standardized application stacks running on top of virtual machines. Procuring such services from an internal IT department can take days, weeks, or even months. The length of procurement can be attributed to complex architectures as well as business requirements, such as governance and compliance, that IT organizations must follow.

In the search to innovate faster, IT consumers have begun to recognize the value of public clouds to more quickly provide the services they need. Whether Infrastructure as a Service (IaaS) or Platform as a Service (PaaS), IT consumers began to utilize these public cloud providers. They enjoyed increased agility and a consumption model that let them use computing as a utility. While using public cloud providers is appropriate for certain workloads, IT organizations have struggled to maintain compliance, governance, and control over business-critical assets in the public cloud. At the same time, IT consumers' expectations of what IT organizations should provide have dramatically increased.

The increased expectations of the IT consumer are being transferred to the IT organization in the form of increased demands. Increased demand for self-service, elastic infrastructure and applications, the ability to more rapidly deliver environments, and accelerated application development are some of the specific demands being driven by the experience the IT consumer has had while using the public cloud. The IT consumer is losing patience with IT organizations and the threat of shadow IT organizations is real. IT organizations would like to deliver these capabilities to the IT consumer and would like to maintain their operational practices over the delivery of such capabilities. IT organizations also recognize that the shift to a next generation IT architecture is an opportunity to make strategic decisions to both simplify their IT architecture and address concerns that have been plaguing them in the architectures of the past. These strategic decisions include embracing an architecture that provides choice, agility, openness, and leverages existing investments.

Choice is important to Operations and Developers
Operations teams need the ability to deploy workloads on a choice of infrastructure providers and to seamlessly manage those workloads once deployed. Without the ability to easily deploy and move workloads from one infrastructure provider to another, operations teams are stuck with a single infrastructure provider. Being locked into a single infrastructure provider prevents operations teams from leveraging innovation from other providers or choosing the right provider for the right workload. Development teams also require choice. A broad choice of languages and frameworks, with support for polyglot, poly-framework applications, is an expectation of development teams, because each language and framework provides important innovations that can be assembled to solve complex business problems efficiently in a way that a single language alone cannot.

Agility and Openness are critical to maintaining relevance with the IT consumer

Agility will allow IT organizations to remain relevant to the IT consumer. By quickly providing new languages, frameworks, and solutions to complex problems, IT operations can become a strategic partner to the IT consumer instead of being viewed as simply an expense. By choosing a next generation IT architecture based on openness, IT organizations can ensure that future innovation can be more easily adopted and that future investments are more easily consumable than in today's architectures.
Leverage existing investments alongside a Next Generation Architecture

IT organizations have invested heavily in the current IT architectures and the next generation IT architecture needs to leverage those existing investments. Meanwhile, IT consumers are requesting specific capabilities from IT organizations as a result of their experience with public cloud providers that are not available in current IT architectures.

Red Hat’s Open Hybrid Cloud Architecture provides these capabilities today while balancing the strategic requirements IT organizations need in their next generation IT architecture. It all starts with a federated, highly scalable, and extensible operational management platform for cloud which provides discovery, capacity planning, reporting, audit and compliance, analytics, monitoring, orchestration, policy, and chargeback functionality. These capabilities are extended throughout all aspects of the Open Hybrid Cloud Architecture to provide a unified approach to management through a single pane of glass.

Within the infrastructure layer existing investments in physical systems and datacenter virtualization platforms can be unified with the next generation IT architectures of IaaS private and public clouds. Existing investments in application architectures can be managed in their existing environments through a single pane of glass which also provides insight into next generation IT architectures of private and public PaaS platforms.

The Open Hybrid Cloud Architecture's operational management platform goes beyond a rudimentary understanding of deploying workloads to providers. It is extended to provide deep integration with automation frameworks in both the infrastructure and application layers. By leveraging these automation frameworks, the Open Hybrid Cloud Architecture allows for new levels of flexibility and efficiency in workload placement and analysis. This approach of deep integration of loosely coupled systems forms the basis by which IT organizations can provide the IT consumer with the capabilities they have come to expect through their use of public clouds, without building a cloud silo.

Elastic Infrastructure

Red Hat's Open Hybrid Cloud Architecture provides elastic infrastructure via its Infrastructure as a Service (IaaS) component and related infrastructure automation capabilities. The architecture not only provides elastic infrastructure via IaaS, but also provides consistent management across a broad range of other infrastructure, including physical systems, datacenter virtualization, and IaaS public clouds. This allows IT organizations to extend the benefits of cloud computing across their existing investments and provides a single-pane-of-glass view of their resources. This comprehensive view of all computing resources provides the information IT organizations need to optimize workload placement. For example, with capacity and utilization data from workloads running on datacenter virtualization platforms, IT organizations can determine which workloads are the best targets for moving to IaaS clouds, both private and public. Without a comprehensive view of all computing resources, elastic infrastructure based on IaaS alone is yet another management silo for IT organizations.

Elastic Applications
The benefits of cloud economics cannot be realized through elastic infrastructure alone; they require applications and application platforms. Next generation applications must be designed with the core tenets of cloud computing in mind in order to take advantage of underlying elastic infrastructure. Red Hat's Open Hybrid Cloud Architecture provides a Platform as a Service (PaaS) component that lets users develop elastic applications which expand and contract based on demand, allowing IT organizations to realize the full benefit of cloud economics.

Self-Service

Red Hat's Open Hybrid Cloud Architecture provides a single self-service portal that allows application designers to publish services spanning multiple cloud service models to catalogs for consumption. This unique capability is made possible by rich automation and workflow engines within Red Hat's cloud management platform and open APIs within Red Hat's datacenter virtualization, Infrastructure as a Service (IaaS), and Platform as a Service (PaaS) components. Once a service is published to a catalog, users can deploy complex applications through an easy-to-use, browser-based interface and begin working immediately. IT organizations can combine automation and workflow capabilities with capacity and utilization data to intelligently place workloads on the resources best suited to their performance, capacity, or security requirements. Finally, the Open Hybrid Cloud Architecture lets IT organizations perform showback and chargeback across both IaaS and PaaS platforms through a single pane of glass. This suits the utility consumption model IT consumers have grown accustomed to with public cloud providers.


Accelerated Application Development

The Open Hybrid Cloud Architecture enables faster application development by providing automation at both the application and infrastructure layers, ensuring that accelerated application development can be realized across an IT organization’s entire base of investments. Without a solid understanding of both layers, the benefits of accelerated application development are limited to the development paradigms of a single layer. Furthermore, without support for heterogeneity within both the infrastructure and application layers, choice is limited. By allowing a broad choice of applications and frameworks, and a broad choice of infrastructure providers to run those applications on, IT organizations gain options that lead to lower costs, better performance, and competitive advantages. With a unified understanding of both applications and infrastructure, changes made to a service during development can be captured and integrated into existing change management systems. This combination of automation and control at all layers, across heterogeneous infrastructure and applications, delivers accelerated application development throughout all of an IT organization’s resources.


Rapid Environment Delivery

Delivery of environments to IT consumers and to the development teams within IT operations is critical to accelerating application development. Without a holistic understanding of both the application lifecycle and the underlying infrastructure, delivery of environments will be inefficient or slow. For example, if the orchestration and provisioning of environments understands only application lifecycle concepts and lacks an understanding of the underlying infrastructure, then the use of infrastructure will not be optimized: placement of applications on the platforms that offer the best cost, performance, or security attributes would not be possible. Similarly, if the orchestration and provisioning of environments understands only infrastructure concepts, it cannot automate the application lifecycle, leading to incomplete environments. The Open Hybrid Cloud Architecture’s provisioning and orchestration of environments understands both application lifecycle management and the underlying infrastructure. This provides end users of the environment with an elevated user experience while giving operations teams maximum efficiency in hosting applications. With a firm understanding of both applications and infrastructure, the architecture allows flexible and continuous best-fit placement of applications across deployment models. Certain parts of an application can run in a Platform as a Service (PaaS) while others run in virtual machines within the Infrastructure as a Service (IaaS), all while retaining the benefits, such as rapid elasticity, of the highest-order cloud model, PaaS.

The Open Hybrid Cloud Architecture

Red Hat’s Open Hybrid Cloud Architecture provides the capabilities IT consumers and IT organizations want, with the strategic characteristics they need. By delivering Self-Service, Elastic Applications and Infrastructure, Accelerated Application Development, and Rapid Environment Delivery, IT organizations can meet the rising expectations of IT consumers. At the same time, the Open Hybrid Cloud Architecture meets the strategic needs of choice, agility, and openness. The architecture also allows IT organizations to leverage their existing investments and provides an evolutionary approach to adoption.

Where to begin
Each IT organization is different, but there are some actions any organization can take to begin the journey toward an Open Hybrid Cloud Architecture. By understanding all of its assets and the capacity and utilization metrics for them, an IT organization can better determine which components of the Open Hybrid Cloud Architecture will yield the most benefit. Once assets and their capacity and utilization metrics are well understood, a plan that implements the components of the Open Hybrid Cloud Architecture in phases can be created.

Download the OpenOffice Draw file used in the diagrams here

CloudForms and The Foreman Demonstration

The Foreman is a complete lifecycle management tool for physical and virtual servers. In other words, all the DHCP, DNS, configuration management, and related services an operating system requires to function properly – The Foreman handles them, and handles them well. It also has an architecture built on Smart-Proxies, which provide a RESTful API to underlying subsystems in distributed environments. While The Foreman understands some infrastructure management concepts, it is primarily focused on provisioning and managing virtual and physical machines, and on managing the operating systems within those machines once they are deployed. Provisioning and configuration management are two areas that are very important to enterprise customers, as together they reduce operating costs while simultaneously reducing risk, especially in the dynamic world of cloud computing.

The combination of provisioning and configuration management The Foreman provides is compelling, because it uses standard technologies that have existed for years and provides robust federation of those technologies. However, many IT organizations may already have a provisioning system or configuration management engine in place. Enterprises need a way of continuing to leverage their existing investment while planning their next generation IT architectures. CloudForms can assist with the adoption of The Foreman in existing IT architectures and also lays the groundwork for exciting new possibilities in streamlining application delivery across heterogeneous infrastructure.

CloudForms provides discovery, monitoring, eventing, control policies, chargeback, catalog based self-service – all of which are important. It also abstracts various provisioning methods for infrastructure and is not bound to a single configuration management system.

For this reason, and as you might suspect from the title of this post, it is only natural that CloudForms and The Foreman should complement each other. Here are a few of the ways they can:

  1. CloudForms can assist with the discovery and import of brown-field environments into The Foreman.
  2. CloudForms can allow users to leverage different provisioning systems while using The Foreman for configuration management.
  3. CloudForms can promote systems between environments (dev/test/prod) in The Foreman based on the data it contains or by integrating with external systems (ticketing, change control, capacity and utilization data).
  4. The Foreman can provide facts about systems to CloudForms for reporting and use in control policies, providing greater insight with less overhead.

These are just a few ideas; there are many more useful scenarios. One other scenario may be possible soon: an application developer implements a change in a PaaS application and a change in a Virtual Machine (VM) within an IaaS provider in a development environment. Perhaps this developer needed a virtual machine running IIS, which the PaaS doesn’t yet support. The PaaS event (a source control check-in) and the drift detection provided by The Foreman can be correlated by CloudForms, and a workflow can be initiated to re-provision the PaaS application and corresponding virtual machine to a continuous testing environment for analysis, while taking into account cost, performance, and security requirements. The pieces are coming together to make this scenario a reality.
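The correlation step in that scenario can be sketched in a few lines. This is a hypothetical illustration, not CloudForms or Foreman code: the event shapes, field names, and time window are all assumptions made for the example.

```python
# Hypothetical sketch: correlate a PaaS source-control check-in with a
# Foreman drift-detection report for the same service, as a trigger for a
# re-provisioning workflow. Event dictionaries are illustrative assumptions.

def correlate(paas_events, drift_reports, window=300):
    """Yield (service, checkin, drift) tuples when a check-in and a drift
    report for the same service occur within `window` seconds of each other."""
    for checkin in paas_events:
        for drift in drift_reports:
            same_service = checkin["service"] == drift["service"]
            close_in_time = abs(checkin["time"] - drift["time"]) <= window
            if same_service and close_in_time:
                yield (checkin["service"], checkin, drift)

paas_events = [{"service": "storefront", "time": 1000, "commit": "abc123"}]
drift_reports = [{"service": "storefront", "time": 1120, "host": "iis-vm01"}]

for service, checkin, drift in correlate(paas_events, drift_reports):
    # A real workflow would re-provision the PaaS application and VM to a
    # continuous testing environment; here we simply report the match.
    print(f"re-provision {service}: commit {checkin['commit']}, drift on {drift['host']}")
```

In a production implementation the events would arrive from the PaaS broker and the Foreman reports API rather than in-memory lists, but the matching logic would look much the same.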

Here is a quick demonstration of how these two systems can work together. High-Res QuickTime Format

One final note: Since The Foreman will be included in a future version of Red Hat Satellite, it is likely that the integration between CloudForms and The Foreman will only improve over time.

Auto Scaling OpenShift Enterprise Infrastructure with CloudForms Management Engine

OpenShift Enterprise, Red Hat’s Platform as a Service (PaaS), handles the management of application stacks so developers can focus on writing code. The result is faster delivery of services to organizations. OpenShift Enterprise runs on infrastructure, and that infrastructure needs to be both provisioned and managed. While provisioning OpenShift Enterprise is relatively straightforward, managing the lifecycle of the deployment requires the same considerations as other enterprise applications, such as updates and configuration management. Moreover, while OpenShift Enterprise can scale the applications running within the PaaS based on demand, the OpenShift Enterprise infrastructure itself is static and unaware of the underlying compute resources. This is by design: the mission of the PaaS is to automate the management of application stacks, and tightly coupling the PaaS with the compute resources at the physical and virtual layers would limit flexibility. While this architectural decision is justified given the wide array of computing platforms OpenShift Enterprise can be deployed on (any that Red Hat Enterprise Linux can run on), many organizations would like to dynamically scale not only the applications running in the PaaS, but also the infrastructure supporting the PaaS itself. Organizations interested in scaling infrastructure in support of OpenShift Enterprise need look no further than CloudForms, Red Hat’s Open Hybrid Cloud Management Framework. CloudForms provides the capabilities to provision, manage, and scale OpenShift Enterprise’s infrastructure automatically based on policy.

For reference, the two previous posts I authored covered deploying the OpenShift Enterprise Infrastructure via CloudForms and deploying OpenShift Enterprise Applications (along with IaaS elements such as Virtual Machines) via CloudForms. Below are two screenshots of what this looks like for background.

image01

Operations User Deploying OpenShift Enterprise Infrastructure via CloudForms

image02

Self-Service User Deploying OpenShift Application via CloudForms

Let’s examine how these two automations can be combined to provide auto scaling of infrastructure to meet the demands of a PaaS. Today, most IT organizations monitor applications and respond to notifications after an event has already taken place – particularly when it comes to demand on a particular application or service. One reason for this approach is the legacy of “build to spec” systems that persists in both historical and currently designed application architectures. As organizations transition to developing new applications on a PaaS, however, they have an opportunity to reevaluate the static and often oversubscribed nature of their IT infrastructure. In short, while applications designed in the past were not [often] built to scale dynamically based on demand, the majority of new applications are, and this trend is accelerating. In line with this trend, the infrastructure underlying these new applications must support dynamic scalability, or much of its business value will not be realized. You could say that an organization’s dynamic scalability is bounded by its least scalable layer. This also holds true for organizations that intend to run solely on a public cloud and leverage resources at the IaaS layer.

Here is an example of how scalability of a PaaS would currently be handled in many IT organizations.

diagram03

The operations user is alerted by a monitoring tool that the PaaS has run out of capacity to host new or scale existing applications.

diagram04

The operations user utilizes the IaaS manager to provision new resources (Virtual Machines) for the PaaS.

diagram05

The operations user manually configures the new resources for consumption by the PaaS.

Utilizing CloudForms to deploy, manage, and automatically scale OpenShift Enterprise removes the risk of manual configuration by the operations user while dynamically reclaiming unused capacity within the infrastructure. It also reduces the cost and complexity of maintaining a separate monitoring solution and IaaS manager. This translates to lower costs, greater uptime, and the ability to serve more end users. Here is how the process changes.

diagram06

Through notification from the PaaS platform, or by monitoring the infrastructure for specific conditions, CloudForms detects that the PaaS infrastructure is reaching its capacity. Thresholds can be defined using a wide array of metrics already available within CloudForms, such as aggregate memory utilization, disk usage, or CPU utilization.
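As a rough sketch of this kind of threshold check, the following shows how aggregate metrics across the OpenShift Node VMs might drive a capacity alert. The function, field names, and threshold values are illustrative assumptions, not CloudForms APIs.

```python
# Hypothetical sketch of a capacity-threshold check over PaaS node metrics.
# All names and numbers are assumptions for illustration.

def paas_needs_capacity(nodes, mem_threshold=0.85, cpu_threshold=0.90):
    """Return True if aggregate memory utilization or average CPU
    utilization across the OpenShift Node VMs crosses a defined threshold."""
    total_mem = sum(n["mem_total"] for n in nodes)
    used_mem = sum(n["mem_used"] for n in nodes)
    avg_cpu = sum(n["cpu_util"] for n in nodes) / len(nodes)
    return (used_mem / total_mem) >= mem_threshold or avg_cpu >= cpu_threshold

# Two nodes whose combined memory usage (~90%) exceeds the 85% threshold,
# even though CPU utilization is comfortable.
nodes = [
    {"mem_total": 16384, "mem_used": 15000, "cpu_util": 0.70},
    {"mem_total": 16384, "mem_used": 14500, "cpu_util": 0.65},
]
print(paas_needs_capacity(nodes))
```

Within CloudForms itself, such a check would be expressed as an alert or control policy over the collected capacity and utilization data rather than hand-written code, but the decision logic is the same.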

diagram07

CloudForms examines conditions defined by the organization to determine whether the PaaS should receive more resources. In this case, it allows the PaaS to have more resources and provisions a new virtual machine to act as an OpenShift Node. At this point CloudForms could require approval of the scaling event before moving forward. The operations user or a third-party system can receive an alert or event, but this is informational, not a request for the operations user to perform any manual actions.

diagram08

Upon deploying the new virtual machine, CloudForms configures it appropriately. This could mean installing the VM from a provisioning system, or utilizing a pre-defined template and registering the VM with a configuration management system, such as one based on Puppet or Chef, to configure the system.
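To illustrate that hand-off to configuration management, here is a minimal sketch of the bootstrap commands a workflow might run on the new node to register it with a Puppet master. The hostnames, package names, and command sequence are assumptions for illustration, not the actual CloudForms provisioning workflow.

```python
# Hypothetical sketch: generate the bootstrap commands a provisioning
# workflow might run on a newly deployed VM to hand it over to Puppet.
# Hostnames and commands are illustrative assumptions.

def node_bootstrap(hostname, puppet_master="puppet.example.com"):
    """Return the shell commands that would register the new VM as a
    managed node with the given Puppet master."""
    return [
        f"hostname {hostname}",
        "yum -y install puppet",
        f"puppet agent --server {puppet_master} --waitforcert 60 --test",
    ]

for cmd in node_bootstrap("node03.paas.example.com"):
    print(cmd)
```

Once the Puppet agent's certificate is signed on the master, the node picks up the OpenShift Node configuration on its next agent run, completing the scaling event without manual intervention.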

Want to see a prototype in action? Check out the screencast I’ve recorded.

This same problem (the ability to dynamically scale a platform) exists between the IaaS and physical layers. If the IaaS layer runs out of resources, it is often not aware of the physical resources available for it to consume. Relatively few organizations encounter this problem, because dynamically re-purposing physical hardware has a smaller and more specialized set of use cases (think HPC, grid, and deterministic workloads). Even so, it should be noted that CloudForms can provide a similar level of policy-based automation for physical hardware to extend the capacity of the IaaS layer if required.
