
OpenStack Packstack Installation with External Connectivity

Packstack makes installing OpenStack REALLY easy. By using the --allinone option you can have a working self-contained RDO installation in minutes (and most of those minutes are spent waiting for packages to install). However, the --allinone option might better be named --onlywithinone today, because while it makes the installation very simple, it doesn't allow instances spun up on the resulting OpenStack environment to be reached from external systems. This can be a problem if you are trying to both bring up an OpenStack environment quickly and demonstrate integration with systems outside of OpenStack. With a lot of help and education from Perry Myers and Terry Wilson on Red Hat's RDO team, I was able to make a few modifications to the packstack installation that let you run packstack with --allinone and still have external access to the instances launched on the host. While I'm not sure this is a best practice, here is how it works.

I started with a @base kickstart installation of Red Hat Enterprise Linux 6.4. First, I registered the system via subscription-manager and attached a subscription that provides the RHEL Server repository. I then installed the latest RDO repository file for Grizzly, installed openvswitch, and updated the system. The update will install a new kernel.

# subscription-manager register
...
# subscription-manager list --available |egrep -i 'pool|name'
...
# subscription-manager attach --pool=YOURPOOLIDHERE
...
# rpm -ivh http://rdo.fedorapeople.org/openstack/openstack-grizzly/rdo-release-grizzly.rpm
...
# yum -y install openvswitch
...
# yum -y update

Before I rebooted, I set up a bridge named br-ex by placing the following in /etc/sysconfig/network-scripts/ifcfg-br-ex.

DEVICE=br-ex
OVSBOOTPROTO=dhcp
OVSDHCPINTERFACES=eth0
NM_CONTROLLED=no
ONBOOT=yes
TYPE=OVSBridge
DEVICETYPE=ovs

I also changed the eth0 interface by placing the following in /etc/sysconfig/network-scripts/ifcfg-eth0. This configuration makes it a port on the bridge we previously set up.

DEVICE="eth0"
HWADDR="..."
TYPE=OVSPort
DEVICETYPE=ovs
OVS_BRIDGE=br-ex
UUID="..."
ONBOOT=yes
NM_CONTROLLED=no

At this point I rebooted the system so the updated kernel could be used. When it comes back up you should have a bridge named br-ex which holds the IP address that was previously associated with eth0. I had a static DHCP lease for eth0 prior to starting, so even though the interface was set to use DHCP as its bootproto it receives the same address consistently.
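
To confirm the bridge came up as expected, a quick check with openvswitch and iproute should show eth0 attached to br-ex and the address now bound to the bridge:

# ovs-vsctl list-ports br-ex
eth0
# ip addr show br-ex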

Now you need to install packstack.

# yum -y install openstack-packstack

Packstack’s installation accepts an argument named quantum-l3-ext-bridge.

--quantum-l3-ext-bridge=QUANTUM_L3_EXT_BRIDGE
The name of the bridge that the Quantum L3 agent will
use for external traffic, or ‘provider’ if using
provider networks

We will set this to eth0 so that the eth0 interface is used for external traffic. Remember, eth0 will be a port on br-ex in openvswitch, so it will be able to talk to the outside world through it.

Before we run the packstack installer though, we need to make another change. Packstack's --allinone installation uses some puppet templates to provide answers to the installation options. Options that have a command line switch can be overridden, but packstack doesn't accept arguments for everything. For example, if you want to change the floating IP range to fall in line with the network range your eth0 interface supports, you'll need to edit a puppet template by hand.

Edit /usr/lib/python2.6/site-packages/packstack/puppet/modules/openstack/manifests/provision.pp and change $floating_range to a range that is suitable for the network eth0 is on. The $floating_range variable appears to be what packstack uses to assign the floating IP address pool when --allinone is used.
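
For example, after editing, the relevant line might look something like this (the range shown is only a placeholder; pick one that fits the network eth0 sits on):

# grep floating_range /usr/lib/python2.6/site-packages/packstack/puppet/modules/openstack/manifests/provision.pp
  $floating_range = '10.16.132.0/23',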

One last modification before we run packstack, and thanks to Terry Wilson for pointing this out: we need to remove a firewall rule that is added during the packstack run, a NAT rule which will effectively block inbound traffic to a launched instance. Edit /usr/lib/python2.6/site-packages/packstack/puppet/templates/provision.pp and comment out the following lines.

firewall { '000 nat':
  chain    => 'POSTROUTING',
  jump     => 'MASQUERADE',
  source   => $::openstack::provision::floating_range,
  outiface => $::gateway_device,
  table    => 'nat',
  proto    => 'all',
}

The ability to configure these via packstack arguments should eventually make its way into packstack. See this Bugzilla for more information.

That’s it, now you can fire up packstack by running the following command.

packstack --allinone --quantum-l3-ext-bridge=eth0

When it completes it will tell you that you need to reboot for the new kernel to take effect, but you don't need to, since we already updated and rebooted earlier with the RDO repository in place.
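
If you want a quick sanity check at this point, source the admin credentials packstack drops in /root and list the services and networks it created (exact output will vary by environment):

# . /root/keystonerc_admin
# nova-manage service list
# quantum net-list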

Your openvswitch configuration should look roughly like this when packstack finishes running.

# ovs-vsctl show
08ad9137-5eae-4367-8c3e-52f8b87e5415
    Bridge br-int
        Port "tap46aaff1f-cd"
            tag: 1
            Interface "tap46aaff1f-cd"
                type: internal
        Port br-int
            Interface br-int
                type: internal
        Port "qvod54d32dc-0b"
            tag: 1
            Interface "qvod54d32dc-0b"
        Port "qr-0638766f-76"
            tag: 1
            Interface "qr-0638766f-76"
                type: internal
    Bridge br-ex
        Port br-ex
            Interface br-ex
                type: internal
        Port "qg-3f967843-48"
            Interface "qg-3f967843-48"
                type: internal
        Port "eth0"
            Interface "eth0"
    ovs_version: "1.11.0"

Before we start provisioning instances in Horizon, let's take care of one last step and add two security group rules to allow SSH and ICMP traffic to our instances.

# . ~/keystonerc_demo 
# nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
# nova secgroup-add-rule default tcp 22 22 0.0.0.0/0

Now you can log into Horizon with the demo user, whose credentials are stored in /root/keystonerc_demo, and provision an instance. Make sure you specify the private network for this instance; it is automatically created for the demo tenant by the packstack --allinone installation. You'll also notice packstack uploaded an image named cirros into Glance for you. Of course, this assumes you've already created a keypair.
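
If you haven't created one yet, a keypair for the demo user can be added from the command line like so (the key name is arbitrary):

# . ~/keystonerc_demo
# nova keypair-add demokey > demokey.pem
# chmod 600 demokey.pem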


Once the instance is launched we will then associate a floating IP address with it.
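
If you prefer the command line over Horizon for this step, roughly the following accomplishes the same thing; the pool name, instance name, and address below are examples (nova floating-ip-pool-list shows the pools on your system):

# . ~/keystonerc_demo
# nova floating-ip-create public
# nova add-floating-ip cirros-test 10.16.132.4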


Now we can ssh to it from outside the OpenStack environment.

$ ssh cirros@10.16.132.4
cirros@10.16.132.4's password: 
$ uptime
 00:52:20 up 14 min,  1 users,  load average: 0.00, 0.00, 0.00

Now we can get started with the fun stuff, like provisioning images from CloudForms onto RDO and using Foreman to automatically configure them!

Building the Bridge Between Present and Future IT Architectures

Life isn't easy for IT organizations today. They find themselves on the receiving end of demands for new capabilities that public cloud providers are delivering at increasing speed. While solutions that deliver these same capabilities in the private datacenter are beginning to emerge, IT organizations don't want to build yet another silo. Red Hat's Open Hybrid Cloud Architecture helps IT organizations adopt next-generation IT architectures to meet the increasing demand for public cloud capabilities while establishing a common framework for all their IT assets. This approach provides a lot of benefits across all IT architectures. To name a few:

  • Discovery and Reporting: Detailed information about all workloads across all cloud and virtualization providers.
  • Self-Service: A single catalog which could provision services across hybrid and heterogeneous public and private clouds.
  • Best-Fit Placement: Helping identify which platform is best for which workload both at provision and run-time.

The engineers at Red Hat have been hard at work on the next release of CloudForms which is scheduled for General Availability later this year. I’ve been lucky enough to get my hands on a very early preview and wanted to share an update on two enhancements that are relevant to the topic of bridging present and future IT architectures. Before I dive into the enhancements let me get two pieces of background out of the way:

  1. Red Hat believes that the future IT architecture for Infrastructure as a Service (IaaS) is OpenStack. That shouldn't come as a big surprise given that Red Hat was a major contributor to the Grizzly OpenStack release and has established a community for its distribution called RDO.
  2. There is a big difference between datacenter virtualization and clouds and knowing which workloads should run on which is important. For more information on this you can watch Andy Cathrow’s talk at Red Hat Summit.

Two of the enhancements coming in the next release of CloudForms are the clear distinction between datacenter virtualization and cloud providers and the addition of OpenStack as a supported cloud provider.

By clearly separating datacenter virtualization (or infrastructure providers, as they are called in the user interface) from cloud providers, CloudForms will understand exactly how to operationally manage and standardize operational concepts across Red Hat Enterprise Virtualization, VMware vSphere, Amazon EC2, and OpenStack.

Cloud Providers


Infrastructure (Datacenter Virtualization) Providers


Also, as you noticed in the previous screens, CloudForms will support OpenStack as a cloud provider. This snaps another piece of the puzzle into Red Hat's Open Hybrid Cloud Architecture and provides the operational management capabilities for OpenStack that IT organizations need.

OpenStack Cloud Provider


These two enhancements will be critical for organizations who want a single pane of glass to operationally manage their Open Hybrid Cloud.

Single Pane Operational Management of RHEV, vSphere, AWS EC2, and OpenStack


Stay tuned for more updates regarding the next release of CloudForms!

Accelerating Service Delivery While Avoiding Silos

In a prior post on Red Hat's Open Hybrid Cloud Architecture I discussed how IT consumers, having experienced the power of the public cloud, are pressing Enterprise IT to deliver new capabilities. One of these capabilities is accelerated service delivery, or the ability to more quickly develop and release new applications that meet a business need. In this post I'd like to examine how the Open Hybrid Cloud Architecture provides the means to satisfy this capability and how it differs from other approaches.

There are 1000 vendors who can provide accelerated service delivery, why not just buy a product?
Many vendors will try to sell a single product as being able to accelerate service delivery. The problem with this approach is that accelerating service delivery goes far beyond a single product, because no single product can provide all the necessary components of application development that an IT consumer could want. Think about all the languages, frameworks, and technologies, from Java, .NET, and node.js, to Hadoop, Cassandra, and Mongo, to <insert your favorite technology name here>. The availability of all of these from a single product, vendor, or operating system in an optimized manner is highly unlikely. An approach that tries to accelerate service delivery within a single product or technology creates yet another silo and doesn't solve the fundamental problem of accelerating service delivery across all of an IT organization's assets.

How can Enterprise IT provide accelerated service delivery capabilities while avoiding a silo?
By leveraging an architecture that is flexible and where each component is aware of its neighbors, organizations can accelerate service delivery without building a silo. Even better, having a component within your architecture that has a comprehensive understanding of every other component means virtually endless possibilities for workload deployment and management. Want to deploy your workload as a VM using PXE on Red Hat Enterprise Virtualization, a template within VMware vSphere, instances on OpenStack using Heat, or a gear in OpenShift? You can only do that if you understand each one of those technologies. Don't build your logic for operations management into a single layer – keep it abstracted to ensure you can plug in whichever implementation of IaaS and PaaS best meets your needs. Does your application maintain too much state locally or scale vertically? Then it belongs on a traditional virtualization platform like VMware or RHEV. Is it a stateless scale-out application? Then you can deploy it on OpenStack. Are the languages and other dependencies available within a PaaS? Then it belongs in OpenShift. However, just deploying to each of those platforms is not enough. What about deploying one part of your workload as gears in OpenShift and another part as instances on OpenStack at the same time? You must be able to deploy to ALL platforms within the same workload definition! The Open Hybrid Cloud Architecture is providing the foundation for such flexibility in deployment and management of workloads in the cloud.

Can you provide an example?
Let's look at an example of a developer who would like to develop a new application for the finance team within his organization. The developer would like to use ruby as a web front end and .NET within an IIS application server to perform some other functions. This developer expects the same capabilities he gets using Google App Engine, in that he wants to be able to push code and have it running in seconds. The user requests a catalog item from CloudForms which provides them with two components. The first is a ruby application running in the OpenShift PaaS. The second is a virtual machine running on either Red Hat Enterprise Virtualization, VMware vSphere, or Red Hat OpenStack. The service designer who designed this catalog bundle recognized that ruby applications can run in OpenShift, and because OpenShift provides greater efficiency for hosting applications than running each application within its own virtual machine, the designer ensured that this component runs in the PaaS layer. OpenShift also provides automation of the software development process, which gives the end user of the designed service greater velocity in development. Since the IIS application server wasn't available within the PaaS layer, the service designer utilized a virtual machine at the datacenter virtualization layer (vSphere) to provide this capability.

Step by Step

1. The user requests the catalog item. CloudForms could optionally provide workflow (approval, quota, etc) and best fit placement at this point.

2. CloudForms provisions the ruby application in OpenShift Enterprise. The Ruby application is running as a gear.

3. CloudForms orchestrates the adding of an action hook into the OpenShift deployment. This can be done using any configuration management utility. I used puppet and The Foreman in my demo video below.

4. The user begins developing their ruby application. They clone the repository and then commit and push the changes.

5. The action hook within OpenShift is triggered by the deploy stage of the OpenShift lifecycle and calls the CloudForms API, requesting that a virtual machine be created (a sketch of such a hook follows step 6).

6. CloudForms provisions the virtual machine.
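
To make step 5 a bit more concrete, an action hook is just an executable script checked into the application's git repository under .openshift/action_hooks/. A minimal sketch of a deploy hook might look like the following; the CloudForms URL, credentials, and request fields are placeholders for whatever your automation expects, not a documented endpoint:

#!/bin/bash
# .openshift/action_hooks/deploy - runs at the deploy stage of the OpenShift lifecycle.
# Ask CloudForms to provision the companion virtual machine.
# Endpoint, credentials, and parameters below are illustrative only.
curl -k -u admin:password \
  -d "vm_name=${OPENSHIFT_APP_NAME}-iis" \
  "https://cloudforms.example.com/api/requests"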

This is really just the beginning of the process, but hopefully you can see where it's going. CloudForms can perform the deployment and teardown of the virtual machines each time a developer updates their application in OpenShift. It can even tie into other continuous integration systems to deploy application code into the IIS application server. This rapid delivery of the environment takes place across both the PaaS and the IaaS. It also doesn't try to invent a new "standard description" across all the different types of models; instead, it understands the models and methods of automation within each component of the architecture and orchestrates them. While the virtual machines running at the IaaS layer don't provide the same level of density as the PaaS, CloudForms and OpenShift can be combined to provide similar operational efficiency and expand the capabilities of OpenShift's Accelerated Service Delivery across an IT organization's entire base of assets.

I still don’t believe you, can you show me?
Want to see it in action? Check out this short video demonstration in either Ogg or Quicktime format.

You can download the action hook here.

You can download the OpenOffice Draw Diagram here.

This is cool, what would be even cooler?
If the client tools could be intercepted by CloudForms, it could provide a lot of operational management capabilities to OpenShift. For example, when `rhc app create` is run, CloudForms could provide approvals, workflow, and quota to the OpenShift applications. Or perhaps a future command such as `rhc app promote` could utilize the approvals and automation engine inside CloudForms to provide controlled promotion of applications through a change control process.

Auto Scaling OpenShift Enterprise Infrastructure with CloudForms Management Engine

OpenShift Enterprise, Red Hat's Platform as a Service (PaaS), handles the management of application stacks so developers can focus on writing code. The result is faster delivery of services to organizations. OpenShift Enterprise runs on infrastructure, and that infrastructure needs to be both provisioned and managed. While provisioning OpenShift Enterprise is relatively straightforward, managing the lifecycle of the OpenShift Enterprise deployment requires the same considerations as other enterprise applications, such as updates and configuration management. Moreover, while OpenShift Enterprise can scale applications running within the PaaS based on demand, the OpenShift Enterprise infrastructure itself is static and unaware of the underlying infrastructure. This is by design, as the mission of the PaaS is to automate the management of application stacks, and tightly coupling the PaaS with the compute resources at the physical and virtual layers would limit flexibility. While this architectural decision is justified given the wide array of computing platforms that OpenShift Enterprise can be deployed upon (any that Red Hat Enterprise Linux can run upon), many organizations would like to not only dynamically scale their applications running in the PaaS, but also dynamically scale the infrastructure supporting the PaaS itself. Organizations interested in scaling infrastructure in support of OpenShift Enterprise need look no further than CloudForms, Red Hat's Open Hybrid Cloud Management Framework. CloudForms provides the capabilities to provision, manage, and scale OpenShift Enterprise's infrastructure automatically based on policy.

For reference, the two previous posts I authored covered deploying the OpenShift Enterprise Infrastructure via CloudForms and deploying OpenShift Enterprise Applications (along with IaaS elements such as Virtual Machines) via CloudForms. Below are two screenshots of what this looks like for background.


Operations User Deploying OpenShift Enterprise Infrastructure via CloudForms


Self-Service User Deploying OpenShift Application via CloudForms

Let's examine how these two automations can be combined to provide auto scaling of infrastructure to meet the demands of a PaaS. Today, most IT organizations monitor applications and respond to notifications after the event has already taken place – particularly when it comes to demand upon a particular application or service. There are a number of reasons for this approach, one of which is the "build to spec" nature of systems found in both historical and currently designed application architectures. As organizations transition to developing new applications on a PaaS, however, they are presented with an opportunity to reevaluate the static and often oversubscribed nature of their IT infrastructure. In short, while applications designed in the past were not [often] built to scale dynamically based on demand, the majority of new applications are, and this trend is accelerating. In line with this accelerating trend, the infrastructure underlying these new expectations must support this new requirement or much of the business value of dynamic scalability will not be realized. You could say that an organization's dynamic scalability is bounded by its least scalable layer. This also holds true for organizations that intend to run solely on a public cloud and will leverage resources at the IaaS layer.

Here is an example of how scalability of a PaaS would currently be handled in many IT organizations.


The operations user is alerted by a monitoring tool that the PaaS has run out of capacity to host new or scale existing applications.


The operations user utilizes the IaaS manager to provision new resources (Virtual Machines) for the PaaS.


The operations user manually configures the new resources for consumption by the PaaS.

Utilizing CloudForms to deploy, manage, and automatically scale OpenShift Enterprise removes the risk of manual configuration by the operations user while dynamically reclaiming unused capacity within the infrastructure. It also reduces the cost and complexity of maintaining a separate monitoring solution and IaaS manager. This translates to lower costs, greater uptime, and the ability to serve more end users. Here is how the process changes.


Either through notification from the PaaS platform or by monitoring the infrastructure for specific conditions, CloudForms detects that the PaaS infrastructure is reaching its capacity. Thresholds can be defined using a wide array of metrics already available within CloudForms, such as aggregate memory utilized, disk usage, or CPU utilization.


CloudForms examines conditions defined by the organization to determine whether or not the PaaS should receive more resources. In this case, it allows the PaaS to have more resources and provisions a new virtual machine to act as an OpenShift Node. At this point CloudForms could require approval of the scaling event before moving forward. The operations user or a third party system can receive an alert or event, but this is informational and not a request for the admin to perform any manual actions.


Upon deploying the new virtual machine, CloudForms configures it appropriately. This could mean installing the VM from a provisioning system, or utilizing a pre-defined template and registering it with a configuration management system, such as one based on puppet or chef, which then configures the system.
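
For instance, if the new node is handed to the Puppet/Foreman setup used in the earlier posts, the first check-in from the freshly provisioned VM could be as simple as the following (the server name is an assumption):

# puppet agent --test --server foreman.example.com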

Want to see a prototype in action? Check out the screencast I've recorded.

This same problem (the ability to dynamically scale a platform) exists between the IaaS and physical layers. If the IaaS layer runs out of resources, it is often unaware of the physical resources available for it to consume. This problem is not found in a large number of organizations because dynamically re-purposing physical hardware has a smaller and perhaps more specialized set of use cases (think HPC, grid, deterministic workloads). Even so, it should be noted that CloudForms is able to provide a similar level of policy-based automation to physical hardware to extend the capacity of the IaaS layer if required.

Hybrid Service Models: IaaS and PaaS Self-Service Converge

More and more organizations are beginning to embrace both Infrastructure as a Service (IaaS) and Platform as a Service (PaaS). These organizations have already begun asking why PaaS and IaaS management facilities must use different management frameworks. It only seems natural that an IT organization's customers should be able to select both IaaS and PaaS elements during their self-service workflow. Likewise, operations teams within IT organizations prefer to be able to utilize the same methods of policy, control, and automation across both IaaS and PaaS elements. In doing so, operations teams could optimize workload placement both inside and outside their datacenter and reduce duplication of effort. This isn't just a realization that customers are coming to – analysts have also been talking about the convergence of IaaS and PaaS as a natural evolutionary step in cloud computing.

Converged IaaS and PaaS

This convergence of IaaS and PaaS is something I referred to as a Hybrid Service Model in a previous post, but you may often hear it referred to as Converged IaaS and PaaS. An IT organization that does not embrace the convergence of IaaS and PaaS faces many detriments. Some of the more notable ones include the following.

  • Developers: Slower delivery of services
    • Developers accessing two self-service portals in which the portals do not have knowledge of each others capabilities leads to slower development and greater risk of human error due to less automated processes on workload provisioning and management.
  • Operations: Less efficient use of resources
    • Operations teams managing IaaS and PaaS with two management facilities will be unable to maximize utilization of resources.
  • Management: Loss of business value
    • IT managers will be unable to capitalize efficiently without an understanding of both IaaS and PaaS models.

For these reasons and many more, it's imperative that organizations make decisions today that will lead them to the realization of a Hybrid Service Model. There are two approaches emerging in the industry to realizing a Hybrid Service Model. The first is to build a completely closed or semi-open solution to a Hybrid Service Model. A good example would be a vendor offering a PaaS as long as it runs on top of a single virtualization provider (conveniently sold by them). The second is one in which a technology company utilizes an approach based on the tenets of an Open Hybrid Cloud to provide a fully open solution to enabling a Hybrid Service Model. I won't go into all the reasons the second approach is better – you can read more about that here and here – but I will mention that Red Hat is committed to the Open Hybrid Cloud approach to enabling a Hybrid Service Model.

With all the background information out of the way, I'd like to show you a glimpse of what will be possible thanks to the Open Hybrid Cloud approach at Red Hat. Red Hat is building the foundation to offer customers Hybrid Service Models alongside Hybrid Deployment Scenarios. This is possible for many reasons, but in this scenario it is primarily because of the open APIs available in OpenShift, Red Hat's PaaS, and because of the extensibility of CloudForms, Red Hat's Hybrid Cloud Management solution. The next release of CloudForms will include a Management Engine component, based on the acquisition of ManageIQ EVM that occurred in December. Using the CloudForms Management Engine, it is possible to provide self-service of applications in a PaaS along with self-service of infrastructure in IaaS from a single catalog. Here is what a possible workflow would look like.

Higher resolution viewing in quicktime format here.

Self-Service OpenShift Enterprise Deployments with ManageIQ ECM

In the previous post I examined how Red Hat Network (RHN) Satellite could be integrated with ManageIQ Enterprise Cloud Management (ECM). With this integration in place, Satellite could provide ECM with the content required to install an operating system into a virtual machine and close the loop in ongoing systems management. This was just a first look, and there is a lot of work to be done to enable discovery of RHN Satellite and best-practice automation out of the box via ECM. That said, the combination of ECM and RHN Satellite provides a solid foundation for proceeding to use cases higher in the stack.

With this in mind, I decided to attempt automating a self-service deployment of OpenShift using ManageIQ ECM, RHN Satellite, and puppet.

Lucky for me, much of the heavy lifting had already been done by Krishna Raman and others who developed puppet modules for installing OpenShift Origin. There were several hurdles that had to be overcome with the existing puppet modules for my use case:

  1. They were built for Fedora and OpenShift Origin, and I am using RHEL6 with OpenShift Enterprise. Because of this they defaulted to using newer rubygems that weren't available in OpenShift Enterprise yet. It took a little time to reverse engineer the puppet modules to understand exactly what they were doing and to tweak them for OpenShift Enterprise.
  2. The OpenShift Origin puppet module leveraged some other puppet modules (stdlib, for example), so the puppet module tool (PMT), which is not available in core puppet until > 2.7, was needed. Of course, the only version of puppet available in EPEL for RHEL 6 was puppet-2.6. I pulled an internal build of puppet-2.7 to get around this, but still required some packages from EPEL to solve dependencies.
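
For reference, once a module-tool-capable puppet is in place, pulling a dependency such as stdlib from the Forge is a one-liner (exact module name handling can differ slightly between puppet releases):

# puppet module install puppetlabs-stdlib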

Other than that, I was able to reuse much of what already existed and deploy OpenShift Enterprise via ManageIQ ECM. How does it work? Very similarly to the Satellite use case, but with the added steps of deploying puppet and a puppet master onto the deployed virtual machine and executing the puppet modules.


Workflow of OpenShift Enterprise deployment via ECM

If you are curious how the puppet modules work, here is a diagram that illustrates the flow of the openshift puppet module.

Anatomy of OpenShift Puppet Module

Here is a screencast of the self-service deployment in action.

There are a lot of areas that can be improved in the future. Here are four which were top of mind after this exercise.

First, runtime parameters should be able to be passed to the deployment of virtual machines. These parameters should ultimately be part of a service that could be composed into a deployment. One idea would be to expose puppet classes as services that could be added to a deployment. For example, layering a service of openshift_broker onto a virtual machine would instantiate the openshift_broker class on that machine upon deployment. The parameters required for openshift_broker would then be exposed to the user if they would like to customize them.

Second, gears within OpenShift – the execution area for applications – should be able to be monitored from ECM much like Virtual Machines are today. The oo-stats package provides some insight into what is running in an OpenShift environment, but more granular details could be exposed in the future. Statistics such as I/O, throughput, sessions, and more would allow ECM to further manage OpenShift in enterprise deployments and in highly dynamic environments or where elasticity of the PaaS substrate itself is a design requirement.

Third, building an upstream library of automation actions for ManageIQ ECM so that these exercises could be saved and reused in the future would be valuable. While I only focused on a simple VM deployment in this scenario, in the future I plan to use ECM’s tagging and Event, Condition, Action construct to register Brokers and Nodes to a central puppet master (possibly via Foreman). The thought is that once automatically tagged by ECM with a “Broker” or “Node” tag an action could be taken by ECM to register the systems to the puppet master which would then configure the system appropriately. All those automation actions are exportable, but no central library exists for these at the current time to promote sharing.

Fourth, and possibly most exciting, would be the ability to request applications from OpenShift via ECM alongside requests for virtual machines. This ability would lead to the realization of a hybrid service model. As far as I’m aware, this is not provided by any other vendor in the industry. Many of the analysts are coming around to the fact that the line between IaaS and PaaS will soon be gray. Driving the ability to select an application that is PaaS friendly (python for example) and traditional infrastructure components (a relational database for example) from a single catalog would provide a simplified user experience and create new opportunities for operations to drive even higher utilization at lower costs.

I hope you found this information useful. As always, if you have feedback, please leave a comment!

Using RHN Satellite with ManageIQ ECM

Many organizations use Red Hat Network (RHN) Satellite to manage their Red Hat Enterprise Linux systems. RHN Satellite has a long and successful history of providing update, configuration, and subscription management for RHEL in the physical and virtualized datacenter. As these organizations move to a cloud model, they require other functions in addition to systems management. Capabilities such as discovery, chargeback, compliance, monitoring, and policy enforcement are important aspects of cloud models. ManageIQ’s Enterprise Cloud Management, recently acquired by Red Hat, provides these capabilities to customers.

One of the benefits of an Open Hybrid Cloud is that organizations can leverage their existing investments and gain the benefits of cloud across them. How then, could organizations gain the benefits of cloud computing while leveraging their existing investment in systems management? In this post, I’ll examine how Red Hat Network Satellite can be utilized with ManageIQ ECM to demonstrate the evolutionary approach that an Open Hybrid Cloud provides.

Here is an overview of the workflow.

RHN Satellite and ManageIQ ECM Workflow

  1. The operations user needs to transfer the kickstart files into customization templates in ManageIQ ECM. This is literally copying and pasting the kickstart files. It's important to change the "reboot" option to "poweroff" in the kickstart file. If this isn't done, the VM will be rebooted and continually loop into an installation via PXE. Also, in the %post section of the kickstart you need to include "wget --no-check-certificate <%= evm[:callback_url_on_post_install] %>" (see the sketch after this list). This will allow ECM to understand that the system has finished building and boot the VM after it has shut off.
  2. The user requests virtual machine(s) from ECM.
  3. ECM creates an entry in the PXE environment and creates a new virtual machine from the template selected by the user.
  4. The virtual machine boots from the network and the PXE server loads the appropriate kickstart file.
  5. The virtual machine’s operating system is installed from the content in RHN Satellite.
  6. The virtual machine is registered to RHN Satellite for ongoing management.
  7. The user (or operations users) can now manage the operating system via RHN Satellite.
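
To make step 1 concrete, here is roughly what the two edits to the customization template look like; everything else in the kickstart stays as it was exported from Satellite:

# Use poweroff instead of reboot so the VM doesn't PXE-loop and ECM can power it back on
poweroff

%post
# Tell ECM the build has finished
wget --no-check-certificate <%= evm[:callback_url_on_post_install] %>
%end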

Here is a screencast of this workflow in action.

There are a lot of areas that can be improved upon.

  1. Utilize the RHN Satellite XMLRPC API to delete the system from RHN Satellite.
  2. Allow for automatic discovery of kickstarts in RHN Satellite from ECM.
  3. Unify the hostnames deployed to RHEVM with their matching DNS entries, so they appear the same in RHN Satellite.

Automating OpenStack deployments with Packstack

If you'd like a method to consistently deploy OpenStack in an automated fashion, I'd recommend checking out packstack – an OpenStack installation automation tool which utilizes puppet.

[root@rhc-05 ~]# packstack
Welcome to Installer setup utility
Should Packstack install Glance image service [y|n]  [y] : 
Should Packstack install Cinder volume service [y|n]  [y] : 
Should Packstack install Nova compute service [y|n]  [y] : 
Should Packstack install Horizon dashboard [y|n]  [y] : 
Should Packstack install Swift object storage [y|n]  [n] : y
Should Packstack install Openstack client tools [y|n]  [y] : 
Enter list of NTP server(s). Leave plain if packstack should not install ntpd on instances.: ns1.bos.redhat.com
Enter the path to your ssh Public key to install on servers  [/root/.ssh/id_rsa.pub] : 
Enter the IP address of the MySQL server  [10.16.46.104] : 
Enter the password for the MySQL admin user :
Enter the IP address of the QPID service  [10.16.46.104] : 
Enter the IP address of the Keystone server  [10.16.46.104] : 
Enter the IP address of the Glance server  [10.16.46.104] : 
Enter the IP address of the Cinder server  [10.16.46.104] : 
Enter the IP address of the Nova API service  [10.16.46.104] : 
Enter the IP address of the Nova Cert service  [10.16.46.104] : 
Enter the IP address of the Nova VNC proxy  [10.16.46.104] : 
Enter a comma separated list of IP addresses on which to install the Nova Compute services  [10.16.46.104] : 10.16.46.104, 10.16.46.106
Enter the Private interface for Flat DHCP on the Nova compute servers  [eth1] : 
Enter the IP address of the Nova Network service  [10.16.46.104] : 
Enter the Public interface on the Nova network server  [eth0] : 
Enter the Private interface for Flat DHCP on the Nova network server  [eth1] : 
Enter the IP Range for Flat DHCP  [192.168.32.0/22] : 
Enter the IP Range for Floating IP's  [10.3.4.0/22] : 
Enter the IP address of the Nova Scheduler service  [10.16.46.104] : 
Enter the IP address of the client server  [10.16.46.104] : 
Enter the IP address of the Horizon server  [10.16.46.104] : 
Enter the IP address of the Swift proxy service  [10.16.46.104] : 
Enter the Swift Storage servers e.g. host/dev,host/dev  [10.16.46.104] : 
Enter the number of swift storage zones, MUST be no bigger than the number of storage devices configured  [1] : 
Enter the number of swift storage replicas, MUST be no bigger than the number of storage zones configured  [1] : 
Enter FileSystem type for storage nodes [xfs|ext4]  [ext4] : 
Should packstack install EPEL on each server [y|n]  [n] : 
Enter a comma separated list of URLs to any additional yum repositories to install:    
To subscribe each server to Red Hat enter a username here: james.labocki
To subscribe each server to Red Hat enter your password here :

Installer will be installed using the following configuration:
==============================================================
os-glance-install:             y
os-cinder-install:             y
os-nova-install:               y
os-horizon-install:            y
os-swift-install:              y
os-client-install:             y
ntp-severs:                    ns1.bos.redhat.com
ssh-public-key:                /root/.ssh/id_rsa.pub
mysql-host:                    10.16.46.104
mysql-pw:                      ********
qpid-host:                     10.16.46.104
keystone-host:                 10.16.46.104
glance-host:                   10.16.46.104
cinder-host:                   10.16.46.104
novaapi-host:                  10.16.46.104
novacert-host:                 10.16.46.104
novavncproxy-hosts:            10.16.46.104
novacompute-hosts:             10.16.46.104, 10.16.46.106
novacompute-privif:            eth1
novanetwork-host:              10.16.46.104
novanetwork-pubif:             eth0
novanetwork-privif:            eth1
novanetwork-fixed-range:       192.168.32.0/22
novanetwork-floating-range:    10.3.4.0/22
novasched-host:                10.16.46.104
osclient-host:                 10.16.46.104
os-horizon-host:               10.16.46.104
os-swift-proxy:                10.16.46.104
os-swift-storage:              10.16.46.104
os-swift-storage-zones:        1
os-swift-storage-replicas:     1
os-swift-storage-fstype:       ext4
use-epel:                      n
additional-repo:               
rh-username:                   james.labocki
rh-password:                   ********
Proceed with the configuration listed above? (yes|no): yes

Installing:
Clean Up...                                              [ DONE ]
OS support check...                                      [ DONE ]
Running Pre install scripts...                           [ DONE ]
Installing time synchronization via NTP...               [ DONE ]
Setting Up ssh keys...                                   [ DONE ]
Create MySQL Manifest...                                 [ DONE ]
Creating QPID Manifest...                                [ DONE ]
Creating Keystone Manifest...                            [ DONE ]
Adding Glance Keystone Manifest entries...               [ DONE ]
Creating Galnce Manifest...                              [ DONE ]
Adding Cinder Keystone Manifest entries...               [ DONE ]
Checking if the Cinder server has a cinder-volumes vg... [ DONE ]
Creating Cinder Manifest...                              [ DONE ]
Adding Nova API Manifest entries...                      [ DONE ]
Adding Nova Keystone Manifest entries...                 [ DONE ]
Adding Nova Cert Manifest entries...                     [ DONE ]
Adding Nova Compute Manifest entries...                  [ DONE ]
Adding Nova Network Manifest entries...                  [ DONE ]
Adding Nova Scheduler Manifest entries...                [ DONE ]
Adding Nova VNC Proxy Manifest entries...                [ DONE ]
Adding Nova Common Manifest entries...                   [ DONE ]
Creating OS Client Manifest...                           [ DONE ]
Creating OS Horizon Manifest...                          [ DONE ]
Preparing Servers...                                     [ DONE ]
Running Post install scripts...                          [ DONE ]
Installing Puppet...                                     [ DONE ]
Copying Puppet modules/manifests...                      [ DONE ]
Applying Puppet manifests...
Applying /var/tmp/packstack/20130205-0955/manifests/10.16.46.104_prescript.pp
Testing if puppet apply is finished : 10.16.46.104_prescript.pp OK
Testing if puppet apply is finished : 10.16.46.104_mysql.pp OK
Testing if puppet apply is finished : 10.16.46.104_qpid.pp OK
Applying /var/tmp/packstack/20130205-0955/manifests/10.16.46.104_keystone.pp
Applying /var/tmp/packstack/20130205-0955/manifests/10.16.46.104_glance.pp
Applying /var/tmp/packstack/20130205-0955/manifests/10.16.46.104_cinder.pp
Testing if puppet apply is finished : 10.16.46.104_keystone.pp OK
Testing if puppet apply is finished : 10.16.46.104_cinder.pp OK
Testing if puppet apply is finished : 10.16.46.104_glance.pp OK
Applying /var/tmp/packstack/20130205-0955/manifests/10.16.46.104_api_nova.pp
Testing if puppet apply is finished : 10.16.46.104_api_nova.pp OK
Applying /var/tmp/packstack/20130205-0955/manifests/10.16.46.104_nova.pp
Applying /var/tmp/packstack/20130205-0955/manifests/10.16.46.104_osclient.pp
Applying /var/tmp/packstack/20130205-0955/manifests/10.16.46.104_horizon.pp
Testing if puppet apply is finished : 10.16.46.104_nova.pp OK
Testing if puppet apply is finished : 10.16.46.104_osclient.pp OK
Testing if puppet apply is finished : 10.16.46.104_horizon.pp OK
Applying /var/tmp/packstack/20130205-0955/manifests/10.16.46.104_postscript.pp
Testing if puppet apply is finished : 10.16.46.104_postscript.pp
Testing if puppet apply is finished : 10.16.46.104_postscript.pp OK
                            [ DONE ]

 **** Installation completed successfully ******

     (Please allow Installer a few moments to start up.....)

Additional information:
 * Time synchronization installation was skipped. Please note that unsynchronized time on server instances might be problem for some OpenStack components.
 * To use the command line tools source the file /root/keystonerc_admin created on 10.16.46.104
 * To use the console, browse to http://10.16.46.104/dashboard
 * The installation log file is available at: /var/tmp/packstack/20130205-0955/openstack-setup.log

You can also generate an answer file and use it on other systems

[root@rhc-06 ~]# packstack --gen-answer-file=answerfile
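
The generated file can then be edited and replayed non-interactively on another host:

[root@rhc-06 ~]# packstack --answer-file=answerfile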

Be careful: if your system is subscribed to Red Hat Network's classic entitlement, packstack will register the systems via subscription-manager (certificate-based entitlement). This can cause some issues if you already subscribed the system and added the OpenStack channels via RHN.

Using a Remote ImageFactory with CloudForms

In Part 2 of Hands on with ManageIQ EVM I explored how ManageIQ and CloudForms could potentially be integrated in the future. One of the suggestions I had for the future was to allow imagefactory to run within the cloud resource provider (vSphere, RHEV, OpenStack, etc.). This would simplify the architecture and reduce the amount of infrastructure needed to host Cloud Engine on physical hardware. Requiring less infrastructure is important for a number of scenarios beyond just the workflow I explained in the earlier post. One scenario in particular is when one would want to provide demonstration environments of CloudForms to a large group of people – for example, while training students on CloudForms.

Removing the physical hardware requirement for CloudForms Cloud Engine can be done in two ways. The first is by using nested virtualization. This is not yet available in Red Hat Enterprise Linux, but is available in the upstream – Fedora. The second is by running imagefactory remotely on a physical system and the rest of the components of CloudForms Cloud Engine within a virtual machine. In this post I'll explore utilizing a physical system to host imagefactory and the modifications necessary to a CloudForms Cloud Engine environment to make it happen.

How It Works

The diagram below illustrates the decoupling of imagefactory from conductor. Keep in mind, this is using CloudForms 1.1 on Red Hat Enterprise Linux 6.3.

Using a remote imagefactory with CloudForms

1. The student executes a build action in their Cloud Engine. Each student has his/her own Cloud Engine and it is built on a virtual machine.

2. Conductor communicates with imagefactory on the physical cloud engine and instructs it to build the image. There is a single physical host acting as a shared imagefactory for every virtual machine hosting Cloud Engine for the students.

3. Imagefactory builds the image based on the content from virtual machines hosting CloudForms Cloud Engine.

4. Imagefactory stores the built images in the image warehouse (IWHD).

5. When the student wants to push that image to the provider, in this case RHEV, they execute the action in Cloud Engine conductor.

6. Conductor communicates with imagefactory on the physical cloud engine and instructs it to push the image to the RHEV provider.

7. Imagefactory pulls the image from the warehouse (IWHD) and

8. pushes it to the provider.

9.  The student launches an application blueprint which contains the image.

10. Conductor communicates with deltacloud (dcloud) requesting that it launch the image on the provider.

11. Deltacloud (dcloud) communicates with the provider requesting that a virtual machine be created based on the template.

Configuration

Here are the steps you can follow to enable a single virtual machine hosting cloud engine to build images using a physical system's imagefactory. These steps can be repeated and automated to stand up a large number of virtual cloud engines that use a single imagefactory on a physical host. I don't see any reason why you couldn't use the RHEL host that acts as a hypervisor for RHEV or the RHEL host that acts as the export storage domain host. In fact, that might improve performance. Anyway, here are the details.

1. Install CloudForms Cloud Engine on both the virtual-cloud-engine and physical-cloud-engine host.

2. Configure cloud engine on both the virtual-cloud-engine and the physical-cloud-engine.

virtual-cloud-engine# aeolus-configure
physical-cloud-engine# aeolus-configure

3. On the virtual-cloud-engine configure RHEV as a provider.

virtual-cloud-engine# aeolus-configure -p rhevm

4. Copy the oauth information from the physical-cloud-engine to the virtual-cloud-engine.

virtual-cloud-engine# scp root@physical-cloud-engine:/etc/aeolus-conductor/oauth.json /etc/aeolus-conductor/oauth.json

5. Copy the settings for conductor from the physical-cloud-engine to the virtual-cloud-engine.

virtual-cloud-engine# scp root@physical-cloud-engine:/etc/aeolus-conductor/settings.yml /etc/aeolus-conductor/settings.yml

6.  Replace localhost with the IP address of physical-cloud-engine in the iwhd and imagefactory stanzas of /etc/aeolus-conductor/settings.yml on the virtual-cloud-engine.

7. Copy the rhevm.json file from the virtual-cloud-engine to the physical-cloud-engine.

physical-cloud-engine# scp root@virtual-cloud-engine:/etc/imagefactory/rhevm.json /etc/imagefactory/rhevm.json

8. Manually mount the RHEVM export domain listed in the rhevm.json file on the physical-cloud-engine.

physical-cloud-engine# mount nfs.server:/rhev/export /mnt/rhevm-nfs

9. After this is done, restart all the aeolus-services on both physical-cloud-engine and virtual-cloud-engine to make sure they are using the right configurations.

physical-cloud-engine# aeolus-restart-services
virtual-cloud-engine# aeolus-restart-services

Once this is complete, you should be able to build images on the remote imagefactory instance.

Multiple Cloud Engines sharing a single imagefactory

It should be noted that running a single imagefactory to support multiple Cloud Engines is not officially supported, and is probably not tested. In my experience, however, it seems to work. I hope to have time to post something with more details on the performance of a single imagefactory shared by multiple cloud engines performing concurrent build and push operations in the future.

Elasticity in the Open Hybrid Cloud

Several months ago in my post on Open Hybrid PaaS I mentioned that OpenShift, Red Hat's PaaS, can autoscale gears to provide elasticity to applications. OpenShift scales gears on something it calls a node, which is essentially a virtual machine with OpenShift installed on it. One thing OpenShift doesn't focus on is scaling the underlying nodes. This is understandable, because a PaaS doesn't necessarily understand the underlying infrastructure, nor does it necessarily want to.

It's important that nodes are able to be autoscaled in a PaaS. I'd take this one step further and submit that it's important that operating systems are able to be autoscaled at the IaaS layer. This is partly because many PaaS solutions will be built atop an operating system. Even more importantly, Red Hat is all about enabling an Open Hybrid Cloud, and one of the benefits Open Hybrid Cloud wants to deliver is cloud efficiency across an organization's entire datacenter and not just a part of it. If you need to statically deploy operating systems you fail to achieve the efficiency of cloud across all of your resources. You also can't re-purpose or shift physical resources if you can't autoscale operating systems.

Requirements for a Project

The background above presents the basis for some requirements for an operating system auto-scaling project.

  1. It needs to support deploying across multiple virtualization technologies. Whether a virtualization provider, IaaS private cloud, or public cloud.
  2. It needs to support deploying to physical hardware.
  3. It cannot be tied to any single vendor, PaaS, or application.
  4. It needs to be able to configure the operating systems deployed upon launch for handing over to an application.
  5. It should be licensed to promote reuse and contribution.

Workflow

Here is an idea for a project that could solve such a problem, which I call “The Governor”.

Example Workflow

To explain the workflow:

  1. The application realizes it needs more resources. Monitoring of the application to determine whether it needs more resources is not within the scope of The Governor. This is by design as there are countless applications and each one of them has different requirements for scalability and elasticity. For this reason, The Governor lets the applications make the determination for when to request more resources. When the application makes this determination it makes a call to The Governor’s API.
  2. The call to the API involves the use of a certificate for authentication. This ensures that only applications that have been registered in the registry can interact with The Governor to request resources. If the certificate-based authentication works (the application is registered in The Governor) then the workflow proceeds. If not, the application's request is rejected. (A sketch of what such a request might look like follows this list.)
  3. Upon receiving an authenticated request for more resources, the certificate (which is unique) is run through the rules engine to determine the rules the application must abide by when scaling. This would include decision points such as which providers the application can scale on, how many instances it can consume, etc. If the scaling is not permitted by the rules (the maximum number of instances has been reached, etc.) then a response is sent back to the application informing it that the request has been declined.
  4. Once the rules engine determines the appropriate action it calls the orchestrator which initiates the action.
  5. The orchestrator calls either the cloud broker, which can launch instances to a variety of virtualization managers and cloud providers, either private or public, or a metal as a service (MaaS), which can provision an operating system on bare metal.
  6. and 7. The cloud broker or MaaS launches or provisions the appropriate operating system and configures it per the application's requirements.
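
Since The Governor is only a proposal there is no real API to call yet, but the certificate-authenticated scale request described in steps 1 and 2 could look something like this; every URL, path, and field below is hypothetical:

$ curl --cert /etc/governor/myapp.crt --key /etc/governor/myapp.key \
    -X POST -d 'action=scale_up&count=1' \
    https://governor.example.com/api/v1/requests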

Future Details

There are more details which need to be further developed:

  • How certificates are generated and applications are registered.
  • How application registration details, such as the images that need to be launched and the configurations that need to be implemented on them are expressed.
  • How the configured instance is handed back to the application for use by the application.

Where to grow a community?

It matters where this project will ultimately live and grow. A project such as this one would need broad community support and a vibrant community in order to gain adoption and become a standard means of describing elasticity at the infrastructure layer. For this reason, a community with a large number of active participants and friendly licensing that promotes contribution should be its home.
