
Using the CloudForms Web Services API

The web services API provided by CloudForms Management Engine allows users to integrate external systems with CloudForms. For example, if you wanted an existing change control system to request services from a virtualization provider or public cloud, you could call the CloudForms SOAP API to initiate the virtual machine provisioning request method. Keep in mind that automate methods within CloudForms can be used to do just about anything, from opening a new incident in a change management system to checking the weather in your favorite city. In this post, however, I’ll provide a simple example of how the Savon SOAP client and Ruby can be used to make a request to CloudForms to launch a virtual machine.

First you’ll need to install a few Ruby gems on your system if you don’t already have them. (Note that pp and openssl ship with Ruby’s standard library, so those two gem installs may be unnecessary on your system.)

# gem install savon
....
# gem install httpclient
....
# gem install openssl
....
# gem install httpi
....
# gem install pp
....
# vi myscript.rb

The first section of the script specifies the interpreter and loads the gems we installed via require.

#!/usr/bin/ruby

require 'savon'
require 'httpi'
require 'httpclient'
require 'openssl'
require 'pp'

The next section defines a Ruby module named DCA which contains a class named Worker, a collection of methods and constants. The module provides a namespace so the class name doesn’t clash with an existing class of a similar name if you include this code in a larger body of Ruby. We will also define two methods. The build_automation_request method handles executing the request against the CloudForms Management Engine Web Services API; it accepts a hash that it passes to the web services API. The deploy_vm method accepts some arguments, provides others within its body, and then calls build_automation_request with the body of the request. This includes things such as the template_name, vlan, vm_name, etc.


module DCA
  class Worker

    def build_automation_request(body_hash)
       # We will populate this with the request
    end

    def deploy_vm(template_name, vlan, ip, subnet, gateway, vm_name, domain_name, memory, cpus, add_disk, owner_email, customization_spec = 'linux')
       # We will build the request here, then pass it to build_automation_request

    end

  end
end

With the structure in place we can add the following to the build_automation_request method. Replace YOURUSER with your username, YOURPASSWORD with your password, and CFMEIPADDRESS with the IP address of a CloudForms Management Engine appliance running with the web services role enabled.


    def build_automation_request(body_hash)
      # Build a SOAP client against the CFME WSDL, authenticating with basic
      # auth and skipping certificate verification for the appliance's
      # self-signed certificate.
      client = Savon.client(basic_auth: ["YOURUSER", "YOURPASSWORD"], ssl_verify_mode: :none, ssl_version: :TLSv1, wsdl: "https://CFMEIPADDRESS/vmdbws/wsdl")
      # Invoke the VmProvisionRequest operation with our request hash as the message
      evm_response = client.call(:vm_provision_request, message: body_hash)
    end

Next we will populate the deploy_vm method. The contents of the method build several arrays and then combine them into a hash, which is passed to the build_automation_request method we created previously.


    def deploy_vm(template_name, vlan, ip, subnet, gateway, vm_name, domain_name, memory, cpus, add_disk, owner_email, customization_spec = 'linux')
      templateFields = []
      templateFields << "name=#{template_name}"
      templateFields << "request_type=template"
      vmFields = []
      vmFields << "vm_name=#{vm_name}"
      vmFields << "number_of_vms=1"
      vmFields << "vm_memory=#{memory}"
      vmFields << "number_of_cpus=#{cpus}"
      # The options below are useful for Windows systems
      # options = []
      # options << "sysprep_custom_spec=#{customization_spec}"
      # options << "sysprep_spec_override=true"
      # options << "sysprep_domain_name=#{domain_name}"
      vmFields << "addr_mode=static"
      vmFields << "ip_addr=#{ip}"
      vmFields << "subnet_mask=#{subnet}"
      vmFields << "gateway=#{gateway}"
      vmFields << "vlan=#{vlan}"
      vmFields << "provision_type=PXE"
      requester = []
      # requester << "user_name=#{user_id}"
      requester << "owner_email=#{owner_email}"
      tags = []
      # options << "add_vdisk1=#{add_disk}"
      input = {
        'version'        => '1.1',
        'templateFields' => templateFields.join('|'),
        'vmFields'       => vmFields.join('|'),
        'requester'      => requester.join('|'),
        'tags'           => tags.join('|'),
        # 'options'      => options.join('|')
      }
      pp input
      response = build_automation_request(input)
      pp response
    end

Finally, we will create a new Worker object by instantiating the Worker class, and call its deploy_vm method, supplying the arguments we wish to use.

w = DCA::Worker.new
r = w.deploy_vm("win2k8tmpl", "VM Network", "172.28.158.125", "255.255.255.128", "172.28.158.1",
                "VMNAME", "sys.mycustomer.net", 2048, 2, 15, "yourname@yourdomain.com", "linux")
puts "Guess what! I built me a vm! #{r}"

That’s it. When this script is executed it should print out your request hash along with the output from the result of the request. If all goes well you should end up with a virtual machine running on your provider!

# chmod 755 myscript.rb
# ./myscript.rb
.... 
Guess what! I built me a vm!

Keep in mind you should utilize more robust error handling if you plan to use this seriously, and use something more secure than basic authentication between remote systems.
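For example, here is a minimal sketch of what that error handling might look like, assuming Savon 2.x (which raises Savon::SOAPFault and Savon::HTTPError when a call fails):

    def build_automation_request(body_hash)
      client = Savon.client(basic_auth: ["YOURUSER", "YOURPASSWORD"],
                            ssl_verify_mode: :none,
                            ssl_version: :TLSv1,
                            wsdl: "https://CFMEIPADDRESS/vmdbws/wsdl")
      begin
        client.call(:vm_provision_request, message: body_hash)
      rescue Savon::SOAPFault => e
        # The appliance accepted the connection but rejected the request
        warn "SOAP fault from CloudForms: #{e.message}"
        raise
      rescue Savon::HTTPError => e
        # Transport-level failures such as bad credentials or a down appliance
        warn "HTTP error from CloudForms: #{e.http.code}"
        raise
      end
    end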

You can download the entire script here.

Building the Bridge Between Present and Future IT Architectures

Life isn’t easy for IT organizations today. They find themselves on the receiving end of demands for new capabilities that public cloud providers are delivering at increasing speed. While solutions that deliver these same capabilities in the private datacenter are beginning to emerge, IT organizations don’t want to build yet another silo. Red Hat’s Open Hybrid Cloud Architecture is helping IT organizations adopt next generation IT architectures to meet the increasing demand for public cloud capabilities while helping them establish a common framework for all their IT assets. This approach provides a lot of benefits across all IT architectures. To name a few:

  • Discovery and Reporting: Detailed information about all workloads across all cloud and virtualization providers.
  • Self-Service: A single catalog which could provision services across hybrid and heterogeneous public and private clouds.
  • Best-Fit Placement: Helping identify which platform is best for which workload both at provision and run-time.

The engineers at Red Hat have been hard at work on the next release of CloudForms which is scheduled for General Availability later this year. I’ve been lucky enough to get my hands on a very early preview and wanted to share an update on two enhancements that are relevant to the topic of bridging present and future IT architectures. Before I dive into the enhancements let me get two pieces of background out of the way:

  1. Red Hat believes that the future IT architecture for Infrastructure as a Service (IaaS) is OpenStack. That shouldn’t come as a big surprise given that Red Hat was a major contributor to the OpenStack Grizzly release and has established a community for its distribution called RDO.
  2. There is a big difference between datacenter virtualization and clouds, and knowing which workloads should run on which is important. For more information on this you can watch Andy Cathrow’s talk at Red Hat Summit.

Two of the enhancements coming in the next release of CloudForms are the clear distinction between datacenter virtualization and cloud providers and the addition of OpenStack as a supported cloud provider.

By clearly separating and understanding the differences between datacenter virtualization (or infrastructure providers, as they are called in the user interface) and cloud providers, CloudForms will understand exactly how to operationally manage and standardize operational concepts across Red Hat Enterprise Virtualization, VMware vSphere, Amazon EC2, and OpenStack.

Cloud Providers


Infrastructure (Datacenter Virtualization) Providers


Also, as you noticed in the previous screenshots, CloudForms will support OpenStack as a cloud provider. This is critical to snapping another piece of the puzzle of Red Hat’s Open Hybrid Cloud Architecture into place and providing all the operational management capabilities for OpenStack that IT organizations need.

OpenStack Cloud Provider


These two enhancements will be critical for organizations who want a single pane of glass to operationally manage their Open Hybrid Cloud.

Single Pane Operational Management of RHEV, vSphere, AWS EC2, and OpenStack


Stay tuned for more updates regarding the next release of CloudForms!

Accelerating Service Delivery While Avoiding Silos

In a prior post on Red Hat’s Open Hybrid Cloud Architecture I discussed how IT consumers, having experienced the power of the public cloud, are pressing Enterprise IT to deliver new capabilities. One of these capabilities is accelerated service delivery, or the ability to more quickly develop and release new applications that meet a business need. In this post I’d like to examine how the Open Hybrid Cloud Architecture provides the means to satisfy this capability and how it differs from other approaches.

There are 1000 vendors who can provide accelerated service delivery, why not just buy a product?
Many vendors will try to sell a single product as being able to accelerate service delivery. The problem with this approach is that accelerating service delivery goes far beyond a single product, because no single product can provide all the necessary components of application development that an IT consumer could want. Think about all the languages, frameworks, and technologies from Java, .NET, and node.js to Hadoop, Cassandra, and Mongo to <insert your favorite technology name here>. The availability of all of these from a single product, vendor, or operating system in an optimized manner is highly unlikely. An approach that tries to accelerate service delivery within a single product or technology creates yet another silo and doesn’t solve the fundamental problem of accelerating service delivery across all of an IT organization’s assets.

How can Enterprise IT provide accelerated service delivery capabilities while avoiding a silo?
By leveraging an architecture that is flexible and in which each component is aware of its neighbors, organizations can accelerate service delivery without building a silo. Even better, having a component within your architecture that has a comprehensive understanding of every other component means virtually endless possibilities for workload deployment and management. Want to deploy your workload as a VM using PXE on Red Hat Enterprise Virtualization, a template within VMware vSphere, instances on OpenStack using Heat, or a gear in OpenShift? You can only do that if you understand each one of those technologies. Don’t build your logic for operations management into a single layer; keep it abstracted to ensure you can plug in whichever implementation of IaaS and PaaS best meets your needs. Does your application maintain too much state locally or scale vertically? Then it belongs on a traditional virtualization platform like VMware or RHEV. Is it a stateless scale-out application? Then you can deploy it on OpenStack. Are the languages and other dependencies available within a PaaS? Then it belongs in OpenShift. However, just deploying to each of those platforms is not enough. What about deploying one part of your workload as gears in OpenShift and another part as instances on OpenStack at the same time? You must be able to deploy to ALL platforms within the same workload definition! The Open Hybrid Cloud Architecture is providing the foundation for such flexibility in deployment and management of workloads in the cloud.

Can you provide an example?
Let’s look at an example of a developer who would like to develop a new application for the finance team within his organization. The developer would like to use Ruby as a web front end and use .NET within an IIS application server to perform some other functions. This developer expects the same capabilities he gets using Google App Engine, in that he wants to be able to push code and have it running in seconds. The user requests a catalog item from CloudForms which provides him with two components. The first is a Ruby application running in the OpenShift PaaS. The second is a virtual machine running on either Red Hat Enterprise Virtualization, VMware vSphere, or Red Hat OpenStack. The service designer who designed this catalog bundle recognized that Ruby applications can run in OpenShift, and because OpenShift provides greater efficiency for hosting applications than running each application within its own virtual machine, the designer ensured that this component runs in the PaaS layer. OpenShift also provides automation of the software development process, which gives the end user of the designed service greater velocity in development. Since an IIS application server wasn’t available within the PaaS layer, the service designer utilized a virtual machine at the datacenter virtualization layer (vSphere) to provide this capability.

Step by Step

1. The user requests the catalog item. CloudForms could optionally provide workflow (approval, quota, etc.) and best-fit placement at this point.

2. CloudForms provisions the ruby application in OpenShift Enterprise. The Ruby application is running as a gear.

3. CloudForms orchestrates adding an action hook into the OpenShift deployment. This can be done using any configuration management utility; I used Puppet and The Foreman in my demo video below.

4. The user begins developing their ruby application. They clone the repository and then commit and push the changes.

5. The action hook within OpenShift is triggered by the deploy stage of the OpenShift lifecycle and calls the CloudForms API, requesting that a virtual machine be created (a rough sketch of such a hook appears after these steps).

6. CloudForms provisions the virtual machine.
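To make step 5 concrete, here is a hedged sketch of what such a deploy action hook could look like in Ruby, reusing the Savon pattern from the CloudForms web services post earlier in this archive. The template name, credentials, and CFMEIPADDRESS placeholder are assumptions; the actual hook from the demo is available for download below.

#!/usr/bin/ruby
# Sketch of .openshift/action_hooks/deploy: runs during the deploy stage
# of the OpenShift lifecycle and asks CloudForms to provision a companion VM.
require 'savon'

client = Savon.client(basic_auth: ["YOURUSER", "YOURPASSWORD"],
                      ssl_verify_mode: :none,
                      wsdl: "https://CFMEIPADDRESS/vmdbws/wsdl")

# Reuse the pipe-delimited field format from the provisioning script above;
# the template name and email are placeholders.
client.call(:vm_provision_request, message: {
  'version'        => '1.1',
  'templateFields' => 'name=iis_template|request_type=template',
  'vmFields'       => "vm_name=#{ENV['OPENSHIFT_APP_NAME']}-iis|number_of_vms=1",
  'requester'      => 'owner_email=yourname@yourdomain.com',
  'tags'           => ''
})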

This is really just the beginning of the process, but hopefully you can see where it’s going. CloudForms can perform the deployment and teardown of the virtual machines each time a developer updates their application in OpenShift. It can even tie into other continuous integration systems to deploy application code into the IIS application server. This rapid delivery of the environment takes place across both the PaaS and IaaS. It also doesn’t try to invent a new “standard description” across all the different types of models; instead it understands the models and methods of automation within each component of the architecture and orchestrates them. While the virtual machines running at the IaaS layer don’t provide the same level of density as the PaaS, CloudForms and OpenShift can be combined to provide similar operational efficiency and expand the capabilities of OpenShift’s accelerated service delivery across an IT organization’s entire base of assets.

I still don’t believe you, can you show me?
Want to see it in action? Check out this short video demonstration in either Ogg or Quicktime format.

You can download the action hook here.

You can download the OpenOffice Draw Diagram here.

This is cool, what would be even cooler?
If the client tools could be intercepted by CloudForms, it could provide a lot of operational management capabilities to OpenShift. For example, when `rhc app create` is run, CloudForms could apply approvals, workflow, and quota to the OpenShift application. Or perhaps a future command such as `rhc app promote` could utilize the approvals and automation engine inside CloudForms to provide controlled promotion of applications through a change control process.

CloudForms and The Foreman Demonstration

The Foreman is a complete lifecycle management tool for physical and virtual servers. In other words, all the DHCP, DNS, configuration management, and so on that an operating system requires to function properly – The Foreman handles it, and handles it well. It also has an architecture which utilizes Smart-Proxies, which provide a RESTful API to underlying subsystems in distributed environments. While The Foreman understands some concepts within infrastructure management, it is primarily focused on provisioning virtual and physical machines and managing the operating systems contained within those machines once they are deployed. Provisioning and configuration management are two areas that are very important to enterprise customers, as they reduce the cost of operations while simultaneously reducing risk, specifically in the dynamic world of cloud computing.

The combination of provisioning and configuration management The Foreman provides is compelling, because it uses standard technologies that have existed for years and provides robust federation of those technologies. However, many IT organizations may already have a provisioning system or configuration management engine in place. Enterprises need a way of continuing to leverage their existing investment while planning their next generation IT architectures. CloudForms can assist with the adoption of The Foreman in existing IT architectures and also lays the groundwork for exciting new possibilities in streamlining application delivery across heterogeneous infrastructure.

CloudForms provides discovery, monitoring, eventing, control policies, chargeback, catalog based self-service – all of which are important. It also abstracts various provisioning methods for infrastructure and is not bound to a single configuration management system.

For this reason, and as you might suspect from the title of this post, it is only natural that CloudForms and The Foreman should complement each other. Here are a few ways in which they can do so.

  1. CloudForms can assist with the discovery and import of brown-field environments into The Foreman.
  2. CloudForms can allow users to leverage different provisioning systems while using The Foreman for configuration management.
  3. CloudForms can promote systems between environments (dev/test/prod) in The Foreman based on the data it contains or by integrating with external systems (ticketing, change control, capacity and utilization data).
  4. The Foreman can provide facts about systems to CloudForms for reporting and use in control policies, providing greater insight with less overhead.

These are just a few ideas; there are many more useful scenarios. One other scenario may be possible soon: an application developer implements a change in a PaaS application and a change in a Virtual Machine (VM) within an IaaS provider in a development environment. Perhaps this developer needed to use a virtual machine running IIS, which the PaaS doesn’t yet support. The PaaS event (a source control check-in) and the drift detection provided by The Foreman can be correlated by CloudForms, and a workflow can be initiated to re-provision the PaaS application and corresponding virtual machine to a continuous testing environment for analysis while taking into account cost, performance, and security requirements. The pieces are coming together to make this scenario a reality.

Here is a quick demonstration of how these two systems can work together. High-Res Quicktime Format

One final note: Since The Foreman will be included in a future version of Red Hat Satellite, it is likely that the integration between CloudForms and The Foreman will only improve over time.

Auto Scaling OpenShift Enterprise Infrastructure with CloudForms Management Engine

OpenShift Enterprise, Red Hat’s Platform as a Service (PaaS), handles the management of application stacks so developers can focus on writing code. The result is faster delivery of services to organizations. OpenShift Enterprise runs on infrastructure, and that infrastructure needs to be both provisioned and managed. While provisioning OpenShift Enterprise is relatively straightforward, managing the lifecycle of the OpenShift Enterprise deployment requires the same considerations as other enterprise applications, such as updates and configuration management. Moreover, while OpenShift Enterprise can scale applications running within the PaaS based on demand, the OpenShift Enterprise infrastructure itself is static and unaware of the underlying infrastructure. This is by design: the mission of the PaaS is to automate the management of application stacks, and tightly coupling the PaaS with the compute resources at the physical and virtual layers would limit flexibility. While this architectural decision is justified given the wide array of computing platforms that OpenShift Enterprise can be deployed upon (any that Red Hat Enterprise Linux can run upon), many organizations would like to not only dynamically scale their applications running in the PaaS, but dynamically scale the infrastructure supporting the PaaS itself. Organizations that are interested in scaling infrastructure in support of OpenShift Enterprise need not look further than CloudForms, Red Hat’s Open Hybrid Cloud Management Framework. CloudForms provides the capabilities to provision, manage, and scale OpenShift Enterprise’s infrastructure automatically based on policy.

For reference, the two previous posts I authored covered deploying the OpenShift Enterprise Infrastructure via CloudForms and deploying OpenShift Enterprise Applications (along with IaaS elements such as Virtual Machines) via CloudForms. Below are two screenshots of what this looks like for background.


Operations User Deploying OpenShift Enterprise Infrastructure via CloudForms


Self-Service User Deploying OpenShift Application via CloudForms

Let’s examine how these two automations can be combined to provide auto scaling of infrastructure to meet the demands of a PaaS. Today, most IT organizations monitor applications and respond to notifications after the event has already taken place – particularly when it comes to demand upon a particular application or service. There are a number of reasons for this approach, one of which is the legacy of “build to spec” systems found in both historical and currently designed application architectures. As organizations transition to developing new applications on a PaaS, however, they are presented with an opportunity to reevaluate the static and often oversubscribed nature of their IT infrastructure. In short, while applications designed in the past were not [often] built to scale dynamically based on demand, the majority of new applications are, and this trend is accelerating. In line with this accelerating trend, the infrastructure underlying these new expectations must support this new requirement, or much of the business value of dynamic scalability will not be realized. You could say that an organization’s dynamic scalability is bounded by its least scalable layer. This also holds true for organizations that intend to run solely on a public cloud and leverage resources at the IaaS layer.

Here is an example of how scalability of a PaaS would currently be handled in many IT organizations.


The operations user is alerted by a monitoring tool that the PaaS has run out of capacity to host new or scale existing applications.


The operations user utilizes the IaaS manager to provision new resources (Virtual Machines) for the PaaS.


The operations user manually configures the new resources for consumption by the PaaS.

Utilizing CloudForms to deploy, manage, and automatically scale OpenShift Enterprise removes the risk of manual configuration by the operations user while dynamically reclaiming unused capacity within the infrastructure. It also reduces the cost and complexity of maintaining a separate monitoring solution and IaaS manager. This translates to lower costs, greater uptime, and the ability to serve more end users. Here is how the process changes.


By notification from the PaaS platform, or by monitoring the infrastructure for specific conditions, CloudForms detects that the PaaS infrastructure is reaching its capacity. Thresholds can be defined using a wide array of metrics already available within CloudForms, such as aggregate memory utilized, disk usage, or CPU utilization.
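To sketch how such a condition might be expressed, here is a hypothetical method for the CloudForms automation engine. The tag name, threshold, and state machine path are illustrative assumptions rather than shipped defaults:

# Hypothetical Automate method: request a new OpenShift node when the
# hosts tagged as PaaS capacity exceed a VM-per-host ceiling.
MAX_VMS_PER_HOST = 20 # assumed threshold for illustration

nodes = $evm.vmdb('host').all.select do |host|
  host.tagged_with?('function', 'ose_node') # assumed tag category and name
end

if nodes.any? { |host| host.vms.size > MAX_VMS_PER_HOST }
  $evm.log('info', 'PaaS capacity threshold reached; requesting a new node')
  # Hand off to a provisioning state machine defined elsewhere (assumed path)
  $evm.instantiate('/Infrastructure/VM/Provisioning/ScaleOpenShiftNode')
end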


CloudForms examines conditions defined by the organization to determine whether or not the PaaS should receive more resources. In this case, it allows the PaaS to have more resources and provisions a new virtual machine to act as an OpenShift Node. At this point CloudForms could require approval of the scaling event before moving forward. The operations user or a third party system can receive an alert or event, but this is informational and not a request for the admin to perform any manual actions.


Upon deploying the new virtual machine, CloudForms configures it appropriately. This could mean installing the VM from a provisioning system, or utilizing a pre-defined template and registering it to a configuration management system, such as one based on Puppet or Chef, that configures the system.

Want to see a prototype in action? Check out the screencast I’ve recorded.

This same problem (the ability to dynamically scale a platform) exists between the IaaS and physical layers. If the IaaS layer runs out of resources, it is often not aware of the physical resources available for it to consume. This problem is not found in a large number of organizations because dynamically re-purposing physical hardware has a smaller and perhaps more specialized set of use cases (think HPC, grid, deterministic workloads). Even so, it should be noted that CloudForms is able to provide a similar level of policy-based automation to physical hardware to extend the capacity of the IaaS layer if required.

Hybrid Service Models: IaaS and PaaS Self-Service Converge

More and more organizations are beginning to embrace both Infrastructure as a Service (IaaS) and Platform as a Service (PaaS). These organizations have already begun asking why PaaS and IaaS must use different management frameworks. It only seems natural that an IT organization’s customers should be able to select both IaaS and PaaS elements during their self-service workflow. Likewise, operations teams within IT organizations would prefer to utilize the same methods of policy, control, and automation across both IaaS and PaaS elements. In doing so, operations teams could optimize workload placement both inside and outside their datacenter and reduce duplication of effort. This isn’t just a realization that customers are coming to; analysts have also been talking about the convergence of IaaS and PaaS as a natural evolutionary step in cloud computing.

Converged IaaS and PaaS

This convergence of IaaS and PaaS is something I referred to as a Hybrid Service Model in a previous post, but you may often hear it referred to as Converged IaaS and PaaS. An IT organization that does not embrace the convergence of IaaS and PaaS will face many detriments. Some of the more notable ones include the following.

  • Developers: Slower delivery of services
    • Developers accessing two self-service portals in which the portals do not have knowledge of each others capabilities leads to slower development and greater risk of human error due to less automated processes on workload provisioning and management.
  • Operations: Less efficient use of resources
    • Operations teams managing IaaS and PaaS with two management facilities will be unable to maximize utilization of resources.
  • Management: Loss of business value
    • IT managers will be unable to capitalize efficiently without an understanding of both IaaS and PaaS models.

For these reasons and many more, it’s imperative that organizations make decisions today that will lead them to the realization of a Hybrid Service Model. There are two approaches emerging in the industry for realizing a Hybrid Service Model. The first approach is to build a completely closed or semi-open solution. A good example would be a vendor offering a PaaS as long as it runs on top of a single virtualization provider (conveniently sold by them). The second approach is one in which a technology company utilizes an approach based on the tenets of an Open Hybrid Cloud to provide a fully open solution to enabling a Hybrid Service Model. I won’t go into all the reasons the second approach is better (you can read about that more here and here), but I will mention that Red Hat is committed to the Open Hybrid Cloud approach to enabling a Hybrid Service Model.

With all the background information out of the way, I’d like to show you a glimpse of what will be possible with the Open Hybrid Cloud approach at Red Hat. Red Hat is building the foundation to offer customers Hybrid Service Models alongside Hybrid Deployment Scenarios. This is possible for many reasons, but in this scenario it is primarily because of the open APIs available in OpenShift, Red Hat’s PaaS, and the extensibility of CloudForms, Red Hat’s Hybrid Cloud Management solution. The next release of CloudForms will include a Management Engine component, based on the acquisition of ManageIQ that occurred in December. Using the CloudForms Management Engine it is possible to provide self-service of applications in a PaaS along with self-service of infrastructure in IaaS from a single catalog. Here is what a possible workflow would look like.
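As a small illustration of those open APIs, here is a hedged sketch of how one might list domains from the OpenShift broker’s REST API with plain Ruby. The broker hostname and credentials are placeholders, and the response fields should be verified against your broker’s API version:

#!/usr/bin/ruby
require 'net/http'
require 'json'
require 'openssl'

uri = URI('https://broker.example.com/broker/rest/domains')
http = Net::HTTP.new(uri.host, uri.port)
http.use_ssl = true
http.verify_mode = OpenSSL::SSL::VERIFY_NONE # lab broker with a self-signed cert

req = Net::HTTP::Get.new(uri.request_uri, 'Accept' => 'application/json')
req.basic_auth('YOURUSER', 'YOURPASSWORD')

response = http.request(req)
# OpenShift REST responses wrap their payload in a "data" element
JSON.parse(response.body)['data'].each { |domain| puts domain['id'] }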

Higher resolution viewing in quicktime format here.

Self-Service OpenShift Enterprise Deployments with ManageIQ ECM

In the previous post I examined how Red Hat Network (RHN) Satellite could be integrated with ManageIQ Enterprise Cloud Management (ECM). With this integration in place, Satellite could provide ECM with the content required to install an operating system into a virtual machine and close the loop in ongoing systems management. This was just a first look, and there is a lot of work to be done to enable discovery of RHN Satellite and out-of-the-box best practice automation via ECM. That said, the combination of ECM and RHN Satellite provides a solid foundation for proceeding to use cases higher in the stack.

With this in mind, I decided to attempt automating a self-service deployment of OpenShift using ManageIQ ECM, RHN Satellite, and puppet.

Lucky for me, much of the heavy lifting had already been done by Krishna Raman and others who developed puppet modules for installing OpenShift Origin. There were several hurdles that had to be overcome with the existing puppet modules for my use case:

  1. They were built for Fedora and OpenShift Origin, and I am using RHEL 6 with OpenShift Enterprise. Because of this they defaulted to using newer rubygems that weren’t yet available in OpenShift Enterprise. It took a little time to reverse engineer the puppet modules to understand exactly what they were doing and tweak them for OpenShift Enterprise.
  2. The OpenShift Origin puppet module leveraged some other puppet modules (stdlib, for example), so the puppet module tool (PMT) was needed, which is not available in core puppet until versions newer than 2.7. Of course, the only version of puppet available in EPEL for RHEL 6 was puppet-2.6. I pulled an internal build of puppet-2.7 to get around this, but still required some packages from EPEL to solve dependencies.

Other than that, I was able to reuse much of what already existed and deploy OpenShift Enterprise via ManageIQ ECM. How does it work? Very similar to the Satellite use case, but with the added steps of deploying puppet and a puppet master onto the deployed virtual machine and executing the puppet modules.


Workflow of OpenShift Enterprise deployment via ECM

If you are curious how the puppet modules work, here is a diagram that illustrates the flow of the openshift puppet module.

Anatomy of OpenShift Puppet Module

Here is a screencast of the self-service deployment in action.

There are a lot of areas that can be improved in the future. Here are four which were top of mind after this exercise.

First, runtime parameters should be able to be passed to the deployment of virtual machines. These parameters should ultimately be part of a service that could be composed into a deployment. One idea would be to expose puppet classes as services that could be added to a deployment. For example, layering a service of openshift_broker onto a virtual machine would instantiate the openshift_broker class on that machine upon deployment. The parameters required for openshift_broker would then be exposed to the user if they would like to customize them.

Second, gears within OpenShift – the execution area for applications – should be able to be monitored from ECM much like Virtual Machines are today. The oo-stats package provides some insight into what is running in an OpenShift environment, but more granular details could be exposed in the future. Statistics such as I/O, throughput, sessions, and more would allow ECM to further manage OpenShift in enterprise deployments and in highly dynamic environments or where elasticity of the PaaS substrate itself is a design requirement.

Third, building an upstream library of automation actions for ManageIQ ECM, so that these exercises could be saved and reused, would be valuable. While I only focused on a simple VM deployment in this scenario, in the future I plan to use ECM’s tagging and Event, Condition, Action construct to register Brokers and Nodes to a central puppet master (possibly via Foreman). The thought is that once a system is automatically tagged by ECM with a “Broker” or “Node” tag, an action could be taken by ECM to register it to the puppet master, which would then configure the system appropriately. All these automation actions are exportable, but no central library exists for them at the current time to promote sharing.

Fourth, and possibly most exciting, would be the ability to request applications from OpenShift via ECM alongside requests for virtual machines. This ability would lead to the realization of a hybrid service model. As far as I’m aware, this is not provided by any other vendor in the industry. Many analysts are coming around to the fact that the line between IaaS and PaaS will soon be gray. Driving the ability to select a PaaS-friendly application (Python, for example) and traditional infrastructure components (a relational database, for example) from a single catalog would provide a simplified user experience and create new opportunities for operations to drive even higher utilization at lower costs.

I hope you found this information useful. As always, if you have feedback, please leave a comment!

Using a Remote ImageFactory with CloudForms

In Part 2 of Hands on with ManageIQ EVM I explored how ManageIQ and CloudForms could potentially be integrated in the future. One of my suggestions was to allow imagefactory to run within the cloud resource provider (vSphere, RHEV, OpenStack, etc.). This would simplify the architecture and remove the need to host Cloud Engine on physical hardware. Requiring less infrastructure is important for a number of scenarios beyond the workflow I explained in the earlier post. One scenario in particular is providing demonstration environments of CloudForms to a large group of people, for example while training students on CloudForms.

Removing the physical hardware requirement for CloudForms Cloud Engine can be done in two ways. The first is by using nested virtualization. This is not yet available in Red Hat Enterprise Linux, but is available upstream in Fedora. The second is by running imagefactory remotely on a physical system and the rest of the components of CloudForms Cloud Engine within a virtual machine. In this post I’ll explore utilizing a physical system to host imagefactory and the modifications necessary to a CloudForms Cloud Engine environment to make it happen.

How It Works

The diagram below illustrates the decoupling of imagefactory from conductor. Keep in mind, this is using CloudForms 1.1 on Red Hat Enterprise Linux 6.3.

Using a remote imagefactory with CloudForms

1. The student executes a build action in their Cloud Engine. Each student has his/her own Cloud Engine and it is built on a virtual machine.

2. Conductor communicates with imagefactory on the physical cloud engine and instructs it to build the image. There is a single physical host acting as a shared imagefactory for every virtual machine hosting Cloud Engine for the students.

3. Imagefactory builds the image based on the content from virtual machines hosting CloudForms Cloud Engine.

4. Imagefactory stores the built images in the image warehouse (IWHD).

5. When the student wants to push that image to the provider, in this case RHEV, they execute the action in Cloud Engine conductor.

6. Conductor communicates with imagefactory on the physical cloud engine and instructs it to push the image to the RHEV provider.

7. Imagefactory pulls the image from the warehouse (IWHD) and

8. pushes it to the provider.

9. The student launches an application blueprint which contains the image.

10. Conductor communicates with deltacloud (dcloud) requesting that it launch the image on the provider.

11. Deltacloud (dcloud) communicates with the provider requesting that a virtual machine be created based on the template.

Configuration

Here are the steps you can follow to enable a single virtual machine hosting Cloud Engine to build images using a physical system’s imagefactory. These steps can be repeated and automated to stand up a large number of virtual cloud engines that use a single imagefactory on a physical host. I don’t see any reason why you couldn’t use the RHEL host that acts as a hypervisor for RHEV, or the RHEL host that serves the export storage domain; in fact, that might speed up performance. Anyway, here are the details.

1. Install CloudForms Cloud Engine on both the virtual-cloud-engine and physical-cloud-engine hosts.

2. Configure cloud engine on both the virtual-cloud-engine and the physical-cloud-engine.

virtual-cloud-engine# aeolus-configure
physical-cloud-engine# aeolus-configure

3. On the virtual-cloud-engine configure RHEV as a provider.

virtual-cloud-engine# aeolus-configure -p rhevm

4. Copy the oauth information from the physical-cloud-engine to the virtual-cloud-engine.

virtual-cloud-engine# scp root@physical-cloud-engine:/etc/aeolus-conductor/oauth.json /etc/aeolus-conductor/oauth.json

5. Copy the settings for conductor from the physical-cloud-engine to the virtual-cloud-engine.

virtual-cloud-engine# scp root@physical-cloud-engine:/etc/aeolus-conductor/settings.yml /etc/aeolus-conductor/settings.yml

6. Replace localhost with the IP address of the physical-cloud-engine in the iwhd and imagefactory stanzas of /etc/aeolus-conductor/settings.yml on the virtual-cloud-engine.

7. Copy the rhevm.json file from the virtual-cloud-engine to the physical-cloud-engine.

physical-cloud-engine# scp root@virtual-cloud-engine:/etc/imagefactory/rhevm.json /etc/imagefactory/rhevm.json

8. Manually mount the RHEVM export domain listed in the rhevm.json file on the physical-cloud-engine.

physical-cloud-engine# mount nfs.server:/rhev/export /mnt/rhevm-nfs

9. After this is done, restart all the aeolus services on both the physical-cloud-engine and the virtual-cloud-engine to make sure they are using the right configurations.

physical-cloud-engine# aeolus-restart-services
virtual-cloud-engine# aeolus-restart-services

Once this is complete, you should be able to build images on the remote imagefactory instance.

Multiple Cloud Engines sharing a single imagefactory

It should be noted that running a single imagefactory to support multiple Cloud Engines is not officially supported, and is probably not tested. In my experience, however, it seems to work. I hope to have time to post more details in the future on the performance of a single imagefactory handling concurrent build and push operations from multiple cloud engines.

Hands on with ManageIQ EVM – Part 2: Exploring Integration with CloudForms

In Part 1 of the Hands on with ManageIQ EVM series I walked through how easy it is to deploy and begin using EVM Suite. In Part 2, I’d like to explore how a workflow between EVM Suite and CloudForms can be established. This is important because, while some of the capabilities of EVM Suite and CloudForms overlap, there are vast areas where they complement one another, and together they provide a range of capabilities that is very compelling and in many ways unmatched by other vendors.

The Capabilities

EVM Suite’s capabilities fall squarely into providing operational management around infrastructure and virtual machines. CloudForms provides the ability to install operating systems into virtual machine containers and manage the content that is available or installed in those operating systems once the virtual machines are running. Of course, CloudForms also provides a self-service catalog which end users can use to deploy application blueprints. This is an area of overlap and will likely be worked out over the roadmap of the products. In this workflow, we’ll use ManageIQ as the self-service portal, but it could just as easily have been CloudForms.

The Chicken and Egg

One of the values that needs to be maintained during the workflow is the ability for CloudForms to manage the launched instances in order to provide updated content (RPM packages) to them. The problem is that the images must be built using CloudForms Cloud Engine, but they are being launched by a user via EVM Suite. What is needed is a way to tell the launched instance to register to CloudForms System Engine. When CloudForms Cloud Engine is used to launch the instances, it can execute a service on them to have them register to CloudForms System Engine. With EVM Suite launching the instances, I wasn’t aware of any way this could be passed into the virtual machines on launch. This might be a limitation of my knowledge of EVM Suite; I’ll explore it further in the future.

Catwalk

To get around the chicken-and-egg problem I’ve created something I call Catwalk. It is an RPM package which, once installed, allows a user to specify a CloudForms System Engine for the instance to register to upon boot. You can download it here.

The logic works like this:

1. On installation, catwalk places a line in /etc/rc.local to execute /opt/catwalk/cfse-register.

2. cfse-register defaults to looking for a host named catwalk and attempts to download http://catwalk/pub/environment.

3. If the file “environment” exists at http://catwalk/pub/environment, it is used to set the variables for catwalk.

4. If the file “environment” does not exist, then a default set of variables from the catwalk.conf file is used.

5. The catwalk cfse-register script runs subscription-manager to register the system to the system engine (a rough sketch follows).
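As a hedged illustration of steps 2 through 5 (not the actual packaged script, which you can download above), cfse-register could look something like this in Ruby; the variable names and defaults are assumptions:

#!/usr/bin/ruby
# Hypothetical sketch of /opt/catwalk/cfse-register
require 'open-uri'

# Assumed default variables, normally supplied by catwalk.conf
defaults = { 'CFSE_HOST' => 'cfse.example.com', 'CFSE_ORG' => 'Default_Organization' }

env = begin
  # Prefer the centrally published settings if a host named catwalk answers
  URI.parse('http://catwalk/pub/environment').read
rescue StandardError
  # Fall back to the local configuration shipped with the package
  File.exist?('/etc/catwalk.conf') ? File.read('/etc/catwalk.conf') : ''
end

# Merge KEY=value pairs over the defaults
settings = defaults.merge(Hash[env.scan(/^(\w+)=(\S+)$/)])

# Register the system to CloudForms System Engine
system('subscription-manager', 'register',
       "--serverurl=#{settings['CFSE_HOST']}",
       "--org=#{settings['CFSE_ORG']}",
       '--autosubscribe')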

In the future it would be more elegant if catwalk registered the instance to a puppet master and utilized it to register to System Engine and apply/enforce configurations. For the time being, catwalk can be slipstreamed into images during the step in which CloudForms builds provider-specific images. This helps us get past the chicken-and-egg problem of system registration for ongoing content updates.

The Proposed Workflow

The diagram below illustrates the workflow I was attempting to create between CloudForms and ManageIQ EVM.

Example Workflow

1. Synchronize the content for building a Red Hat Enterprise Linux virtual machine and the catwalk package into CloudForms System Engine or alternatively perform step 1a.

1a. CloudForms Cloud Engine is used to build the images. The targetcontent.xml file is edited to ensure that the catwalk package is automatically included in the images built.

<template_includes>
  <include target='rhevm' os='RHEL-6' version='3' arch='x86_64'>
    <packages>
      <package name='rhev-agent'/>
      <package name='katello-agent'/>
      <package name='aeolus-audrey-agent'/>
      <package name='catwalk'/>
    </packages>
    <repositories>
      <repository name='cf-tools'>
      <url>https://rhc-cfse2.lab.eng.bos.redhat.com/pulp/repos/Finance_Organization/Administrative/content/dist/rhel/server/6/6.3/x86_64/cf-tools/1/os/</url>
      <persisted>No</persisted>
      <clientcert>-----BEGIN CERTIFICATE-----
my unique certificate here
-----END CERTIFICATE-----
</clientcert>
      <clientkey>-----BEGIN RSA PRIVATE KEY-----
my unique key here
-----END RSA PRIVATE KEY-----
</clientkey>
      </repository>
      <repository name='catwalk'>
      <url>http://rhc-02.lab.eng.bos.redhat.com/pub/repo/catwalk/</url>
      <persisted>No</persisted>
      </repository>
    </repositories>
  </include>
</template_includes>

2. CloudForms Cloud Engine pushes the images to Red Hat Enterprise Virtualization.

3. EVM Suite discovers the templates.

4. When a user launches a virtual machine based on the template, the catwalk package executes and registers the virtual machine to CloudForms System Engine.

5. CloudForms System Engine can manage the virtual machine as it runs.

The Problem

As I went to implement this workflow I ran into a problem. Today, EVM Suite requires a gPXE server in order to launch virtual machines on RHEV. It does not support launching a virtual machine strictly from a template. I will be working with the team to determine the best way to move forward in both the short and long term.

Short Term Possibilities

In the short term I’m going to try to slipstream the catwalk RPM into the gPXE environment. This will hopefully allow an OS to be built via EVM and have it attach to CloudForms System Engine automatically. This effectively removes CloudForms Cloud Engine from the workflow. I’ll provide an update once I get to this point.

Future Suggestions

There are a LOT of suggestions I have floating around in my head, but here are just a few changes that would make this workflow easier and more valuable in the future.

First, include the image building capabilities within the providers of Red Hat Enterprise Virtualization, vSphere, and OpenStack. If imagefactory were viewed as a service within the provider that EVM was aware of and could orchestrate, there would be no need to include CloudForms Cloud Engine in the workflow. This would eliminate an extra piece of required infrastructure and simplify the user experience while maintaining the value of image building. The virtualization/cloud provider has the physical resources required to build images at scale anyway, so why perform the work locally on a single system when you have the whole “cloud” at your disposal?

It would also be useful to build a custom action into EVM Suite that automatically deletes the registered system in CloudForms System Engine once the virtual machine is removed in EVM Suite. This would automate the end of the lifecycle.

One area for further thought is creating a provisioning workflow that is easy and flexible. For example, maintaining the image building capabilities while also allowing for installation via gPXE (or a MaaS), and being able to reconcile the differences between that and an existing image, would be ideal.


Hands on with ManageIQ EVM – Part 1: Deployment and Initial Configuration

As you might have heard, Red Hat has acquired ManageIQ. This is an exciting acquisition, as it brings many new technologies to Red Hat that will help it continue to deliver on the vision of an Open Hybrid Cloud. I have begun to get hands on with ManageIQ’s EVM Suite in order to better understand where it fits in relation to Red Hat’s current products and solutions, including Red Hat Enterprise Virtualization and CloudForms. I thought I’d document my experience here in the hope it might be useful to others looking to gain insight into EVM Suite.

EVM is a snap to deploy. It is provided as an OVF-based appliance, so it can be deployed on just about any virtualization provider. In the case of Red Hat Enterprise Virtualization (RHEV), I simply utilized the rhevm-image-uploader tool to upload the EVM appliance to my RHEV environment.

rhevm-image-uploader -r rhc-rhevm.lab.eng.bos.redhat.com:8443 -e EXPORT upload evm-v5.0.0.25.ovf

Once it was uploaded it showed up as a template in the RHEV Management (RHEV-M) console in my export domain. I then imported the template.

Upon importing the template, I created a virtual machine based on the EVM template.

Once the virtual machine was running I could immediately access the EVM console. The longest part of the whole exercise was waiting for the virtual machine to be cloned from the template.

Deployment was fast. Next I logged in and uploaded my license. The web user interface has a menu at the top which is organized into functional areas of the EVM suite. There is a section for “Settings and Operations” which allows you to configure the EVM suite and apply new fixpacks among other things.

After browsing through the configuration of the EVM appliance you’ll likely want to add a management system. In this case, I added both RHEV and vSphere as management systems within EVM. I also refreshed the relationships for the management systems so that EVM could inventory all the objects within each management system, for example how many hosts, clusters, and virtual machines are within each provider.

Once the relationships have been refreshed, you can begin exploring the inventories of all the management systems within EVM.

I hope you’ve found this useful – Stay tuned for more hands on with ManageIQ EVM.