Scalable Infrastructure

In a previous post I outlined the common problems organizations face across both their traditional IT environments (sometimes called mode-1) and new emerging IT environments (sometimes called mode-2). These included:

  • Accelerating the delivery of services in traditional IT environments to satisfy customer demands
  • Optimizing traditional IT environments to increase efficiency
  • Creating new development and operations practices for emerging IT environments to innovate faster
  • Delivering public cloud-like infrastructure that is scalable and programmable

I’d like to show you a quick demonstration of how Red Hat is delivering scalable infrastructure with the capabilities that enterprises demand. Red Hat Enterprise Linux OpenStack Platform delivers scale-out private cloud capabilities with a stable lifecycle and a large ecosystem of supported hardware platforms. Many organizations are building their next-generation cloud infrastructures on OpenStack because its asynchronous, API-centric architecture allows for greater scale and greater efficiency in platform management. OpenStack does not, however, provide functionality such as chargeback, reporting, and policy-driven automation for tenant workloads, and the projects that aspire to do so are generally focused solely on OpenStack. That is not realistic in an increasingly hybrid world – and enterprises that are serious about OpenStack need these capabilities. By using Red Hat CloudForms together with Red Hat Enterprise Linux OpenStack Platform, it’s possible to provide capabilities such as reporting, chargeback, and auditing of tenant workloads across a geographically diverse deployment. The demo below shows how chargeback works across a multi-site OpenStack deployment.

I hope you found this demonstration useful!

P.S. – If you are a Red Hatter or a Red Hat Partner, this demonstration is available in the Red Hat Product Demo System and is named “Red Hat Cloud Suite Reporting Demonstration”.


Optimizing IT

In a previous post I outlined the common problems organizations face across both their traditional IT environments (sometimes called mode-1) and new emerging IT environments (sometimes called mode-2). These included:

  • Accelerating the delivery of services in traditional IT environments to satisfy customer demands
  • Optimizing traditional IT environments to increase efficiency
  • Creating new development and operations practices for emerging IT environments to innovate faster
  • Delivering public cloud-like infrastructure that is scalable and programmable

I’d like to show you a quick demonstration of how Red Hat is helping optimize traditional IT environments. There are many ways in which Red Hat does this, from discovering and right-sizing virtual machines to free up space in virtual datacenters, to creating a standard operating environment across heterogeneous environments to reduce complexity. In this demonstration, however, I’ll focus on how Red Hat enables organizations to migrate workloads to their ideal platform. In the video below you’ll see how tools found in Red Hat Enterprise Virtualization and Red Hat Enterprise Linux OpenStack Platform, combined with automation and orchestration from Red Hat CloudForms, make it possible to migrate virtual machines in an automated fashion from VMware vSphere to either RHEV or Red Hat Enterprise Linux OpenStack Platform. Keep in mind that these tools assist with the migration process but need to be tailored to your specific environment. That said, once designed, they can greatly reduce the time and effort required to move large numbers of virtual machines.

I hope you found this demonstration useful!

P.S. – If you are a Red Hatter or a Red Hat Partner, this demonstration is available in the Red Hat Product Demo System and is named “Red Hat Cloud Suite Migration Demonstration”.


Accelerating Service Delivery Demonstration

In a previous post I outlined the common problems organizations face across both their traditional IT environments (sometimes called mode-1) and new emerging IT environments (sometimes called mode-2). These included:

  • Accelerating the delivery of services in traditional IT environments to satisfy customer demands
  • Optimizing traditional IT environments to increase efficiency
  • Creating new development and operations practices for emerging IT environments to innovate faster
  • Delivering public cloud-like infrastructure that is scalable and programmable

I’d like to show you a quick demonstration of how Red Hat is helping accelerate service delivery for traditional IT environments. Developers and line-of-business users request stacks daily to create new services or test functionality. Each of these requests results in a lot of work for operations and security teams. From creating virtual machines, to installing application servers, to securing the systems – these tasks take time away from valuable resources that could be doing something else (like building out the next-generation platform for development and operations). Many solutions exist for automating the deployment of virtual machines or the applications inside them, but Red Hat is uniquely positioned to automate both. By leveraging Red Hat CloudForms in conjunction with Red Hat Satellite, it is possible to create a re-usable description of your application that can be deployed automatically via self-service, with governance and controls, across a hybrid cloud infrastructure. In the demonstration below we show the self-service automated deployment of a WordPress application consisting of HAProxy, two WordPress application servers, and a MariaDB database across both VMware vSphere and Red Hat Enterprise Virtualization.

P.S. – If you are a Red Hatter or a Red Hat Partner, this demonstration is available in the Red Hat Product Demo System under the name “Red Hat Cloud Suite Deployment Demo”.


Can Strategic Design Improve the Design and User Experience Across Open Source Communities?

If you speak to anyone involved in information technology, there is little debate that the open source development model is the de facto development model for the next generation of technology. Cloud infrastructure with OpenStack, continuous integration with Jenkins, containers with Docker, automation with Ansible – these areas are all being transformed by technologies delivered via the open source development model. Even Microsoft has begun to embrace the open source development model (albeit sometimes only partially).

The use of an open source development model as a default is good news for everyone. Users (and organizations) gain access to the greatest amount of innovation and can participate in development. At the same time, developers are able to increase their productivity and gain access to more collaborators. As organizations look to adopt technologies based on open source, they often realize that it’s easier to purchase open source software than to obtain it directly from the community. Many reasons – including support, cost, focus on core business, and even indemnification – make it beneficial to purchase rather than consume directly from the community. Red Hat (where I work) is an example of a company that provides software in exactly this way.

This model works well when organizations are using a single product based on a single open source community project. However, the open source development model poses challenges to creating a cohesive design and user experience across multiple products derived from open source projects. This problem ultimately affects the design and experience of the products organizations buy. For open source to continue its success in becoming the de facto standard, the problem of coordinating design and user experience across multiple communities needs to be solved.

The challenge of solving this problem should not be underestimated. If you think influencing a single open source community is difficult, imagine how challenging it is to influence multiple communities in a coordinated manner. Developers in communities are purpose-driven, intensely focused on their own problem, and committed to incremental progress. They justifiably shy away from grand plans, large product requirements documents, and any forceful attempts to change what they are building. After all, that’s how monolithic, proprietary software lost to the fast-moving and modular open source development model.

What is needed is a way to illustrate to development and community leaders how they can better solve their problem by working well with other communities – and to let those leaders conclude on their own that they should work in the illustrated manner.

By practicing strategic design it may be possible to provide the clarity of vision and reasoning required to effectively influence multiple open source communities to work more cohesively. Strategic design is the application of future-oriented design principles in order to increase an organization’s innovative and competitive qualities. Tim Brown, CEO of IDEO, summarizes some of the purposes of design thinking (a major element of strategic design) really well in his Strategy by Design article.

At Red Hat we have recently begun testing this theory by organizing a design practice trial. The design practice trial team consisted of experts on various technologies from the field and engineering, along with user experience experts.

[Photo: part of the workshop trial team, during the final exercise on Day 3]

The premise for our design practice trial was simple:

  • Identify a common problem taking place across our customers using multiple open source products.
  • Analyze how the problem can be solved using the products.
  • Conceptualize an ideal user experience starting from a blank slate.
  • Share what was discovered with community leaders and product managers to assist with incremental improvement and influence direction toward the ideal user experience.

Identify
The common problem we found was organizations struggling to design re-usable services effectively. The persona we identified for our trial was the service designer. The service designer identifies, qualifies, builds, and manages entries in a service catalog for self-service consumption by consumers. The service designer would like to easily design and publish re-usable entries in a catalog from the widest possible range of items.

Analyze
The products we used to analyze the current user experience were:

  • OpenStack to deliver Infrastructure as a Service (IaaS)
  • OpenShift to deliver Platform as a Service (PaaS)
  • ManageIQ (CloudForms) to deliver a Cloud Management Platform (CMP)
  • Pulp and Candlepin (Satellite) to deliver content mirroring and entitlement management
  • Ansible (Tower) to deliver automation

Looking across our communities, we find lots of “items” in each project. OpenStack provides Heat templates, OpenShift provides Kubernetes templates, Ansible provides playbooks, and so on. In addition, the service designer would likely want to mix in items from outside these projects for use in a catalog entry, including public cloud and SaaS services.

What would it look like for the service designer to assemble all these items into an entry that could be ordered by a consumer and lead to a repeatable deployment? During a 3-day workshop we put ourselves in the shoes of the service designer and attempted to design an application.

[Screenshot: the example application]

The team was able to design the catalog entry in about 8 hours, and we feel we could have done it even faster had we not been so new to Ansible Tower.

Here is a quick video demonstration of deploying this application from the catalog as a finished product.


I’ll spare everyone the detailed results here (if you work at Red Hat, send me a note and I’ll link you to our more complete analysis), but the exercise of analyzing the current solution allowed us to identify many areas for incremental improvement when using these products together to satisfy the service designer’s use case. We also identified longer-term design questions that need to be resolved between the products (and ultimately, the upstream projects).

Conceptualize
What would the ideal user experience be? Another exercise we performed in our workshop was designing an ideal user experience starting from a blank slate. This included challenging assumptions while still defining some constraints for the concept. It proved to be challenging and eye-opening for everyone involved. Starting from scratch with a design, without worrying about the underlying engineering that would be required, is difficult for engineering-minded individuals. We began developing what we believe the ideal user experience for service design would be. This will be worked into a workflow and low-fidelity mockups to illustrate the basics of the experience.

Share
As next steps, we will share our findings with community leaders and product managers in the hope that they positively impact the design and user experience. We will also continue meeting with customers who we believe suffer from the service design problem, both to refine our proposed ideal design and to help them understand how our current solution works. If all goes well, we might even attempt a prototype or mock user interface to start. There are plenty of other angles we still need to address, such as the viability of customer adoption and customers’ willingness to pay for such a concept. For a 3-day workshop (plus lots of prep beforehand), however, we feel we are off to a good start.

Will the community leaders accept our assertion that they can deliver a better experience by working together? Will any of the concepts and prototypes make it into the hands of customers? That remains to be seen. My hunch is that none of the communities will accept the exact conceptual design and user experience put forth by the design practice, but that it will positively influence the design and user experience within the open source communities and ultimately make its way to customers via Red Hat’s products and solutions. In any case, the more time we spend practicing design, the better the lives of our customers will become.

-James
@jameslabocki


Ansible Tower Dynamic Inventory from CloudForms

In my previous post I showed an example of how, as part of provisioning a service, CloudForms can be integrated with Ansible Tower to provide greater re-usability and portability of stacks across multiple infrastructure/cloud providers. Now I’d like to show an example of how Ansible Tower’s Dynamic Inventory feature can be used in conjunction with the inventory in CloudForms to populate an inventory against which job templates can be executed. Right now CloudForms has hosts and virtual machines in its inventory that would be useful to Ansible, but in the next version container support will allow CloudForms to pass Ansible an inventory of containers as hosts as well (that will be really interesting).

For those not familiar, Dynamic Inventory is a feature in Ansible Tower that allows users to maintain an inventory of hosts based on data in an external system (LDAP, Cobbler, CMDBs, EC2, etc.) so they can integrate Ansible Tower into their existing environment instead of building a static inventory inside Ansible Tower itself. Since CloudForms can maintain discovery of existing workloads across many providers (vSphere, Hyper-V, RHEV, OpenStack, and EC2, to name a few), it seems natural that it would be a great source for providing a dynamic inventory to Ansible for execution of job templates.

I authored the ansible_tower_cloudforms_inventory.py script to allow users to build a dynamic inventory in Ansible Tower from the CloudForms virtual machine inventory. This means that any time a user provisions a VM on vSphere, Hyper-V, OpenStack, RHEV, EC2, or another supported platform, CloudForms will automatically discover that VM and Ansible Tower will add it to an inventory so it can be managed via Ansible Tower.
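To make the mechanics concrete, here is a heavily simplified sketch of what a CloudForms-backed dynamic inventory script can look like. It is not the actual ansible_tower_cloudforms_inventory.py: the appliance URL and credentials are placeholders (the real script reads them from the cloudforms.ini file described below), and it assumes the standard CloudForms REST API /api/vms collection.

```python
#!/usr/bin/env python
# Simplified sketch of a CloudForms dynamic inventory script for Ansible Tower.
# Not the real ansible_tower_cloudforms_inventory.py -- the URL and credentials
# below are placeholders (the real script reads them from cloudforms.ini).
import json
import requests

CLOUDFORMS_URL = "https://cloudforms.example.com"  # hypothetical appliance
USERNAME = "admin"
PASSWORD = "changeme"


def main():
    # Ask the CloudForms REST API for all VMs, expanded so that each resource
    # carries its name and power_state attributes.
    resp = requests.get(
        CLOUDFORMS_URL + "/api/vms",
        params={"expand": "resources", "attributes": "name,power_state"},
        auth=(USERNAME, PASSWORD),
        verify=False,  # certificate checks skipped, as noted later in the post
    )
    resp.raise_for_status()
    vms = resp.json().get("resources", [])

    # Skip templates and powered-down machines; only running VMs are useful
    # targets for job templates.
    hosts = [vm["name"] for vm in vms if vm.get("power_state") == "on"]

    # Emit the JSON structure Ansible expects from an inventory script.
    # (Tower invokes the script with --list; this sketch ignores arguments.)
    print(json.dumps({
        "cloudforms": {"hosts": hosts},
        "_meta": {"hostvars": {}},
    }))


if __name__ == "__main__":
    main()
```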

To use the script, simply navigate to Ansible Tower’s setup page and select “Inventory Scripts”.

From there, select the icon to add a new inventory script. You can now add the inventory script, name it, and associate it with your organization.


You should now see the added inventory script.


Now, within the inventory of your choosing, add a new group named “Dynamic_CloudForms”, change the source to “Custom Script”, and select your newly added script within “Custom Inventory Script”.


Your new group should now appear in your inventory.


One last thing needed by the script is the cloudforms.ini file. This file holds things like the hostname of your CloudForms instance, the username and password to use for authentication, and other information. You’ll need to place it on your Ansible Tower server at /opt/rh/cloudforms.ini (a sketch of how the script might read it follows below). I also found I had to install the Python requests library on the Ansible Tower server (`yum install python-pip -y; pip install requests`).
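In case you’re wondering how the script might pick that file up, here is a hedged sketch using Python’s standard configparser. The [cloudforms] section and key names are my assumptions for illustration; check the script itself for the exact names it expects.

```python
# Sketch of loading /opt/rh/cloudforms.ini. The [cloudforms] section and the
# hostname/username/password key names are assumptions for illustration.
try:
    import configparser                      # Python 3
except ImportError:
    import ConfigParser as configparser      # Python 2, common on Tower hosts

config = configparser.ConfigParser()
config.read("/opt/rh/cloudforms.ini")

hostname = config.get("cloudforms", "hostname")
username = config.get("cloudforms", "username")
password = config.get("cloudforms", "password")
```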

Now you should be able to run the “start sync process” manually from the inventory screen (it’s the icon that looks like two arrows pointing in opposite directions). You can also schedule this sync to run on a recurring basis.


And voilà! Your inventory has been populated with names from CloudForms. The script takes into account that some VMs may be powered down or may actually be templates, and adds only those machines with a power_state of “on”.

It should be noted that the inventory script currently adds the VM based on its name in CloudForms. This is because smart state analysis isn’t set up on my appliance and I don’t have any other fields available to me. What should be done eventually (when smart state is working) is changing the ansible_tower_cloudforms_inventory.py script to query for the IP address or hostname field of the VM. One more thing: I skipped a lot of the security checks on certificates (probably not a good thing). It shouldn’t be difficult to alter the script and the Python requests configuration to point to your certificates for a more secure experience. A sketch of both changes follows below.
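Both changes should be small. Here is a hedged sketch of what they might look like, building on the inventory sketch above; the ipaddresses attribute and the CA bundle path are assumptions on my part, not confirmed values.

```python
# Sketch: query the IP addresses alongside the name, and point requests at a
# CA bundle instead of skipping certificate checks. The "ipaddresses"
# attribute and the bundle path below are assumptions, not confirmed values.
import requests

resp = requests.get(
    "https://cloudforms.example.com/api/vms",
    params={"expand": "resources",
            "attributes": "name,power_state,ipaddresses"},
    auth=("admin", "changeme"),
    verify="/etc/pki/tls/certs/cloudforms-ca.pem",  # hypothetical CA bundle
)
resp.raise_for_status()
```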

Enjoy!




Ansible and CloudForms: Do you want to Deploy More Stacks Faster? Sure, We All Do!

Do you want to deploy more stacks faster? Sure, we all do.

By integrating Ansible Tower, Red Hat CloudForms, and Red Hat Satellite, it’s possible to deploy stacks faster and more securely, and to manage them after they are deployed. In this post I’ll give a brief demonstration of what is possible when these systems are integrated. But first …

A Quick Background

For those that haven’t been following, Red Hat recently announced that it has entered an agreement to acquire Ansible. Ansible is a leading open source IT automation project that delivers an enterprise solution for IT automation via Ansible Tower.

CloudForms is a hybrid cloud management platform based on the ManageIQ project (ManageIQ was acquired by Red Hat in 2012). CloudForms provides a myriad of functions, including:

  • Monitoring and tracking
  • Capacity management and planning
  • Resource usage and optimization
  • Workload life-cycle management
  • Policies to govern access and usage
  • Self-service portal and catalog
  • Controls to manage requests
  • Quota enforcement and usage
  • Chargeback and cost allocation
  • Automated provisioning

Finally, there is Red Hat Satellite, a platform for managing Red Hat systems. While Satellite provides many capabilities for managing systems, in this demonstration I focus on Satellite’s ability to provide trusted content and track entitlements.

By combining these three powerful platforms it is possible to provide new levels of functionality to users who want to securely automate their IT environments and do it all with open source.

Integration Workflow

The diagram below illustrates one integration now possible that would allow users to combine the power of CloudForms, Red Hat Satellite, and Ansible Tower.

[Diagram: integration workflow across CloudForms, Red Hat Satellite, and Ansible Tower]

Step 0 – We assume that you have already created a playbook in GitHub and added it via an SCM synchronization job in Ansible Tower. We also assume you have synchronized trusted content to a Red Hat Satellite.

Step 1 – A user requests a self-service catalog item from CloudForms.

Step 2 – CloudForms connects to the provider and creates the virtual machine(s).

Step 3 – Upon successful creation of the virtual machines, CloudForms reaches out to Ansible Tower and creates host(s) in the inventory to match the virtual machine(s) created. It also initiates a job on Ansible Tower to execute the appropriate playbook(s).

Step 4 – The virtual machine(s) subscribes to the Satellite and pulls trusted content from it as part of the playbook.

This is a high-level overview of a tenant workflow.
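To make Step 3 a little more concrete, here is a hedged Python sketch of the kind of calls an automate method might make against the Tower REST API of that era (/api/v1/). The Tower URL, credentials, inventory ID, and job template ID are all placeholders; the actual automate method linked at the end of this post is the authoritative version.

```python
# Sketch of Step 3: register the newly provisioned VM as a host in a Tower
# inventory, then launch the job template that runs the playbook against it.
# URL, credentials, and IDs are illustrative placeholders.
import requests

TOWER_URL = "https://tower.example.com"  # hypothetical Tower server
AUTH = ("admin", "changeme")
INVENTORY_ID = 1                         # inventory that will hold the host
JOB_TEMPLATE_ID = 5                      # job template wrapping the playbook


def add_host_and_launch(vm_address):
    # Create a host record matching the virtual machine CloudForms created.
    host = requests.post(
        TOWER_URL + "/api/v1/hosts/",
        json={"name": vm_address, "inventory": INVENTORY_ID},
        auth=AUTH,
        verify=False,  # lab shortcut; point at a CA bundle in production
    )
    host.raise_for_status()

    # Launch the job template so the playbook configures the new machine.
    job = requests.post(
        TOWER_URL + "/api/v1/job_templates/%d/launch/" % JOB_TEMPLATE_ID,
        auth=AUTH,
        verify=False,
    )
    job.raise_for_status()
    return job.json().get("job")


if __name__ == "__main__":
    print("Launched Tower job %s" % add_host_and_launch("192.0.2.10"))
```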

A Short Demonstration


The video above shows something I cooked up in the lab to illustrate the integration workflow described in the previous section. In this case, a user selects a self-service catalog item in CloudForms for a web server. CloudForms provisions a virtual machine (from a template) on Red Hat Enterprise Virtualization and passes an SSH key into the machine via cloud-init. Then CloudForms reaches out to Ansible Tower, adds the VM (by IP address) to an inventory, and kicks off a job.

The nice thing about this approach is that, by using an Ansible playbook to automate the deployment of a web server, it would be very easy to create another self-service catalog item on vSphere, OpenStack, or another supported infrastructure provider and recreate the same workload. With CloudForms, Ansible, and Satellite, users can deploy via workflow where needed or embrace model-driven deployment to increase re-usability across a wide range of infrastructures where possible.

Of course, it would be really nice to integrate identity management into this demonstration so that credentials are not injected via cloud-init and so that credentials in Ansible Tower are centralized in a proper IdM system. Integration with a proper IPAM system would also be nice (but hey, this is just a demo).

Summary

I hope this demonstration gave you an idea of how Ansible Tower complements Red Hat CloudForms and Red Hat Satellite to allow for automation of stacks. It should also be noted that the more automation that takes place in Ansible playbooks, the more portable (and presumably more maintainable) it is for end users.

Source

As is usually the case with most things I’ve written … you should have a professional software developer, creative person, or lawyer rewrite it as appropriate.

Ansible Playbook

Cloud-init script

CloudForms Automate Method (LabCorp/Infrastructure/VM/Provisioning/StateMachines/Methods/redhat_PostProvision)

Contents of /root/rhsm.sh on the VM template (referenced in Ansible Playbook)