Optimizing IT

In a previous post I outlined the common problems organizations face across both their traditional IT environments (sometimes called mode-1) and new emerging IT environments (sometimes called mode-2). These included:

  • Accelerating the delivery of services in traditional IT environments to satisfy customer demands
  • Optimizing traditional IT environments to increase efficiency
  • Creating new development and operations practices for emerging IT environments to innovate faster
  • Delivering public cloud-like infrastructure that is scalable and programmable

I’d like to show you a quick demonstration of how Red Hat is helping optimize traditional IT environments. There are many ways in which Red Hat does this, from discovering and right-sizing virtual machines to free up space in virtual datacenters, to creating a standard operating environment across heterogeneous infrastructure to reduce complexity. In this demonstration, however, I’ll focus on how Red Hat enables organizations to migrate workloads to their ideal platform. In the demonstration video below you’ll see how, using tools found in Red Hat Enterprise Virtualization and Red Hat Enterprise Linux OpenStack Platform in conjunction with automation and orchestration from Red Hat CloudForms, it’s possible to migrate virtual machines in an automated fashion from VMware vSphere to either RHEV or Red Hat Enterprise Linux OpenStack Platform. Keep in mind that these tools assist with the migration process but still need to be tailored to your specific environment. That said, once designed they can greatly reduce the time and effort required to move large numbers of virtual machines.

I hope you found this demonstration useful!

P.S. – If you are a Red Hatter or a Red Hat Partner, this demonstration is available in the Red Hat Product Demo System and is named “Red Hat Cloud Suite Migration Demonstration”.


Accelerating Service Delivery Demonstration

In a previous post I outlined the common problems organizations face across both their traditional IT environments (sometimes called mode-1) and new emerging IT environments (sometimes called mode-2). These included:

  • Accelerating the delivery of services in traditional IT environments to satisfy customer demands
  • Optimizing traditional IT environments to increase efficiency
  • Creating new development and operations practices for emerging IT environments to innovate faster
  • Delivering public cloud-like infrastructure that is scalable and programmable

I’d like to show you a quick demonstration of how Red Hat is helping accelerate service delivery for traditional IT environments. Developers and line-of-business users request stacks daily to create new services or test functionality. Each of these requests results in a lot of work for operations and security teams. From creating virtual machines, to installing application servers, to securing the systems, these tasks take time away from valuable resources that could be doing something else (like building out the next generation platform for development and operations). Many solutions exist for automating the deployment of virtual machines or of the applications inside those virtual machines, but Red Hat is uniquely positioned to automate both. By leveraging Red Hat CloudForms in conjunction with Red Hat Satellite it is possible to create a re-usable description of your application that can be automatically deployed via self-service, with governance and controls, across a hybrid cloud infrastructure. In the demonstration below we show the self-service automated deployment of a WordPress application consisting of HAProxy, two WordPress application servers, and a MariaDB database across both VMware vSphere and Red Hat Enterprise Virtualization.
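For readers who want to poke at the self-service piece outside the demo video, here is a minimal sketch (not the demo’s actual implementation) of ordering a CloudForms service catalog item over its REST API with Python. The appliance hostname, credentials, and catalog/template IDs are illustrative assumptions, and endpoint paths can differ between CloudForms versions:

```python
import requests

CF_HOST = "https://cloudforms.example.com"   # illustrative CloudForms appliance
AUTH = ("admin", "password")                 # illustrative credentials

# Order a service catalog item (for example, the WordPress stack) by POSTing
# an "order" action against its service template. IDs are placeholders.
catalog_id, template_id = 1, 2
resp = requests.post(
    "{0}/api/service_catalogs/{1}/service_templates/{2}".format(CF_HOST, catalog_id, template_id),
    json={"action": "order"},
    auth=AUTH,
    verify=False,  # lab setting only; use proper certificates in production
)
resp.raise_for_status()
# The response describes the resulting service request, which can then be
# tracked through the CloudForms requests queue.
print(resp.json())
```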

P.S. – If you are a Red Hatter or a Red Hat Partner this demonstration is available in the Red Hat Product Demo System under the name “Red Hat Cloud Suite Deployment Demo”.


Can Strategic Design Improve the Design and User Experience Across Open Source Communities?

If you speak to anyone involved in information technology, there is little debate that the open source development model is the de facto development model for the next generation of technology. Cloud infrastructure with OpenStack, continuous integration with Jenkins, containers with Docker, automation with Ansible – these areas are all being transformed by technologies delivered via the open source development model. Even Microsoft has begun to embrace the open source development model (albeit sometimes only partially).

The use of an open source development model as a default is good news for everyone. Users (and organizations) gain access to the greatest amount of innovation and can participate in development. At the same time, developers are able to increase their productivity and gain access to more collaborators. As organizations look to adopt technologies based on open source, they often realize that it’s easier to purchase open source software than to obtain it directly from the community. There are many reasons, including support, cost, focus on core business, and even indemnification, that make it beneficial to purchase rather than consume directly from the community. Red Hat (where I work) is an example of a company that provides software in exactly this way.

This model works well when organizations are using a single product based on a single open source community project. However, the open source development model poses challenges to creating a cohesive design and user experience across multiple products derived from open source projects. This problem ultimately affects the design and experience of the products organizations buy. In order for open source to continue its success in becoming the de facto standard, the problem of coordinating design and user experience across multiple communities needs to be solved.

The challenge of solving this problem should not be underestimated. If you think influencing a single open source community is difficult, imagine how challenging it is to influence multiple communities in a coordinated manner. Developers in communities are purpose driven, intensely focused on their problem, and committed to incremental progress. These developers justifiably shy away from grand plans, large product requirements documents, and any forceful attempts to change what they are building. After all, that’s how monolithic, proprietary software lost to the fast-moving and modular open source development model.

What is needed is a way to illustrate to development and community leaders how they can better solve their problem by working well with other communities, and to let those leaders conclude on their own that they should work in that manner.

By practicing strategic design it may be possible to provide the clarity of vision and reasoning required to effectively influence multiple open source communities to work more cohesively. Strategic design is the application of future-oriented design principles in order to increase an organization’s innovative and competitive qualities. Tim Brown, CEO of IDEO, summarizes some of the purposes of design thinking (a major element of strategic design) really well in his Strategy by Design article.

At Red Hat we have recently begun testing this theory by organizing a design practice trial. The design practice trial team consisted of experts on various technologies from the field and engineering along with user experience experts.


Part of the workshop trial team (During the final exercise on Day 3)

The premise for our design practice trial was simple:

  • Identify a common problem taking place across our customers using multiple open source products.
  • Analyze how the problem can be solved using the products.
  • Conceptualize an ideal user experience starting from a blank slate.
  • Share what was discovered with community leaders and product managers to assist with incremental improvement and influence direction toward the ideal user experience.

Identify
The common problem we found was organizations struggling to design re-usable services effectively. The persona we identified for our trial was the service designer. The service designer identifies, qualifies, builds, and manages entries in a service catalog for self-service consumption by consumers. The service designer would like to easily design and publish re-usable entries in a catalog from the widest range of possible items.

Analyze
The products we used to analyze the current user experience are:

  • OpenStack to deliver Infrastructure as a Service (IaaS)
  • OpenShift to deliver Platform as a Service (PaaS)
  • ManageIQ (CloudForms) to deliver a Cloud Management Platform (CMP)
  • Pulp and Candlepin (Satellite) to deliver content mirroring and entitlement management
  • Ansible (Tower) to deliver automation

Looking across our communities we find lots of “items” in each project. OpenStack provides Heat templates, OpenShift provides Kubernetes templates, Ansible provides playbooks, and so on. In addition, the service designer would likely want to mix in items from outside of these projects for use in a catalog entry, including public cloud and SaaS services.

What would it look like for the service designer to assemble all these items into an entry that could be ordered by a consumer that leads to a repeatable deployment? During a 3-day workshop we put ourselves in the shoes of the service designer and attempted to design an application.


The Example Application

The team was able to design the catalog entry in about 8 hours and we feel we could have done this even faster if we weren’t so new to Ansible Tower.

Here is a quick video demonstration of deploying this application from the catalog as a finished product.


I’ll spare everyone the detailed results here (if you work at Red Hat, send me a note and I’ll link you to our more complete analysis), but the exercise of analyzing the current solution allowed us to identify many areas for incremental improvement when using these products together to satisfy the use case of the service designer. We also identified longer-term design questions that need to be resolved between the products (and ultimately, the upstream projects).

Conceptualize
What would the ideal user experience be for this? Another exercise we performed in our workshop was designing an ideal user experience starting from a blank slate. This included challenging assumptions while still defining some constraints of the concept. It proved to be challenging and eye-opening for everyone involved. Starting from scratch with a design and not worrying about the underlying engineering that would be required is difficult for engineering-minded individuals. We began developing what we believe the ideal user experience for service design would be. These findings will be worked into a workflow and low-fidelity mockups to illustrate the basics of the experience.

Share
As next steps we will share our findings with community leaders and product managers in the hopes that it positively impacts the design and user experience. We will also continue meeting with customers who we believe suffer from the service design problem to continue to refine our proposed ideal design and to help them understand how our current solution works. If all goes well, we might even attempt a prototype or mock user interface to start. There are plenty of other angles we need to address, such as the viability of customer adoption and customer willingness to pay for such a concept. For a 3-day workshop (and lots of prep before), however, we feel we are off to a good start.

Will the community leaders accept our assertions that they can deliver a better experience by working together? Will any of the concepts and prototypes make it into the hands of customers? This remains to be seen. My hunch is that none of the communities will accept the exact conceptual design and user experience put forth by the Design Practice, but that the conceptual design and user experience will positively influence the design and user experience within the open source communities and ultimately make its way to customers via Red Hat’s products and solutions. In any case, the more time we spend practicing design, the better the lives of our customers will become.

-James
@jameslabocki

 

Ansible Tower Dynamic Inventory from CloudForms

In my previous post I showed an example of how, as part of provisioning a service, CloudForms could be integrated with Ansible Tower to provide greater re-usability and portability of stacks across multiple infrastructure/cloud providers. Now I would like to show an example of how Ansible Tower’s Dynamic Inventory feature can be used in conjunction with the inventory in CloudForms to populate an inventory that job templates can be executed against. Right now CloudForms has hosts and virtual machines in its inventory that would be useful to Ansible, but in the next version container support will allow CloudForms to pass Ansible an inventory of containers as hosts as well (that will be really interesting).

For those not familiar, Dynamic Inventory is a feature in Ansible Tower that allows users to maintain an inventory of hosts based on the data in an external system (LDAP, Cobbler, CMDBs, EC2, etc.) so they can integrate Ansible Tower into their existing environment instead of building a static inventory inside Ansible Tower itself. Since CloudForms can maintain discovery of existing workloads across many providers (vSphere, Hyper-V, RHEV, OpenStack, and EC2 to name a few), it seems natural that it would be a great source for providing a dynamic inventory to Ansible for execution of job templates.

I authored the ansible_tower_cloudforms_inventory.py script to allow users to build a dynamic inventory in Ansible Tower from the CloudForms virtual machine inventory. This means that any time a user provisions a VM on vSphere, Hyper-V, OpenStack, RHEV, EC2, or another supported platform, CloudForms will automatically discover that VM and Ansible Tower will add it to an inventory so it can be managed via Ansible Tower.

To use the script simply navigate to Ansible Tower’s setup page and select “Inventory Scripts”.

From there select the icon to add a new inventory script. You can now add the inventory script, name it, and associate it with your organization.


You should now see the added inventory script.


Now within the inventory of your choosing, add a new group named “Dynamic_CloudForms”, change the source to “Custom Script”, and select your newly added script under “Custom Inventory Script”.


Your new group should be added to your inventory.


One last thing needed by the script is the cloudforms.ini file. This file holds things like the hostname of your CloudForms instance, the username and password to use for authentication, and other information. You’ll need to place this on your Ansible Tower server at /opt/rh/cloudforms.ini. I also found I had to install the Python requests library on the Ansible Tower server (`yum install python-pip -y; pip install requests`).

Now you should be able to run the “start sync process” manually from the inventory screen (it’s the icon that looks like two arrows pointing in opposite directions). You could also schedule this sync to run on a recurring basis.


And voila! Your inventory has been populated with names from CloudForms. The script takes into account that some VMs may be powered down or may actually be templates, and adds only those machines with a power_state of “on”.

It should be noted that the inventory script currently adds the VM based on its name in CloudForms. This is because smart state analysis isn’t set up in my appliance and I don’t have any other fields available to me. What should be done eventually (when smart state is working) is changing the ansible_tower_cloudforms_inventory.py script to query for the IP address or hostname field of the VM. One more thing … I skipped a lot of the security checks on certificates (probably not a good thing). It shouldn’t be difficult to alter the script and the Python requests configuration to point to your certificates for a more secure experience.
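For those curious what such a script looks like under the hood, here is a minimal sketch of a Tower custom inventory script that reads cloudforms.ini and asks the CloudForms REST API for powered-on virtual machines. It is not the actual ansible_tower_cloudforms_inventory.py; the ini option names and API attributes are illustrative, and certificate verification is skipped just as described above:

```python
#!/usr/bin/env python
# Minimal sketch of a Tower custom inventory script backed by CloudForms.
# Tower invokes the script with --list and expects JSON inventory on stdout.
import json
import ConfigParser  # configparser on Python 3

import requests

def main():
    # Read connection details; the path matches the post, option names are illustrative.
    config = ConfigParser.SafeConfigParser()
    config.read("/opt/rh/cloudforms.ini")
    host = config.get("cloudforms", "host")
    user = config.get("cloudforms", "user")
    password = config.get("cloudforms", "password")

    # Ask CloudForms for its VM inventory, expanding resources so that the
    # name and power_state attributes come back in a single call.
    resp = requests.get(
        "https://{0}/api/vms?expand=resources&attributes=name,power_state".format(host),
        auth=(user, password),
        verify=False,  # skipping certificate checks, as noted above
    )
    resp.raise_for_status()

    # Keep only powered-on machines and emit Tower's expected inventory structure.
    vms = [vm["name"] for vm in resp.json().get("resources", [])
           if vm.get("power_state") == "on"]
    inventory = {"cloudforms": {"hosts": vms}, "_meta": {"hostvars": {}}}
    print(json.dumps(inventory))

if __name__ == "__main__":
    main()
```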

Enjoy!

 

 


Ansible and CloudForms: Do you want to Deploy More Stacks Faster? Sure, We All Do!

Do you want to deploy more stacks faster? Sure, we all do.

By integrating Ansible Tower, Red Hat CloudForms, and Red Hat Satellite it’s possible to deploy stacks faster and more securely, and to manage them after they are deployed. In this post, I’ll give a brief demonstration of what is possible when these systems are integrated. But first …

A Quick Background

For those that haven’t been following, Red Hat recently announced that it has entered an agreement to acquire Ansible. Ansible is a leading open source IT automation project and delivers an enterprise solution for IT automation via Ansible Tower.

CloudForms is a hybrid cloud management platform based on the ManageIQ community (developed by ManageIQ, which Red Hat acquired in 2012). CloudForms provides a myriad of functions, including:

  • Monitoring and tracking
  • Capacity management and planning
  • Resource usage and optimization
  • Workload life-cycle management
  • Policies to govern access and usage
  • Self-service portal and catalog
  • Controls to manage requests
  • Quota enforcement and usage
  • Chargeback and cost allocation
  • Automated provisioning

Finally, there is Red Hat Satellite, a platform for managing Red Hat systems. While Satellite provides many capabilities for managing systems, in this demonstration I focus on Satellite’s ability to provide trusted content and to track entitlements.

By combining these three powerful platforms it is possible to provide new levels of functionality to users who want to securely automate their IT environments and do it all with open source.

Integration Workflow

The diagram below illustrates one integration now possible that would allow users to combine the power of CloudForms, Red Hat Satellite, and Ansible Tower.


Step 0 – This assumes that you have already created a playbook in GitHub and added it via an SCM synchronization job in Ansible Tower. It also assumes you have synchronized trusted content to a Red Hat Satellite.

Step 1 – A user requests a self-service catalog item from CloudForms.

Step 2 – CloudForms connects to the provider and creates the virtual machine(s).

Step 3 – Upon successful creation of virtual machines CloudForms reaches out to Ansible Tower and creates host(s) in the inventory to match the virtual machine(s) created. It also initiates a job on Ansible Tower to execute the appropriate playbook(s).

Step 4 – The virtual machine(s) subscribes to the Satellite and pulls trusted content from it as part of the playbook.

This is a high-level overview of the tenant workflow.
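To give a flavor of what step 3 can look like, here is a minimal sketch that adds a freshly provisioned virtual machine to a Tower inventory and launches a job template through the Ansible Tower REST API. This is only an illustration of the pattern rather than the automate code used in the demo; the Tower hostname, credentials, inventory and template IDs are assumptions, and the paths shown are the v1 API endpoints, which may differ in newer Tower releases:

```python
import requests

TOWER = "https://tower.example.com"      # illustrative Tower server
AUTH = ("admin", "password")             # illustrative credentials

# Step 3a: add the new VM to an existing Tower inventory by IP address.
host = requests.post(
    TOWER + "/api/v1/hosts/",
    json={"name": "192.0.2.10", "inventory": 1, "enabled": True},
    auth=AUTH,
    verify=False,  # lab setting; use real certificates in production
)
host.raise_for_status()

# Step 3b: launch the job template that runs the web server playbook.
# Passing a limit assumes the template prompts for a limit on launch.
job = requests.post(
    TOWER + "/api/v1/job_templates/5/launch/",
    json={"limit": "192.0.2.10"},
    auth=AUTH,
    verify=False,
)
job.raise_for_status()
print("Launched Tower job", job.json().get("job"))
```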

A Short Demonstration

  

The video above shows something I cooked up in the lab to illustrate the integration workflow described in the previous section. In this case, a user selects a self-service catalog item in CloudForms for a web server. CloudForms provisions a virtual machine (from a template) on Red Hat Enterprise Virtualization and passes an SSH key into the machine via cloud-init. Then CloudForms reaches out to Ansible Tower, adds the VM (by IP address) to an inventory, and kicks off a job manually.

The nice thing about this approach is that by using an Ansible playbook to automate the deployment of a web server it would be very easy to create another self-service catalog item on vSphere, OpenStack, or other supported infrastructure provider and recreate the same workload. With CloudForms, Ansible, and Satellite users can deploy via workflow where needed or embrace model driven deployment to increase re-usability across a wide range of infrastructures when possible.

Of course, it would be really nice to integrate identity management into this demonstration so that credentials are not being injected via cloud-init and so credentials in Ansible Tower are centralized into a proper IDM system. Also, integration into a proper IPAM system would be nice (but hey, this is just a demo).

Summary

I hope this demonstration provided you with an idea of how Ansible Tower complements Red Hat CloudForms and Red Hat Satellite to allow for automation of stacks. Another key point: the more automation that takes place in Ansible playbooks, the more portable (and presumably more maintainable) the result is for end users.

Source

As is usually the case with most things I’ve written … you should have a professional software developer, creative person, or lawyer re-write it as appropriate.

Ansible Playbook

Cloud-init script

CloudForms Automate Method (LabCorp/Infrastructure/VM/Provisioning/StateMachines/Methods/redhat_PostProvision)

Contents of /root/rhsm.sh on the VM template (referenced in Ansible Playbook)

DevOps in a Bi-Modal World

The business environment has never been more competitive and disruptive than it is today. Businesses need to come to terms with three realities:

  1. They need a continuous competitive advantage

Just ask Kodak, which has seen the camera business transform from a standalone device to a feature on every mobile phone, with new players like Snapfish, Shutterfly, and Chatbooks creating new ways of engaging with markets. If you don’t have a way of continually developing new competitive advantages, you will not be relevant for long.

  2. They are a software company

Bank of America is not just a bank; it is a transaction processing company. Exxon Mobil is not only an oil and gas company; it is a GIS company. With each passing day, Walgreens’ business is more reliant on electronic health records.

  3. Their competition is everywhere

Ten years ago, if I asked you who the biggest competitive threats to FedEx were, names like UPS and DHL might have come to mind. Increasingly, FedEx, UPS, and DHL face threats from Uber, Walmart, Amazon, and others who may enter their logistics market with new ways of reaching customers.

What do businesses need to do given these three realities?

To quote Mark Zuckerberg, they need to “Move fast and be stable”. Moving fast and being stable translates to developing new services more quickly, services that can be scaled to meet fast-growing demand if needed but that also carry an extremely low cost of failure should they not work. In other words, cheap experiments need to be able to become global successes.

The scientists conducting these cheap experiments are software developers. Lines of business naturally turn to their development teams to request new services at an increasingly faster rate. The problem is, developers can’t obtain those environments fast enough from operations because traditional processes and inflexible infrastructure and applications stand in the way. It’s no surprise then, according to a 2012 McKinsey and Company study [1], that software delivery in the enterprises surveyed was 45% over budget, 7% over schedule, and delivered 56% less value than expected.

This is no secret to businesses and they are looking to new methods and designs to help improve these metrics. In fact:

  • Over 90 percent are running or experimenting with Infrastructure as a Service [2].
  • Greater than 70 percent expect to use Platform as a Service in their organization [3].
  • More than 90 percent expect new investments in DevOps-enabling technologies in the next two years [4].
  • Over 70 percent are using or anticipate using containers for cloud applications [5].

Businesses are turning to new development and operations processes, new cloud infrastructures, and application methodologies that are conducive to these new processes and infrastructures. Looking at one of the leaders in public cloud, Amazon Web Services, we see they use these same principles and designs to achieve thousands of deployments per day (as frequently as one every 12 seconds) with a very low outage rate caused by these releases.

At first glance it would appear that enterprises could simply yell “DevOps and Cloud to the rescue!” and solve their problem of deploying faster on scalable infrastructure, but the reality is far from that. Enterprises have existing assets and investments, and many of these are not going away anytime soon. In fact, the existing systems and processes most likely power the very core of the business and cannot simply be replaced overnight, nor would they fit the paradigm of moving quickly and experimenting. Gartner coined the term Bimodal to describe this approach of two modes of IT delivery – one focused on agility and speed and the other on stability and accuracy [8].

Gartner has also recognized an approach that enterprises can take that would allow them to maximize the use of their existing assets. In their research “DevOps in the Bimodal Bridge” [7] they suggest an approach where the patterns and practices of DevOps can be applied to existing assets (mode 1) to make them more agile and efficient.

I have observed this trend and I believe most organizations are trying to address four key problems across their emerging bi-modal world.

In mode-1 they are looking to increase relevance and reduce complexity. In order to increase relevance they need to deliver environments for developers in minutes instead of days or weeks. In order to reduce complexity they need to implement policy driven automation to reduce the need for manual tasks.

In mode-2 they are looking to improve agility and increase scalability. In order to improve agility they need to create more agile development and operations processes and embrace new application architectures that allow for greater rates of change through decreased dependencies. In order to increase scalability they need to implement infrastructure that uses an asynchronous design and is entirely API-driven, changing the admin-to-host ratio from a linear to an exponential model.

In order to make these examples more concrete, let’s look at each of them in more detail.

Increasing Relevance by Accelerating Service Delivery

Delivering development and test environments to developers in many enterprises generally starts with either a request to a service management system or a tap on the shoulder of a system administrator. This usually depends on the size of the organization and the maturity of the IT department. Either way, once requests fall into a service management system there are often many teams that need to perform tasks to deliver the environment to the developer. These might include virtual infrastructure administrators, systems administrators, and security operations. In larger organizations you could expect to see disaster recovery teams, networking teams, and many others involved in this process too. Again, depending on the maturity of the organization, how all of this is coordinated could range from taps on shoulders to passing tickets around in a service management system.

At best, each team takes minutes or hours to respond and perform some manual tasks, and often the person who requests the service must be asked follow-up questions (“Are you sure you need 16GB of RAM?”, “What version of Java do you need for this?”). The result is lots of highly skilled people spending lots of time, and very slow delivery of the environment to the developer. Multiply this by the number of developers in an organization and the number of requests for environments, and you can understand why traditional IT processes and systems are struggling to maintain relevance.

A solution for this problem is to introduce a service designer into the process (you may be familiar with this from ITIL) that can enable self-service consumption of everything developers need. The designer works with all stakeholders including virtual infrastructure administrators, system administrators, and security operations to obtain requirements. Then, the designer builds the necessary configuration management content and couples it with a service catalog item. By invoking this catalog item the environment can be deployed automatically across any number of providers including virtualization providers, private, or public cloud.

The result of this solution is that all the teams responsible for delivering an environment are now free to do more valuable work (like working with development to design operations processes that work as part of development instead of being bolted on after). It also removes human error from the equation, and most importantly, it delivers the environment in significantly less time. We have seen upwards of a 95 percent improvement in delivery times in many of our customers [9].

Reducing Complexity by Optimizing IT

Speeding up delivery of environments to developers or end users is a great way to make IT more relevant, but a lot of what IT is spending their time on is the day-to-day management of those environments. If IT is spending so much time on day-to-day tasks how can we expect them to deploy the next generation of scalable and programmable infrastructure or have time to work with development teams during early stages of development to increase agility?

I have found that many virtual infrastructure administrators spend time on several common tasks that should be largely automated through policy.

First are policies around workload placement. Often one virtual infrastructure cluster will be running hot while another is completely cold. This leads to operations teams being inundated by calls from the owners of applications running on the hot cluster asking why response times are poor. Automating this balancing through control policies can alleviate the problem and keep virtual infrastructure administrators free to do other things.

Next is the ability to quickly move workloads between different infrastructures. This has become increasingly important as organizations look to adopt scale-out IaaS clouds. Operations leadership realizes that if they can identify workloads that do not need to run on (typically) more expensive virtual infrastructure, they can save money by moving those workloads to their IaaS private cloud. This migration is typically a manual process, and it’s also difficult to even understand which workloads can be moved. By having a systematic and automated way of identifying and migrating workloads, enterprises can save time and move workloads quickly to reduce costs.

Yet another issue is ensuring compliance and governance requirements are met, particularly with workloads running on new infrastructures, like an OpenStack based private cloud. Not knowing what users, groups, data, applications, and packages reside on systems running across a heterogeneous mix of infrastructure presents a large risk and operations teams often have the responsibility and obligation of ensuring this risk is minimized. By being able to introspect workloads across platforms operations teams can gain insight into exactly what users, data, and packages are running on systems and leverage the migration capabilities I mentioned previously to make sure systems are running on appropriate providers.

Finally, since IT has often become a broker of public cloud services, it’s important that they can account for costs and place workloads in appropriate public cloud regions to control spend while ensuring service levels for end users are maintained. If developers are based in Singapore, then we should leverage public cloud infrastructure in that location instead of deploying to a more expensive, higher-latency public cloud region in Tokyo.

By implementing policy based automations our customers have seen large improvements in their resource utilization and a reduction in CapEx and OpEx per workload managed [10].

Improving Agility by Modernizing Development and Operations

With resources now freed from handling each and every inbound request for an environment, and confident that those environments are running efficiently and securely on the right providers, operations teams can begin to work with development teams to design new processes for their cloud-native applications.

These newly designed processes and cross-functional team structures, combined with a platform that supports running the broadest set of languages and frameworks within microservices-based architectures, will enable development and operations teams to achieve higher release frequencies. By utilizing microservices and standardized platforms and configurations, these new applications will allow for independent release and scaling of the application’s components.

This results in an increased success rate of change, faster cycle time, and the ability to scale specific services independently, making the life of both development and operations teams easier and allowing them to meet the needs of the line of business. We have experience doing this with very large software development organizations [11].

Scalability with Programmable Infrastructure

As the agility of development and operations processes improves and release frequency increases, so too does the demand for more scalable infrastructure to run those releases on. Operations teams face the challenge of delivering infrastructure that will scale to meet the demand of this ever-growing number of applications. The last thing the head of operations wants to explain to the management team is why an extremely successful new application hit a wall in the maximum number of users it could support. This simply can’t happen. Unfortunately, the current infrastructure is not scalable, from either a financial or a technical standpoint.

One option might be to build out a scale-out infrastructure, perhaps based on OpenStack, the leading open source project for Infrastructure-as-a-Service. However, the operations team doesn’t want to spend its time taking open source code and making it consumable and sustainable for the enterprise. It doesn’t have the resources to test and certify that OpenStack will work with each new piece of hardware it brings in. It also can’t afford to maintain the code base for long periods of time with the resources available. Finally, OpenStack is missing key features that operations needs, and they don’t want to develop those in house either.

What operations really needs is a way to minimize cost and increase scale through the use of commodity hardware and a massively scalable distributed architecture coupled with the enterprise management features required to operate that infrastructure and a stable, tested, certified way of consuming the open source projects that make up that infrastructure. By having this, operations can deploy scale-out infrastructure in multiple locations and still aggregate management functions like chargeback, utilization, governance, and workflows into a single logical location. Many of our customers have found this solution beneficial in reducing cost and ensuring stability at scale [12].

Introducing Red Hat Cloud Suite

Red Hat Cloud Suite is a family of suites that brings together Red Hat’s award-winning products in a consistent way to solve specific problems. It allows IT to accelerate service delivery and optimize their existing assets while building their next generation infrastructure and application platforms to support massive scalability and more agile development and operations processes. In other words, it meets them where they are and lays the foundation for where they want to go.

A Different Approach

It should come as no surprise that Red Hat is not the only company solving these problems. Red Hat is, however, one of the few companies that can solve all of these problems because of its broad portfolio of technologies and expertise. Most think of Red Hat as having the largest percentage of the paid Linux market share. That is true, but Red Hat has been adding to its portfolio and has acquired expertise and industry-leading technology ranging from software-defined storage [13][14] to mobile development platforms [15]. These offerings place Red Hat in a class shared only with Microsoft in terms of depth of capability.

An Important Difference

Along with this depth of expertise and capability comes an approach that sets Red Hat apart. Red Hat is the only vendor that uses an open source development model for all of the solutions it delivers. This is important for customers because the world of cloud infrastructure, applications, and DevOps is built entirely on open source software. By having a strict open-source-only mentality, customers have access to the greatest amount of innovation and can be assured that as technologies change they can adopt them more easily, because Red Hat can adopt and deliver those technologies. Two great examples of this are how Red Hat adopted the KVM hypervisor [16] and how it embraced and delivered its container platform with support for Docker and Kubernetes [17] – leading open source projects that became popular in a short amount of time. Red Hat is committed to the open source development model, so much so that it even creates communities when it acquires non-open-source-licensed technologies [18]. Customers should know that when they leverage a solution from Red Hat it is based entirely on open source, leading to greater access to innovation and lower exit costs.

Technical Capabilities are Important Too

While philosophical differences are important for ensuring that the right long term decisions have been made, Red Hat is also at the forefront of innovation in cloud infrastructure, applications, and DevOps tools.

True Hybrid Support

The term hybrid cloud has often been overused and abused, but it is important. Enterprises need to be able to run workloads across the four major deployment models that exist today: physical, virtual, private, and public cloud. Equally as important as the deployment model is the ability to support multiple service models, such as Infrastructure-as-a-Service, Platform-as-a-Service, or even bare metal, virtual machines running on scale-up virtual infrastructure, and public cloud services. When most vendors claim they support “hybrid” cloud they are typically limited to managing only hybrid deployment models. Red Hat supports both hybrid deployment models and hybrid service models. This is important to both development and operations teams. For developers, it means being able to develop on the broadest choice of languages and frameworks. They could use an Oracle database running on bare metal or a virtual machine, JBoss EAP running on virtual machines on OpenStack, combined with Node.js and Ruby running in containers on OpenShift. They are not constrained to a single service model that doesn’t give them everything they need.

Using Big Data to Optimize IT

Red Hat has been supporting Linux for a long time. In fact, we’ve been supporting Red Hat Enterprise Linux for over 13 years, since RHEL AS 2.1’s release in 2002 [16]. There are over 700 Red Hat Certified Engineers in our support organization, and they’ve documented over 30,000 solutions while resolving over 1 million technical issues. The Red Hat customer portal has won plenty of awards for connecting customers searching for the resolution to an issue with the right technical solution. With Red Hat Access Insights, Red Hat’s new predictive analytics service, connecting support data to recommendations is going to reach a new level of ease of use. Users can send small amounts of data about their environment back to Red Hat, and it will be compared to optimal configurations to find opportunities to improve security, reliability, availability, and performance. This service is already available for Red Hat Enterprise Linux and will soon be available for all the technologies in Red Hat’s portfolio through Red Hat Cloud Suite.

An Easy On-ramp and Consistent Lifecycle

Deploying a private cloud is not an easy task. The list of platforms that need to come together, from configuration management, to storage, to Infrastructure-as-a-Service, to Platform-as-a-Service, is large. Each of these platforms has dependencies on its own sub-components. For example, generating new Docker images requires secure content, and that takes integration between the content management system and the image building services. Literally hundreds of these integrations are needed to build a fully functional private cloud. This usually results in one of two options:

  • Operations requiring lots and lots of time to deliver this private cloud.
  • An army of high priced consultants arriving to deliver and maintain a private cloud.

Neither of these options is an optimal result for IT. Red Hat Cloud Suite provides an easy on-ramp that allows a single person in operations to deploy a private cloud, and it provides the path for ongoing management of that private cloud. This allows developers to begin using the private cloud sooner and helps operations deliver it more quickly.

A Quick Summary

Here is a quick summary for those that just want the short version.

The World is Changing

  • Businesses need a continuous competitive advantage
  • All businesses are software companies
  • Competition is everywhere

IT Needs

  • To increase relevance and reduce complexity
  • To create more agile processes and build programmable & scalable infrastructure and platforms

Red Hat Helps

  • Accelerate delivery
  • Optimize for efficiency
  • Modernize development and operations
  • Deliver scalable infrastructure

Only Red Hat Delivers

  • Innovation in the form of pure open source solutions
  • Integration with world class testing, support, and certification

References

[1] http://www.mckinsey.com/insights/business_technology/delivering_large-scale_it_projects_on_time_on_budget_and_on_value

[2] http://www.forbes.com/sites/benkepes/2015/03/04/new-stats-from-the-state-of-cloud-report/

[3] http://www.forbes.com/sites/louiscolumbus/2013/06/19/north-bridge-venture-partners-future-of-cloud-computing-survey-saas-still-the-dominant-cloud-platform/

[4] DevOps, Open Source, and Business Agility. Lessons Learned from Early Adopters. An IDC InfoBrief, sponsored by Red Hat | June 2015

[5] http://www.redhat.com/cms/public/RH_dev_containers_infographic_v1_0430clean_web.pdf

[6] https://puppetlabs.com/2014-devops-report

[7] https://www.gartner.com/doc/3022020/devops-bimodal-bridge

[8] http://www.gartner.com/it-glossary/bimodal

[9] http://www.redhat.com/en/resources/union-bank-migrates-unix-and-websphere-red-hat-and-jboss-solutions

http://www.redhat.com/en/resources/g-able-improves-resource-allocation-red-hat-solutions

[10] http://www.redhat.com/en/resources/cbts-enhances-customer-service-red-hat-cloudforms

[11] www.openshift.com/customers

[12] http://www.redhat.com/en/resources/morphlabs-reinvents-cloud-services-enterprise-ready-iaas-solution

[13] http://www.redhat.com/en/about/press-releases/red-hat-acquire-inktank-provider-ceph

[14] http://www.redhat.com/en/about/blog/red-hat-to-acquire-gluster

[15] http://www.redhat.com/en/about/press-releases/red-hat-acquire-feedhenry-adds-enterprise-mobile-application-platform

[16] http://www.infoworld.com/article/2627019/server-virtualization/red-hat-drops-xen-in-favor-of-kvm-in-rhel-6.html

[17] https://blog.openshift.com/openshift-v3-platform-combines-docker-kubernetes-atomic-and-more/

[18] https://access.redhat.com/articles/3078

Red Hat Cloud Suite for Applications

For those following our recent announcement, I put together a short blog post that explains why Red Hat Cloud Suite for Applications is the only complete, open source, on-premises solution for accelerating application delivery at scale.

OpenStack Deployment and Automation

Here is the presentation that Thomas, Chris, and I gave today at the OpenStack Meetup in Sunnyvale. Thanks to the folks at Walmart Labs for hosting us!


A Technical Overview of Red Hat Cloud Infrastructure (RHCI)

I’m often asked for a more in-depth overview of Red Hat Cloud Infrastructure (RHCI), Red Hat’s fully open source and integrated Infrastructure-as-a-Service offering. To that end I decided to write a brief technical introduction to RHCI to help those interested better understand what a typical deployment looks like, how the components interact, what Red Hat has been working on to integrate the offering, and some common use cases that RHCI solves. RHCI gives organizations access to infrastructure and management to fit their needs, whether it’s managed datacenter virtualization, a scale-up virtualization-based cloud, or a scale-out OpenStack-based cloud. Organizations can choose what they need to run and re-allocate their resources accordingly.

[Diagram: RHCI overview]

RHCI users can choose to deploy either Red Hat Enterprise Virtualization (RHEV) or Red Hat Enterprise Linux OpenStack Platform (RHEL-OSP) on physical systems to create a datacenter virtualization-based private cloud using RHEV or a private Infrastructure-as-a-Service cloud with RHEL-OSP.

RHEV comprises a hypervisor component, referred to as RHEV-H, and a manager, referred to as RHEV-M. Hypervisors leverage shared storage and common networks to provide common enterprise virtualization features such as high availability, live migration, etc.

RHEL-OSP is Red Hat’s OpenStack distribution that provides massively scalable infrastructure by providing the following projects (descriptions taken directly from the projects themselves) for use on one of the largest ecosystems of certified hardware and software vendors for OpenStack:

Nova: Implements services and associated libraries to provide massively scalable, on demand, self service access to compute resources, including bare metal, virtual machines, and containers.

Swift: Provides Object Storage.

Glance: Provides a service where users can upload and discover data assets that are meant to be used with other services, like images for Nova and templates for Heat.

Keystone: Facilitates API client authentication, service discovery, distributed multi-tenant authorization, and auditing.

Horizon: Provides an extensible, unified web-based user interface for all integrated OpenStack services.

Neutron: Implements services and associated libraries to provide on-demand, scalable, and technology-agnostic network abstraction.

Cinder: Implements services and libraries to provide on-demand, self-service access to Block Storage resources via abstraction and automation on top of other block storage devices.

Ceilometer: Reliably collects measurements of the utilization of the physical and virtual resources comprising deployed clouds, persists this data for subsequent retrieval and analysis, and triggers actions when defined criteria are met.

Heat: Orchestrates composite cloud applications using a declarative template format through an OpenStack-native ReST API.

Trove: Provides scalable and reliable Cloud Database as a Service functionality for both relational and non-relational database engines, and continues to improve its fully featured and extensible open source framework.

Ironic: Provides an OpenStack service and associated Python libraries capable of managing and provisioning physical machines in a security-aware and fault-tolerant manner.

Sahara: Provides a scalable data processing stack and associated management interfaces.
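Since those descriptions stay fairly abstract, here is a minimal sketch of the kind of on-demand, self-service compute access Nova provides, using the python-novaclient library. The auth URL, credentials, image, and flavor names are illustrative assumptions, and the client interface differs between OpenStack releases:

```python
from novaclient import client as nova_client

# Authenticate against Keystone; endpoint and credentials are illustrative.
nova = nova_client.Client("2",
                          "demo-user", "demo-password", "demo-tenant",
                          "http://openstack.example.com:5000/v2.0")

# Pick an image and flavor previously registered in Glance/Nova.
image = nova.images.find(name="rhel-7-guest")
flavor = nova.flavors.find(name="m1.small")

# Boot an instance on demand, the self-service compute access Nova provides.
server = nova.servers.create(name="demo-instance", image=image, flavor=flavor)
print(server.id, server.status)
```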

Red Hat CloudForms, a Cloud Management Platform based on the upstream ManageIQ project, provides hybrid cloud management of OpenStack, RHEV, Microsoft Hyper-V, VMware vSphere, and Amazon Web Services. This includes the ability to provide rich self-service with workflow and approval, discovery of systems, policy definition, capacity and utilization forecasting, and chargeback, among other capabilities. CloudForms is deployed as a virtual appliance and requires no agents on the systems it manages. CloudForms has a region and zone concept that allows for complex and federated deployments across large environments and geographies.

Red Hat Satellite is a systems management solution for managing the lifecycle of RHEV, RHEL-OSP, and CloudForms as well as any tenant workloads that are running on RHEV or RHEL-OSP. It can be deployed on bare metal or, as pictured in this diagram, as a virtual machine running on either RHEV or RHEL-OSP. Satellite supports a federated model through a concept called capsules.
[Diagram: cloud management]

CloudForms is a Cloud Management Platform that is deployed as a virtual appliance and supports a federated deployment. It is fully open source, just as every component in RHCI is, and is based on the ManageIQ project.

One of the key technical benefits CloudForms provides is unified management of multiple providers. CloudForms splits providers into two types. First, there are infrastructure providers such as RHEV, vSphere, and Microsoft Hyper-V. CloudForms discovers and provides uniform information about these systems’ hosts, clusters, virtual machines, and virtual machine contents in a single interface. Second, there are cloud providers such as RHEL-OSP and Amazon Web Services. CloudForms provides discovery and uniform information for these providers about virtual machines, images, and flavors, similar to the infrastructure providers. All of this is done by leveraging the standard APIs provided by RHEV-M, SCVMM, vCenter, AWS, and OpenStack.
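As a rough illustration of what this unified inventory looks like to a script, here is a minimal sketch that queries the CloudForms REST API for the providers it manages and the hosts it has discovered. The hostname and credentials are illustrative assumptions, and attribute names can vary across CloudForms versions:

```python
import requests

CF = "https://cloudforms.example.com"   # illustrative appliance address
AUTH = ("admin", "password")            # illustrative credentials

def get(path):
    # Small helper around the CloudForms (ManageIQ) REST API.
    resp = requests.get(CF + path, auth=AUTH, verify=False)
    resp.raise_for_status()
    return resp.json()

# List the providers CloudForms knows about (RHEV, vSphere, OpenStack, ...).
providers = get("/api/providers?expand=resources&attributes=name,type")
for p in providers.get("resources", []):
    print(p.get("name"), p.get("type"))

# List the hosts (hypervisors) discovered across those providers.
hosts = get("/api/hosts?expand=resources&attributes=name,power_state")
for h in hosts.get("resources", []):
    print(h.get("name"), h.get("power_state"))
```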

[Diagram: lifecycle management]

Red Hat Satellite provides common systems management among all aspects of RHCI.

Red Hat Satellite provides content management, allowing users to synchronize content such as RPM packages for RHEV, RHEL-OSP, and CloudForms from Red Hat’s Content Delivery Network to an on-premises Satellite, reducing bandwidth consumption and providing an on-premises control point for content management in complex environments. Satellite also allows for configuration management via Puppet to ensure compliance and enforcement of proper configuration. Finally, Red Hat Satellite allows users to account for usage of assets through entitlement reporting and controls. Satellite provides these capabilities to RHEV, RHEL-OSP, and CloudForms, allowing administrators of RHCI to maintain their environment more effectively and efficiently. Equally as important, Satellite also extends to the tenants of RHEV and RHEL-OSP to allow for systems management of Red Hat Enterprise Linux (RHEL) based tenants. Satellite is based on the upstream projects Foreman, Katello, Pulp, and Candlepin.

[Diagram: tenant lifecycle management]

The combination of CloudForms and Satellite is very powerful for automating not only the infrastructure, but within the operating system as well. Let’s look at an example of how CloudForms can be utilized with Satellite to provide automation of deployment and lifecycle management for tenants.

The automation engine in CloudForms is invoked when a user orders a catalog item from the CloudForms self-service catalog. CloudForms communicates with the appropriate infrastructure provider (in this case RHEV or RHEL-OSP) to ensure that the infrastructure resources are created. At the same time it also ensures the appropriate records are created in Satellite so that the proper content and configuration will be applied to the system. Once the infrastructure resources are created (such as a virtual machine), they are connected to Satellite, where they receive the appropriate content and configuration. Once this is completed, the service in CloudForms is updated with the appropriate information to reflect the state of the user’s request, giving them access to a fully compliant system with no manual interaction during configuration. Ongoing updates of the virtual machine can be performed by the end user or by the administrator of the Satellite, depending on customer needs.
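For a rough idea of what creating those records in Satellite could look like if driven from an external script rather than the CloudForms automate engine, here is a minimal sketch against the Foreman-based API that Satellite 6 exposes. The Satellite hostname, credentials, host group ID, and field values are illustrative assumptions:

```python
import requests

SATELLITE = "https://satellite.example.com"   # illustrative Satellite 6 server
AUTH = ("admin", "password")                  # illustrative credentials

# Create a host record so the new virtual machine picks up the content and
# configuration associated with its host group when it registers.
payload = {
    "host": {
        "name": "demo-vm.example.com",   # matches the VM provisioned by CloudForms
        "hostgroup_id": 1,               # illustrative host group
        "managed": False,                # record only; provisioning handled elsewhere
    }
}
resp = requests.post(
    SATELLITE + "/api/v2/hosts",
    json=payload,
    auth=AUTH,
    verify=False,  # lab setting; use real certificates in production
)
resp.raise_for_status()
print("Created Satellite host record:", resp.json().get("id"))
```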

[Diagram: service lifecycle management]

This is another way of looking at how the functional areas of the workflow are divided in RHCI. Items such as the service catalog, quota enforcement, approvals, and workflow are handled in CloudForms, the cloud management platform. Still, infrastructure-specific mechanisms such as Heat templates, virtual machine templates, PXE, or even ISO-based deployment are utilized by the cloud management platform whenever possible. Finally, systems management is used to provide further customization within the operating system itself that is not covered by infrastructure-specific provisioning systems. With this approach, users can separate operating system configuration from the infrastructure platform, thus increasing portability. Likewise, operational decisions are decoupled from the infrastructure platform and placed in the cloud management platform, allowing for greater flexibility and increased modularity.

[Diagram: shared identity]

Common management is a big benefit that RHCI brings to organizations, but it doesn’t stop there. RHCI is bringing together the benefits of shared services to reduce the complexity for organizations. Identity is one of the services that can be made common across RHCI through the use of Identity Management (IDM) that is included in RHEL. All components of RHCI can be configured to talk to IDM which in turn can be used to authenticate and authorize users. Alternatively, and perhaps more frequently, a trust is established between IDM and Active Directory to allow for authentication via Active Directory. By providing a common identity store between the components of RHCI, administrators can ensure compliance through the use of access controls and audit.

[Diagram: shared networking]

Similar to the benefits of shared identity, RHCI is bringing together a common network fabric for both traditional datacenter virtualization and Infrastructure-as-a-Service (IaaS) models. As part of the latest release of RHEV, users can now discover Neutron networks and begin exposing them to guest virtual machines (in tech preview mode). By building a common network fabric, organizations can simplify their architecture. No longer do they need to learn two different methods for creating and maintaining virtual networks.

[Diagram: shared storage]

Finally, image storage can now be shared between RHEV and RHEL-OSP. This means that templates and images stored in Glance can be used by RHEV. This reduces the amount of storage required to maintain the images and allows administrators to update images in one store instead of two, increasing operational efficiency.

[Diagram: capabilities]

One often misunderstood area is which capabilities are provided by which components of RHCI. RHEV and OpenStack provide similar capabilities with different paradigms, focused on compute, network, and storage virtualization. Many of the capabilities often associated with a private cloud are found in the combination of Satellite and CloudForms. These include capabilities provided by CloudForms such as discovery, chargeback, monitoring, analytics, quota enforcement, capacity planning, and governance. They also include capabilities that revolve around managing inside the guest operating system in areas such as content management, software distribution, configuration management, and governance.

[Diagram: deployment scenarios]

Often organizations are not certain about the best way to view OpenStack in relation to their datacenter virtualization solution. There are two common approaches. In one approach, datacenter virtualization is placed underneath OpenStack. This approach has several negative aspects. First, it places OpenStack, which is intended for scale-out, on top of an architecture that is designed for scale-up in RHEV, vSphere, Hyper-V, etc. This gives organizations limited scalability and, in general, an expensive infrastructure for running a scale-out IaaS private cloud. Second, layering OpenStack, a cloud infrastructure platform, on top of yet another infrastructure management solution makes hybrid cloud management very difficult, because cloud management platforms such as CloudForms are not designed to relate OpenStack to a virtualization manager and then to underlying hypervisors. Conversely, by using a cloud management platform as the aggregator between the infrastructure platforms of OpenStack, RHEV, vSphere, and others, it is possible to achieve a working approach to hybrid cloud management and use OpenStack in the massively scalable way it is designed to be used.

[Diagram: VMware vSphere and RHEV]

RHCI is meant to complement existing investments in datacenter virtualization. For example, users often utilize CloudForms and Satellite to gain efficiencies within their vSphere environment while simultaneously increasing the cloud-like capabilities of their virtualization footprints through self-service and automation. Once users are comfortable with the self-service aspects of CloudForms, it is simple to supplement vSphere with lower cost or specialized virtualization providers like RHEV or Hyper-V.

This can be done by leveraging the virt-v2v tools (shown as option 1 in the diagram above), which perform binary conversion of images in an automated fashion from vSphere to other platforms. Another approach is to standardize environment builds within Satellite (shown as option 2 in the diagram above) to allow for portability when creating a new workload. Both of these methods are supported, depending on an organization’s specific requirements.
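To make option 1 a bit more tangible, here is a minimal sketch that wraps a virt-v2v conversion in Python, pulling a guest from vCenter and landing it on an RHEV export storage domain. The vCenter path, VM name, and export domain are illustrative assumptions, and the exact virt-v2v flags depend on the version in use:

```python
import subprocess

# Illustrative source and destination details.
VCENTER_URI = "vpx://administrator@vcenter.example.com/Datacenter/cluster1/esxi01?no_verify=1"
VM_NAME = "legacy-web-01"
EXPORT_DOMAIN = "nfs.example.com:/rhev/export"

# virt-v2v reads the guest from vSphere over the libvirt vpx driver and writes
# a converted image into the RHEV export storage domain, from which it can be
# imported into a RHEV data domain. virt-v2v will prompt for the vCenter
# password unless a password file is supplied.
cmd = [
    "virt-v2v",
    "-ic", VCENTER_URI,     # input connection: vCenter managing the source VM
    "-o", "rhev",           # output mode: RHEV export storage domain
    "-os", EXPORT_DOMAIN,   # output storage: NFS export domain
    VM_NAME,
]
subprocess.check_call(cmd)
print("Conversion of {0} submitted; import it from the export domain in RHEV-M.".format(VM_NAME))
```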

[Diagram: VMware vSphere and OpenStack]

For scale-out applications running on an existing datacenter virtualization solution such as VMware vSphere, RHCI can provide organizations with the tools to identify (discover) and move (automated V2V conversion) workloads to Red Hat Enterprise Linux OpenStack Platform, where they can take advantage of massive scalability and reduced infrastructure costs. This again can be done through binary conversion (option 1) using CloudForms or through standardization of environments (option 2) using Red Hat Satellite.

[Diagram: management integrations]

So far I have focused primarily on the integrations between the components of Red Hat Cloud Infrastructure to illustrate how Red Hat is bringing together a comprehensive Infrastructure-as-a-Service solution, but RHCI also integrates with many existing technologies in the management domain. From integrations with configuration management solutions such as Puppet, Chef, and Ansible, to many popular Configuration Management Databases (CMDBs), as well as networking providers and IPAM systems, CloudForms and Satellite are extremely extensible to ensure that they can fit into existing environments.

[Diagram: infrastructure integrations]

And of course, Red Hat Enterprise Linux forms the basis of both Red Hat Enterprise Virtualization and Red Hat Enterprise Linux OpenStack Platform, leading to one of the largest ecosystems of certified compute, network, and storage partners in the industry.

RHCI is a complete and fully open source Infrastructure-as-a-Service private cloud. It has industry-leading integration between datacenter virtualization and an OpenStack-based private cloud in the areas of networking, storage, and identity. A common management framework makes for efficient operations and unparalleled automation that can also span other providers. Finally, by leveraging RHEL and a systems management and cloud management platform based on upstream communities, it has a large ecosystem of hardware and software partners for both infrastructure and management.

I hope this post helped you gain a better understanding of RHCI at a more technical level. Feel free to comment, and be sure to follow me on Twitter @jameslabocki
