Category Archives: Architecture

Red Hat Cloud Suite: Modernizing Development and Operations

In a previous post I outlined the common problems organizations face across both their traditional IT environments and new emerging IT environments. These included:

  • Accelerating the delivery of services in traditional IT environments to satisfy customer demands.
  • Optimizing traditional IT environments to increase efficiency.
  • Creating new development and operations practices for emerging IT environments to innovate faster.
  • Delivering public cloud-like infrastructure that is scalable and programmable.

Today’s applications are often monolithic, and bringing them from development to production is a painful and lengthy process. Even if applications are modernized, today’s scale-up infrastructure doesn’t provide the programmability and scalability required. Finally, organizations need to be able to operate and manage new modern applications and infrastructure seamlessly.

Red Hat Cloud Suite is an integrated solution for developing container-based applications on massively scalable infrastructure, with all the management required to operate both. With OpenShift Enterprise, organizations can build microservices-based applications, allowing for greater change velocity. They can also reduce friction between development and operations by using a continuous integration and deployment pipeline for releases. Red Hat OpenStack Platform allows organizations to deliver massively scalable, public cloud-like infrastructure based on OpenStack to support container-based applications. Finally, Red Hat CloudForms provides seamless management of OpenShift and OpenStack along with other major virtualization, private, and public cloud infrastructures. Best of all, these are built from leading open source communities without a line of proprietary code, ensuring access to the greatest amount of innovation. The suite also includes access to Red Hat’s proactive operations offering, Red Hat Insights, which allows you to compare your environment with the wisdom of thousands of solved problems and millions of support cases.

I’d like to show you a quick demonstration of how Red Hat Cloud Suite is helping organizations modernize their development and operations. The demo below shows how an organization can develop applications faster, on scalable cloud infrastructure, with a single pane of management across both.

I hope you found this demonstration useful!

If you are a Red Hatter or a Red Hat Partner, this demonstration is available in the Red Hat Product Demo System and is named “Red Hat Cloud Suite Modernize Development and Operations”.


Improving Service Design

To deliver services faster to their customers, organizations need to ensure that their development and operations teams work in unison. Over the last weeks and months I’ve been fortunate enough to spend time with several organizations in industries ranging from telecommunications to financial services to transportation, to gain a better understanding of how they are helping their development and operations teams work together more smoothly and to observe where they are struggling. In this post I’ll summarize the common approaches and articulate the opportunity for improving service design.

Before explaining the approaches that development and operations teams have implemented or plan to implement, let’s examine two common ways development and operations work together when it comes to repeatability.

Using Repeatable Deployments
In this approach, a member of the development team (sometimes a developer, but often a system administrator within the development team itself) makes a request for a given service. This service may require approvals and fall under some type of governance, but it is automatically instantiated. Details about the service are then delivered back to the requester. The requester then controls the lifecycle of everything within the service, such as updating and configuring the application server and deploying the application itself.

Using Repeatable Builds
In this approach, a member of the development team (most often a developer) requests a given service. The service is automatically deployed, and the requester is provided with an endpoint for interacting with the service and an endpoint (usually a source control system) for modifying it. The requester is able to modify certain aspects of the service (most often the application code itself), and these modifications are automatically propagated to the instantiated service.

There are several patterns I have observed with regard to how repeatability is implemented.

Repeatable Deployment
First is what I call “Repeatable Deployment”. It is probably familiar to anyone who has been in IT for the last 20 years. System administration teams deploy and configure the infrastructure components. This includes provisioning the virtual infrastructure, such as virtual switches, storage, and machines, and installing the operating system (or using a golden image). When it comes to containers, system administration teams expect that in this low-automation scenario they will also provision and secure the container.


Repeatable Deployments (High Level)

Once the infrastructure has been configured, it is handed over to the application delivery teams. These teams deploy the application servers needed to support the application that will run on them. This often includes configuring application server clustering and setting database connection pool information, details that developers and system administrators don’t necessarily know or care about. Finally, the application is deployed from a binary in a repository. This can be a vendor-supplied binary or something that came from a build system the development team created.

What is most often automated in this pattern is the deployment of the infrastructure and sometimes the deployment and configuration of the application server. The deployment of the binary is rarely automated, and the source-to-image process is altogether separate from the repeatable deployment of the infrastructure and application server.

Custom Repeatable Build
Next is what I call “Custom Repeatable Build”. This pattern is common in organizations that began automating before prescriptive methods of automation emerged, or that had other reasons, such as flexibility requirements or in-house expertise, for building their own.


Custom Repeatable Build (High Level)

In this pattern, system administration teams are still responsible for deploying and configuring the infrastructure, including storage, network, and compute as a virtual machine or container. This is often automated using popular tools. The infrastructure is then handed over as an available “unit” to the application delivery teams.

The application delivery teams in this pattern have taken ownership of taking source to image, configuring the application server, and delivering the binary to it. This is done through a configuration management or automation platform.
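To make this concrete, here is a minimal sketch of what such a hand-rolled pipeline often looks like. It is illustrative only; the repository URL, the build command, the artifact path, and the playbook and inventory names are hypothetical stand-ins for whatever build and automation tooling a given team actually uses.

    import shutil
    import subprocess

    def build_artifact(repo_url: str, workdir: str = "app-src") -> str:
        """Clone the application source and produce a deployable artifact (layout is illustrative)."""
        subprocess.run(["git", "clone", repo_url, workdir], check=True)
        subprocess.run(["mvn", "package"], cwd=workdir, check=True)   # build step varies per team
        return f"{workdir}/target/app.war"                            # artifact path is an assumption

    def deploy(artifact: str, inventory: str) -> None:
        """Publish the artifact and configure the application server via an automation platform."""
        shutil.copy(artifact, "/var/artifacts/app.war")               # stand-in for an artifact repository
        subprocess.run(["ansible-playbook", "-i", inventory, "deploy-app.yml"], check=True)

    if __name__ == "__main__":
        deploy(build_artifact("https://git.example.com/team/app.git"), "hosts.ini")

Every team’s version of this differs, which is part of why onboarding new applications into a custom pipeline takes so long.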

This pattern greatly decreases the time it takes to move code from development to operation. However, the knowledge required to create the source-to-image process and automate the deployment is high. As more application development teams are onboarded, the resulting complexity also increases greatly. In one instance it took over three months to onboard a single application into this pattern; in a large environment with thousands of applications this would be untenable.

Prescriptive Repeatable Build
Finally, there is what I call the “Prescriptive Repeatable Build”. As in the other patterns, the infrastructure, including storage, network, and compute as virtual machines or containers, is provisioned and configured by the system administration teams. Once this is complete, a PaaS team provisions the PaaS on top of those infrastructure resources.


Prescriptive Repeatable Build (High Level)

The PaaS exposes a self-service user interface for developers to obtain a new environment. Along with an endpoint for the instantiated service, the developer is provided with an endpoint for manipulating the configuration of the service (usually the application code itself, but sometimes other aspects as well).

General Trends
Most organizations have multiple service design and delivery patterns happening at the same time. Most also want to move from repeatable deployment to prescriptive repeatable build wherever possible. This is because higher automation generally equates to delivering more applications in a shorter period of time. It also provides a greater degree of standardization, thus reducing complexity.

Pains, Challenges, and Limitations
There are several pains, challenges, and limitations within each pattern.

The Repeatable Deployment pattern is generally the easiest of the three to implement. It generally includes automating infrastructure deployments and sometimes even the configuration of the application servers. Still, because development is disconnected from the deployment, configuration, and operation of the application itself, repeatable deployment does not provide as much value when it comes to delivering applications faster. It also tends to lead to more human error when deploying applications, since it often relies on tribal knowledge or manual hand-offs between development and operations to run an application.

The Custom Repeatable Build pattern provides end users with a means of updating their applications automatically. It also accommodates the existing source-to-image process, developer experience, and business requirements without requiring large amounts of change. This flexibility comes with a downside in that it takes a long time to onboard tenants. As mentioned earlier, we found that it can take months to onboard a simple application. Since large organizations typically have thousands of applications and potentially hundreds of development teams, this pattern also leads to an explosion of complexity.

The Prescriptive Repeatable Build pattern provides the most standardization and allows developers to take source to a running application quickly. However, it often requires significant effort to change the build process to fit its opinionated deployment style. This creates a larger risk to user acceptance, depending on how application development teams behave in an organization. By using an opinionated method, however, the complexity of the end state can be reduced.

Finally, moving between each of these patterns is painful for organizations. In most cases, it is impossible to leverage existing investments from one pattern in another. This means redesigning and reimplementing service design each time.

How do Organizations Decide which Approach is Best?
Deciding which pattern is best depends on many factors, including (but not limited to):

  • In-house skills
  • Homogeneity of application development processes
  • Business requirements driving application cycle time
  • Application architecture
  • Rigidity or flexibility of IT governance processes

The difference in the pattern used on a per-application basis is often the reason multiple patterns exist inside large organizations. For example, in a large organization that has grown through merger and acquisition, some application delivery teams may be building a Platform as a Service (PaaS) to enable prescriptive repeatable builds while others are using repeatable deployments and still others have hand-crafted custom repeatable builds.

Principles for a Successful Service Design Solution
We have identified several principles that we believe a good service design solution should adhere to. This is not an exhaustive list. In no priority order they are:

  • Create Separation of Service Design and Element Authoring
    Each platform required to deliver either a repeatable deployment or repeatable build exposes its own set of elements. Infrastructure platforms might expose a Domain Specific Language (DSL) for describing compute, networking, and storage. Build systems may expose software projects and jobs. Automation and/or configuration management platforms may expose their own DSL. The list goes on. Each of the platforms has experts that author these. For example, an OpenShift PaaS platform expert will likely author a Kubernetes template. The service designer will not understand how to author this, but will need to discover the template and use it in the composition of a service. In other words: relate elements, don’t create elements.
  • Provide Support for All Patterns
    Not a single observed organization had only one pattern. In fact, they all had multiple teams using multiple patterns. Solutions should take all of the observed patterns into account, or they will fail to gain traction inside organizations and to deliver efficiency.
  • Allow for Evolution
    Elements used in one pattern for service design should be able to be used in other service design patterns. For example, a virtual machine should be able to be a target for a repeatable deployment as well as a repeatable build. Failure to adhere to this will reduce the value of the solution as it will cause a high degree of duplication for end users.
  • Provide Insight
    We discovered that there were sections of automation in all the patterns that could be refactored from an imperative model (workflow) into a declarative model (DSL), but the designers of the service were either not aware of this or did not understand how to make the change. By giving the service designer insight into how to factor their designs into the most declarative and portable model possible, the solution could provide the most efficient and maintainable service designs. Pattern recognition tools should be considered.

Improved Service Design Concept
An opportunity exists to greatly improve the service design experience: allow service designers to more easily design services from the widest range of items, accommodate the patterns required by the multiple teams inside most organizations, and let the organization evolve toward the prescriptive repeatable build pattern without losing its investments along the way. This concept allows for “services as code” and would provide a visual editor.

The concept begins with the discovery of existing elements within an organization’s platforms, for example, the discovery of a Heat template from an OpenStack-based IaaS platform or the discovery of available repositories from a content system such as Nexus, Artifactory, or Red Hat Satellite. These elements would be discovered on a continual basis and placed into a source control system (or leveraged from the source control systems where they already live).


Discovery of Elements for Service Composition
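A minimal sketch of what continual discovery might look like is shown below. The discoverer functions, their example results, and the git workflow are all hypothetical stand-ins rather than actual product APIs; it assumes the target directory is already a git clone.

    import json
    import pathlib
    import subprocess
    from typing import Callable, Dict, List

    # Hypothetical discoverers; in practice each would call the corresponding platform's API.
    def discover_heat_templates() -> List[str]:
        return ["webapp-stack"]            # placeholder result from an OpenStack IaaS

    def discover_kubernetes_templates() -> List[str]:
        return ["wordpress"]               # placeholder result from an OpenShift PaaS

    def discover_repositories() -> List[str]:
        return ["rhel-7-server-rpms"]      # placeholder result from Nexus, Artifactory, or Satellite

    DISCOVERERS: Dict[str, Callable[[], List[str]]] = {
        "heat_templates": discover_heat_templates,
        "kubernetes_templates": discover_kubernetes_templates,
        "repositories": discover_repositories,
    }

    def refresh_elements(repo_path: str = "service-elements") -> None:
        """Run every discoverer and commit the results so element descriptors are versioned."""
        elements = {kind: fn() for kind, fn in DISCOVERERS.items()}
        (pathlib.Path(repo_path) / "elements.json").write_text(json.dumps(elements, indent=2))
        subprocess.run(["git", "-C", repo_path, "add", "elements.json"], check=True)
        subprocess.run(["git", "-C", repo_path, "commit", "-m", "Refresh discovered elements"], check=True)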

Once the elements are discovered and populated, the visual editor can allow the service designer to author a new service composition. This composition would never create elements; it would only describe how the elements are related.


Composition of Elements
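As a rough illustration of “relate elements, don’t create elements”, a composition can be little more than references to elements that already exist on their owning platforms, plus the relationships between them. All of the names below are hypothetical.

    from dataclasses import dataclass, field
    from typing import List, Tuple

    @dataclass
    class ElementRef:
        platform: str   # e.g. "openstack", "openshift", "satellite"
        kind: str       # e.g. "heat_template", "kubernetes_template", "content_view"
        name: str       # identifier of an element that already exists on that platform

    @dataclass
    class ServiceComposition:
        name: str
        elements: List[ElementRef] = field(default_factory=list)
        relations: List[Tuple[str, str]] = field(default_factory=list)  # (element, depends_on)

    # The composition points at existing elements; it never defines their contents.
    wordpress_service = ServiceComposition(
        name="wordpress",
        elements=[
            ElementRef("openstack", "heat_template", "webapp-network"),
            ElementRef("openshift", "kubernetes_template", "wordpress"),
            ElementRef("satellite", "content_view", "rhel-7-base"),
        ],
        relations=[("wordpress", "webapp-network"), ("wordpress", "rhel-7-base")],
    )

A catalog or broker could later read this composition to render or instantiate the service, which is exactly the separation described in the next paragraph.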

While out of scope for the concept of service design, the service composition could be presented to the service consumer within any number of service catalogs that can read it. Also outside the scope of service design (although equally important), brokers can use the service composition to instantiate a running instance of the service across the platforms.


Publishing and Instantiation

Why does this matter to Red Hat?
Red Hat has a unique opportunity to provide a uniform way of designing services across all three patterns using both its products and other leading solutions in the market. A uniform approach would increase usability and understanding between teams within our customers and allow for an easier transition between the patterns of repeatability, while still allowing users to choose the pattern that is right for them. This means a reduction in friction when introducing repeatable service delivery solutions. It would directly benefit products that provide repeatable deployments, such as Ansible, Satellite, and CloudForms, by improving the user experience. Then, as a customer matures, the concepts discussed here would provide an evolutionary path to repeatable builds that would not require reengineering a solution. This would greatly benefit products such as OpenStack and OpenShift.

What’s Next?
Currently, we are working through the user experience for designing a sample application within the concept in more detail. Once we complete this, we hope to build a functional prototype of the concept and continue to improve the design to obtain market validation.

If you are a user that is interested in participating in our research or participating in co-creation please email me at jlabocki <at> redhat.com.

 

Scalable Infrastructure

In a previous post I outlined the common problems organizations face across both their traditional IT environments (sometimes called mode-1) and new emerging IT environments (sometimes called mode-2). These included:

  • Accelerating the delivery of services in traditional IT environments to satisfy customer demands
  • Optimizing traditional IT environments to increase efficiency
  • Creating new development and operations practices for emerging IT environments to innovate faster
  • Delivering public cloud-like infrastructure that is scalable and programmable

I’d like to show you a quick demonstration of how Red Hat is delivering scalable infrastructure with the capabilities that enterprises demand. Red Hat Enterprise Linux OpenStack Platform delivers scale-out private cloud capabilities with a stable lifecycle and a large ecosystem of supported hardware platforms. Many organizations are building their next-generation cloud infrastructures on OpenStack because it provides an asynchronous architecture and is API-centric, allowing for greater scale and greater efficiency in platform management. OpenStack does not, however, provide functionality such as chargeback, reporting, and policy-driven automation for tenant workloads, and the projects that aspire to do so are generally focused solely on OpenStack. This is not realistic in an increasingly hybrid world, and enterprises that are serious about OpenStack need these capabilities. By using Red Hat CloudForms together with Red Hat Enterprise Linux OpenStack Platform, it’s possible to provide capabilities such as reporting, chargeback, and auditing of tenant workloads across a geographically diverse deployment. The demo below shows how chargeback works across a multi-site OpenStack deployment.

I hope you found this demonstration useful!

P.S. – If you are a Red Hatter or a Red Hat Partner, this demonstration is available in the Red Hat Product Demo System and is named “Red Hat Cloud Suite Reporting Demonstration”.


Optimizing IT

In a previous post I outlined the common problems organizations face across both their traditional IT environments (sometimes called mode-1) and new emerging IT environments (sometimes called mode-2). These included:

  • Accelerating the delivery of services in traditional IT environments to satisfy customer demands
  • Optimizing traditional IT environments to increase efficiency
  • Creating new development and operations practices for emerging IT environments to innovate faster
  • Delivering public cloud-like infrastructure that is scalable and programmable

I’d like to show you a quick demonstration of how Red Hat is helping optimize traditional IT environments. There are many ways in which Red Hat does this, from discovering and right-sizing virtual machines to free up space in virtual datacenters, to creating a standard operating environment across heterogeneous environments to reduce complexity. In this demonstration, however, I’ll focus on how Red Hat enables organizations to migrate workloads to their ideal platform. In the demonstration video below you’ll see how, using tools found in Red Hat Enterprise Virtualization and Red Hat Enterprise Linux OpenStack Platform in conjunction with automation and orchestration from Red Hat CloudForms, it’s possible to migrate virtual machines in an automated fashion from VMware vSphere to either RHEV or Red Hat Enterprise Linux OpenStack Platform. Keep in mind that these tools assist with the migration process but need to be designed for your specific environment. That said, once designed they can greatly reduce the time and effort required to move large numbers of virtual machines.

I hope you found this demonstration useful!

P.S. – If you are a Red Hatter or a Red Hat Partner, this demonstration is available in the Red Hat Product Demo System and is named “Red Hat Cloud Suite Migration Demonstration”.


Accelerating Service Delivery Demonstration

In a previous post I outlined the common problems organizations face across both their traditional IT environments (sometimes called mode-1) and new emerging IT environments (sometimes called mode-2). These included:

  • Accelerating the delivery of services in traditional IT environments to satisfy customer demands
  • Optimizing traditional IT environments to increase efficiency
  • Creating new development and operations practices for emerging IT environments to innovate faster
  • Delivering public cloud-like infrastructure that is scalable and programmable

I’d like to show you a quick demonstration of how Red Hat is helping accelerate service delivery for traditional IT environments. Developers and line-of-business users request stacks daily to create new services or test functionality. Each of these requests results in a lot of work for operations and security teams. From creating virtual machines, to installing application servers, to securing the systems, these tasks take time away from valuable resources that could be doing something else (like building out the next-generation platform for development and operations). Many solutions exist for automating the deployment of virtual machines or the applications inside them, but Red Hat is uniquely positioned to automate both. By leveraging Red Hat CloudForms in conjunction with Red Hat Satellite, it is possible to create a re-usable description of your application that can be automatically deployed via self-service, with governance and controls, across a hybrid cloud infrastructure. In the demonstration below we show the self-service automated deployment of a WordPress application consisting of HAProxy, two WordPress application servers, and a MariaDB database across both VMware vSphere and Red Hat Enterprise Virtualization.

P.S. – If you are a Red Hatter or a Red Hat Partner this demonstration is available in the Red Hat Product Demo System under the name “Red Hat Cloud Suite Deployment Demo”.


Can Strategic Design Improve the Design and User Experience Across Open Source Communities?

If you speak to anyone involved in information technology, there is little debate that the open source development model is the de facto development model for the next generation of technology. Cloud infrastructure with OpenStack, continuous integration with Jenkins, containers with Docker, automation with Ansible: these areas are all being transformed by technologies delivered via the open source development model. Even Microsoft has begun to embrace the open source development model (albeit sometimes only partially).

The use of an open source development model as a default is good news for everyone. Users (and organizations) are obtaining access to the greatest amount of innovation and can participate in development. At the same time, developers are able to increase their productivity and gain access to more collaborators. As organizations look to adopt technologies based on open source, they often realize that it’s easier to purchase open source software than to obtain it directly from the community. There are many reasons, including support, cost, focus on core business, and even indemnification, that make it beneficial to purchase rather than consume directly from the community. Red Hat (where I work) is an example of a company that provides software in exactly this way.

This model works well when organizations are using a single product based on a single open source community project. However, the open source development model poses challenges to creating a cohesive design and user experience across multiple products derived from open source projects. This problem ultimately affects the design and experience of the products organizations buy. For open source to continue its success in becoming the de facto standard, the problem of coordinating design and user experience across multiple communities needs to be solved.

The challenge of solving this problem should not be underestimated. If you think that influencing a single open source community is difficult, imagine how challenging it is to influence multiple communities in a coordinated manner. Developers in communities are purpose-driven, intensely focused on their problem, and committed to incremental progress. These developers justifiably shy away from grand plans, large product requirements documents, and any forceful attempts to change what they are building. After all, that’s the way monolithic, proprietary software lost to the fast-moving and modular open source development model.

What is needed is a way to illustrate to development and community leaders how they can better solve their problem by working well with other communities, and to let the community leaders conclude on their own that they should work in the illustrated manner.

By practicing strategic design it may be possible to provide the clarity of vision and reasoning required to effectively influence multiple open source communities to work more cohesively. Strategic design is the application of future-oriented design principles in order to increase an organization’s innovative and competitive qualities. Tim Brown, CEO of IDEO, summarizes some of the purposes of design thinking (a major element of strategic design) really well in his Strategy by Design article.

At Red Hat we have recently begun testing this theory by organizing a design practice trial. The design practice trial team consisted of experts on various technologies from the field and engineering along with user experience experts.


Part of the workshop trial team (During the final exercise on Day 3)

The premise for our design practice trial was simple:

•       Identify a common problem taking place across our customers using multiple open source products.
•       Analyze how the problem can be solved using the products.
•       Conceptualize an ideal user experience starting from a blank slate.
•       Share what was discovered with community leaders and product managers to assist with incremental improvement and influence direction toward the ideal user experience.

Identify
The common problem we found was organizations struggling to design re-usable services effectively.  The persona we identified for our trial was the service designer. The service designer identifies, qualifies, builds, and manages entries in a service catalog for self-service consumption by consumers. The service designer would like to easily design and publish re-usable entries in a catalog from the widest range of possible items.

Analyze
The products we used to analyze the current user experience are:

•       OpenStack to deliver Infrastructure as a Service (IaaS)
•       OpenShift to deliver Platform as a Service (PaaS)
•       ManageIQ (CloudForms) to deliver a Cloud Management Platform (CMP)
•       Pulp and Candlepin (Satellite) to deliver content mirroring and entitlement
•       Ansible (Tower) to deliver automation

Looking across our communities we find lots of “items” in each project. OpenStack provides Heat templates, OpenShift provides Kubernetes templates, Ansible provides playbooks, and so on. In addition, the service designer would likely want to mix in items from outside of these projects for use in a catalog entry, including public cloud and SaaS services.

What would it look like for the service designer to assemble all these items into an entry that could be ordered by a consumer that leads to a repeatable deployment? During a 3-day workshop we put ourselves in the shoes of the service designer and attempted to design an application.


The Example Application

The team was able to design the catalog entry in about 8 hours and we feel we could have done this even faster if we weren’t so new to Ansible Tower.

Here is a quick video demonstration of deploying this application from the catalog as a finished product.


I’ll spare everyone the detailed results here (if you work at Red Hat, send me a note and I’ll link you to our more complete analysis), but the exercise of analyzing the current solution allowed us to identify many areas for incremental improvement when using these products together to satisfy the use case of the service designer. We also identified longer-term design questions that need to be resolved between the products (and ultimately, the upstream projects).

Conceptualize
What would the ideal user experience be for this? Another exercise we performed in our workshop was designing an ideal user experience starting from a blank slate. This included challenging assumptions while still defining some constraints of the concept. It proved to be challenging and eye-opening for everyone involved. Starting from scratch with a design, without worrying about the underlying engineering that would be required, is difficult for engineering-minded individuals. We began developing what we believe the ideal user experience for service design would be. These ideas will be worked into a workflow and low-fidelity mockups to illustrate the basics of the experience.

Share
As next steps we will share our findings with community leaders and product managers in the hopes that it positively impacts the design and user experience. We will also continue meeting with customers who we believe suffer from the service design problem to continue to refine our proposed ideal design and to help them understand how our current solution works. If all goes well, we might even attempt a prototype or mock user interface to start. There are plenty of other angles we need to address, such as the viability of customer adoption and customer willingness to pay for such a concept. For a 3-day workshop (and lots of prep before), however, we feel we are off to a good start.

Will the community leaders accept our assertion that they can deliver a better experience by working together? Will any of the concepts and prototypes make it into the hands of customers? This remains to be seen. My hunch is that none of the communities will accept the exact conceptual design and user experience put forth by the Design Practice, but that the conceptual design and user experience will positively influence the design and user experience within the open source communities and ultimately make its way to customers via Red Hat’s products and solutions. In any case, the more time we spend practicing design, the better the lives of our customers will become.

-James
@jameslabocki

 

A Technical Overview of Red Hat Cloud Infrastructure (RHCI)

I’m often asked for a more in-depth overview of Red Hat Cloud Infrastructure (RHCI), Red Hat’s fully open source and integrated Infrastructure-as-a-Service offering. To that end I decided to write a brief technical introduction to RHCI to help those interested better understand what a typical deployment looks like, how the components interact, what Red Hat has been working on to integrate the offering, and some common use cases that RHCI solves. RHCI gives organizations access to infrastructure and management to fit their needs, whether it’s managed datacenter virtualization, a scale-up virtualization-based cloud, or a scale-out OpenStack-based cloud. Organizations can choose what they need to run and re-allocate their resources accordingly.


RHCI users can choose to deploy either Red Hat Enterprise Virtualization (RHEV) or Red Hat Enterprise Linux OpenStack Platform (RHEL-OSP) on physical systems, creating either a datacenter virtualization-based private cloud using RHEV or a private Infrastructure-as-a-Service cloud with RHEL-OSP.

RHEV comprises a hypervisor component, referred to as RHEV-H, and a manager, referred to as RHEV-M. Hypervisors leverage shared storage and common networks to provide common enterprise virtualization features such as high availability, live migration, etc.

RHEL-OSP is Red Hat’s OpenStack distribution. It delivers massively scalable infrastructure through the following projects (descriptions adapted from the projects themselves), running on one of the largest ecosystems of certified hardware and software vendors for OpenStack; a short example of driving these services through their APIs follows the list:

Nova: Implements services and associated libraries to provide massively scalable, on demand, self service access to compute resources, including bare metal, virtual machines, and containers.

Swift: Provides Object Storage.

Glance: Provides a service where users can upload and discover data assets that are meant to be used with other services, like images for Nova and templates for Heat.

Keystone: Facilitates API client authentication, service discovery, distributed multi-tenant authorization, and auditing.

Horizon: Provides an extensible, unified web-based user interface for all integrated OpenStack services.

Neutron: Implements services and associated libraries to provide on-demand, scalable, and technology-agnostic network abstraction.

Cinder: Implements services and libraries to provide on-demand, self-service access to Block Storage resources via abstraction and automation on top of other block storage devices.

Ceilometer: Reliably collects measurements of the utilization of the physical and virtual resources comprising deployed clouds, persists these data for subsequent retrieval and analysis, and triggers actions when defined criteria are met.

Heat: Orchestrates composite cloud applications using a declarative template format through an OpenStack-native REST API.

Trove: Provides scalable and reliable Cloud Database as a Service functionality for both relational and non-relational database engines, built on a fully featured and extensible open source framework.

Ironic: Provides an OpenStack service and associated Python libraries for managing and provisioning physical machines in a security-aware and fault-tolerant manner.

Sahara: Provides a scalable data processing stack and associated management interfaces.
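To make the “programmable infrastructure” point concrete, here is a minimal sketch of driving Nova, Glance, and Neutron through the openstacksdk Python client. The cloud name, image, flavor, and network names are assumptions for the example; a real environment would use its own clouds.yaml entry and resource names.

    import openstack

    # Assumes a clouds.yaml entry named "mycloud" that points at the RHEL-OSP environment.
    conn = openstack.connect(cloud="mycloud")

    image = conn.compute.find_image("rhel-7-guest")       # Glance image (assumed name)
    flavor = conn.compute.find_flavor("m1.small")         # Nova flavor
    network = conn.network.find_network("tenant-net")     # Neutron network (assumed name)

    server = conn.compute.create_server(
        name="demo-instance",
        image_id=image.id,
        flavor_id=flavor.id,
        networks=[{"uuid": network.id}],
    )
    server = conn.compute.wait_for_server(server)         # block until Nova reports the server ACTIVE
    print(server.status)

Everything the management layer does against OpenStack ultimately flows through APIs like these, which is what makes policy-driven automation on top of the platform possible.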

Red Hat CloudForms, a Cloud Management Platform based on the upstream ManageIQ project, provides hybrid cloud management of OpenStack, RHEV, Microsoft Hyper-V, VMware vSphere, and Amazon Web Services. This includes the ability to provide rich self-service with workflow and approval, discovery of systems, policy definition, capacity and utilization forecasting, and chargeback, among other capabilities. CloudForms is deployed as a virtual appliance and requires no agents on the systems it manages. CloudForms has a region and zone concept that allows for complex and federated deployments across large environments and geographies.

Red Hat Satellite is a systems management solution for managing the lifecycle of RHEV, RHEL-OSP, and CloudForms as well as any tenant workloads that are running on RHEV or RHEL-OSP. It can be deployed on bare metal or, as pictured in this diagram, as a virtual machine running on either RHEV or RHEL-OSP. Satellite supports a federated model through a concept called capsules.

CloudForms is a Cloud Management Platform that is deployed as a virtual appliance and supports a federated deployment. It is fully open source, as is every component in RHCI, and is based on the ManageIQ project.

One of the key technical benefits CloudForms provides is unified management of multiple providers. CloudForms splits providers into two types. First, there are infrastructure providers such as RHEV, vSphere, and Microsoft Hyper-V. CloudForms discovers and provides uniform information about these systems’ hosts, clusters, virtual machines, and virtual machine contents in a single interface. Second, there are cloud providers such as RHEL-OSP and Amazon Web Services. CloudForms provides discovery and uniform information for these providers about virtual machines, images, and flavors, similar to the infrastructure providers. All of this is done by leveraging the standard APIs provided by RHEV-M, SCVMM, vCenter, AWS, and OpenStack.
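The same uniform inventory is also available programmatically. The sketch below queries it through the ManageIQ-style REST API that CloudForms exposes; the appliance address and credentials are placeholders, and endpoint details can vary by version, so treat this as an illustration rather than a reference.

    import requests

    BASE = "https://cloudforms.example.com/api"   # placeholder appliance address
    AUTH = ("admin", "smartvm")                   # placeholder credentials

    # Providers of every type (RHEV, vSphere, Hyper-V, OpenStack, AWS) come back through one endpoint.
    providers = requests.get(f"{BASE}/providers", auth=AUTH, verify=False).json()

    # Virtual machines and instances are listed uniformly, regardless of which provider owns them.
    vms = requests.get(f"{BASE}/vms", auth=AUTH, verify=False,
                       params={"expand": "resources"}).json()

    for vm in vms.get("resources", []):
        print(vm.get("name"), vm.get("vendor"))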


Red Hat Satellite provides common systems management among all aspects of RHCI.

Red Hat Satellite provides content management, allowing users to synchronize content such as RPM packages for RHEV, RHEL-OSP, and CloudForms from Red Hat’s Content Delivery Network to an on-premises Satellite, reducing bandwidth consumption and providing an on-premises control point for content management across complex environments. Satellite also allows for configuration management via Puppet to ensure compliance and enforcement of proper configuration. Finally, Red Hat Satellite allows users to account for usage of assets through entitlement reporting and controls. Satellite provides these capabilities to RHEV, RHEL-OSP, and CloudForms, allowing administrators of RHCI to maintain their environment more effectively and efficiently. Equally important, Satellite also extends to the tenants of RHEV and RHEL-OSP to allow for systems management of Red Hat Enterprise Linux (RHEL) based tenants. Satellite is based on the upstream projects Foreman, Katello, Pulp, and Candlepin.


The combination of CloudForms and Satellite is very powerful for automating not only the infrastructure but also what happens inside the operating system. Let’s look at an example of how CloudForms can be used with Satellite to automate deployment and lifecycle management for tenants.

The automation engine in CloudForms is invoked when a user orders a catalog item from the CloudForms self-service catalog. CloudForms communicates with the appropriate infrastructure provider (in this case RHEV or RHEL-OSP, as pictured) to ensure that the infrastructure resources are created. At the same time it also ensures the appropriate records are created in Satellite so that the proper content and configuration will be applied to the system. Once the infrastructure resources are created (such as a virtual machine), they are connected to Satellite, where they receive the appropriate content and configuration. Once this is complete, the service in CloudForms is updated with the appropriate information to reflect the state of the user’s request, allowing them access to a fully compliant system with no manual interaction during configuration. Ongoing updates of the virtual machine resources can be performed by the end user or the administrator of the Satellite, depending on the customer’s needs.
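The flow can be summarized in pseudocode. Every class below is a hypothetical stand-in for the corresponding CloudForms automate method, provider call, or Satellite API rather than actual product code; it exists only to show the ordering of the steps described above.

    from dataclasses import dataclass

    @dataclass
    class Request:                       # what the self-service catalog order carries
        service_id: int
        template: str
        content_view: str

    class Provider:                      # stands in for RHEV or RHEL-OSP
        def create_vm(self, template: str) -> str:
            print(f"provisioning VM from {template}")
            return "demo-vm-01"

    class Satellite:                     # stands in for the Satellite API
        def create_host_record(self, name: str, content_view: str) -> None:
            print(f"registering {name} against {content_view}")
        def is_configured(self, name: str) -> bool:
            return True                  # real code would poll until content and config are applied

    class CloudForms:                    # stands in for the CloudForms service object
        def update_service(self, service_id: int, state: str) -> None:
            print(f"service {service_id} -> {state}")

    def fulfill_catalog_request(req: Request, infra: Provider, sat: Satellite, cf: CloudForms) -> None:
        vm = infra.create_vm(req.template)                       # 1. create the infrastructure resources
        sat.create_host_record(vm, req.content_view)             # 2. pre-create the Satellite host record
        while not sat.is_configured(vm):                         # 3. wait for content and configuration
            pass
        cf.update_service(req.service_id, state="Provisioned")   # 4. report back to the requester

    fulfill_catalog_request(Request(1, "rhel7-template", "rhel-7-base"),
                            Provider(), Satellite(), CloudForms())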


This is another way of looking at how the functional areas of the workflow are divided in RHCI. Items such as the service catalog, quota enforcement, approvals, and workflow are handled in CloudForms, the cloud management platform. Even so, infrastructure-specific mechanisms such as Heat templates, virtual machine templates, PXE, or even ISO-based deployment are used by the cloud management platform whenever possible. Finally, systems management provides further customization within the operating system itself that is not covered by infrastructure-specific provisioning systems. With this approach, users can separate operating system configuration from the infrastructure platform, increasing portability. Likewise, operational decisions are decoupled from the infrastructure platform and placed in the cloud management platform, allowing for greater flexibility and increased modularity.


Common management is a big benefit that RHCI brings to organizations, but it doesn’t stop there. RHCI is bringing together the benefits of shared services to reduce complexity for organizations. Identity is one of the services that can be made common across RHCI through the use of Identity Management (IdM), which is included in RHEL. All components of RHCI can be configured to talk to IdM, which in turn can be used to authenticate and authorize users. Alternatively, and perhaps more frequently, a trust is established between IdM and Active Directory to allow for authentication via Active Directory. By providing a common identity store between the components of RHCI, administrators can ensure compliance through the use of access controls and auditing.


Similar to the benefits of shared identity, RHCI is bringing together a common network fabric for both traditional datacenter virtualization and Infrastructure-as-a-Service (IaaS) models. As part of the latest release of RHEV, users can now discover Neutron networks and begin exposing them to guest virtual machines (in tech preview). By building a common network fabric, organizations can simplify their architecture: they no longer need to learn two different methods for creating and maintaining virtual networks.


Finally, image storage can now be shared between RHEV and RHEL-OSP. This means that templates and images stored in Glance can be used by RHEV. This reduces the amount of storage required to maintain the images and allows administrators to update images in one store instead of two, increasing operational efficiency.


One often misunderstood area is which capabilities are provided by which components of RHCI. RHEV and OpenStack provide similar capabilities with different paradigms, centered on compute, network, and storage virtualization. Many of the capabilities often associated with a private cloud are actually features found in the combination of Satellite and CloudForms. These include capabilities provided by CloudForms such as discovery, chargeback, monitoring, analytics, quota enforcement, capacity planning, and governance. They also include capabilities that revolve around managing inside the guest operating system, in areas such as content management, software distribution, configuration management, and governance.


Organizations are often not certain about the best way to view OpenStack in relation to their datacenter virtualization solution. Two common approaches are considered. In one approach, datacenter virtualization is placed underneath OpenStack. This approach has several negative aspects. First, it places OpenStack, which is intended for scale-out, over an architecture that is designed for scale-up in RHEV, vSphere, Hyper-V, etc. This gives organizations limited scalability and, in general, an expensive infrastructure for running a scale-out IaaS private cloud. Second, layering OpenStack, a cloud infrastructure platform, on top of yet another infrastructure management solution makes hybrid cloud management very difficult, because Cloud Management Platforms such as CloudForms are not designed to relate OpenStack to a virtualization manager and then to underlying hypervisors. Conversely, by using a Cloud Management Platform as the aggregator between the infrastructure platforms of OpenStack, RHEV, vSphere, and others, it is possible to achieve a working approach to hybrid cloud management and use OpenStack in the massively scalable way it is designed to be used.


RHCI is meant to complement existing investments in datacenter virtualization. For example, users often utilize CloudForms and Satellite to gain efficiencies within their vSphere environment while simultaneously increasing the cloud-like capabilities of their virtualization footprints through self-service and automation. Once users are comfortable with the self-service aspects of CloudForms, it is simple to supplement vSphere with lower cost or specialized virtualization providers like RHEV or Hyper-V.

This can be done by leveraging the virt-v2v tools (shown as option 1 in the diagram above) that perform binary conversion of images in an automated fashion from vSphere to other platforms. Another approach is to standardize environment builds within Satellite (shown as option 2 in the diagram above) to allow for portability during creation of a new workload. Both of these methods are supported based on an organization’s specific requirements.


For scale-out applications running on an existing datacenter virtualization solution such as VMware vSphere, RHCI can provide organizations with the tools to identify (discover) and move (automated v2v conversion) workloads to Red Hat Enterprise Linux OpenStack Platform, where they can take advantage of massive scalability and reduced infrastructure costs. This again can be done through binary conversion (option 1) using CloudForms or through standardization of environments (option 2) using Red Hat Satellite.


So far I have focused primarily on the integrations between the components of Red Hat Cloud Infrastructure to illustrate how Red Hat is bringing together a comprehensive Infrastructure-as-a-Service solution, but RHCI also integrates with many existing technologies within the management domain. From configuration management solutions such as Puppet, Chef, and Ansible, to many popular Configuration Management Databases (CMDBs), as well as networking providers and IPAM systems, CloudForms and Satellite are extremely extensible to ensure that they can fit into existing environments.


And of course, Red Hat Enterprise Linux forms the basis of both Red Hat Enterprise Virtualization and Red Hat Enterprise Linux OpenStack Platform, leading to one of the largest ecosystems of certified compute, network, and storage partners in the industry.

RHCI is a complete and fully open source Infrastructure-as-a-Service private cloud. It has industry-leading integration between datacenter virtualization and an OpenStack-based private cloud in the areas of networking, storage, and identity. A common management framework makes for efficient operations and unparalleled automation that can also span other providers. Finally, by leveraging RHEL along with systems management and a Cloud Management Platform based on upstream communities, it has a large ecosystem of hardware and software partners for both infrastructure and management.

I hope this post helped you gain a better understanding of RHCI at a more technical level. Feel free to comment, and be sure to follow me on Twitter @jameslabocki.


Deploying OpenShift with CloudForms Presentation

Slides from my talk on Deploying OpenShift with CloudForms can be downloaded here.

Red Hat’s Open Hybrid Cloud Architecture

IT consumers have traditionally satisfied their requirements for services by utilizing their internal IT departments. The type of service consumed has evolved over time; most recently, consumption has been dominated by virtual machines. More advanced internal IT departments may offer even more service-oriented consumption to IT consumers in the form of standardized application stacks running on top of virtual machines. The process of procuring such services could take days, weeks, or even months for an internal IT department. The length of procurement can be attributed to complex architectures as well as business requirements, such as governance and compliance, that IT organizations are required to follow.


In the search to innovate faster, IT consumers have begun to recognize the value of public clouds for providing the services they need more quickly. Whether it is Infrastructure as a Service (IaaS) or Platform as a Service (PaaS), IT consumers began to utilize these public cloud providers. They enjoyed increased agility and a consumption model that allowed them to utilize computing as a utility. While using public cloud providers is appropriate for certain workloads, IT organizations have struggled to maintain compliance, governance, and control over business-critical assets in the public cloud. At the same time, IT consumers’ expectations of what the IT organization should provide have dramatically increased.


The increased expectations of the IT consumer are being transferred to the IT organization in the form of increased demands. Demand for self-service, elastic infrastructure and applications, the ability to deliver environments more rapidly, and accelerated application development are some of the specific demands being driven by the experience the IT consumer has had with the public cloud. The IT consumer is losing patience with IT organizations, and the threat of shadow IT is real. IT organizations would like to deliver these capabilities to the IT consumer while maintaining their operational practices over the delivery of such capabilities. IT organizations also recognize that the shift to a next-generation IT architecture is an opportunity to make strategic decisions to both simplify their IT architecture and address concerns that have plagued the architectures of the past. These strategic decisions include embracing an architecture that provides choice, agility, and openness, and that leverages existing investments.


Choice is important to Operations and Developers
Operations teams need the ability to deploy workloads on a choice of infrastructure providers and to seamlessly manage those workloads once deployed. Without the ability to easily deploy and move workloads from one infrastructure provider to another, operations teams are stuck using a single infrastructure provider. Being locked into a single infrastructure provider prevents operations teams from leveraging innovation from other providers or choosing the right provider for the right workload. Development teams also require choice. A broad choice of languages and frameworks, and support for polyglot, poly-framework applications, is an expectation of development teams, because each language and framework provides important innovations that can be assembled to solve complex business problems efficiently in ways that a single language alone cannot.

Agility and Openness are critical to maintaining relevance with the IT consumer

Agility will allow IT organizations to remain relevant to the IT consumer. By being able to quickly provide new languages, frameworks, and solutions to complex problems, IT operations can become a strategic partner to the IT consumer instead of being viewed as simply an expense. In choosing a next-generation IT architecture that is based on openness, IT organizations can ensure that future innovation can be more easily adopted and that future investments are more easily consumable than today’s architectures.
Leverage existing investments alongside a Next Generation Architecture

IT organizations have invested heavily in their current IT architectures, and the next-generation IT architecture needs to leverage those existing investments. Meanwhile, IT consumers are requesting specific capabilities from IT organizations, as a result of their experience with public cloud providers, that are not available in current IT architectures.

Red Hat’s Open Hybrid Cloud Architecture provides these capabilities today while balancing the strategic requirements IT organizations need in their next-generation IT architecture. It all starts with a federated, highly scalable, and extensible operational management platform for cloud, which provides discovery, capacity planning, reporting, audit and compliance, analytics, monitoring, orchestration, policy, and chargeback functionality. These capabilities are extended throughout all aspects of the Open Hybrid Cloud Architecture to provide a unified approach to management through a single pane of glass.


Within the infrastructure layer, existing investments in physical systems and datacenter virtualization platforms can be unified with the next-generation IT architectures of IaaS private and public clouds. Existing investments in application architectures can be managed in their existing environments through a single pane of glass, which also provides insight into the next-generation IT architectures of private and public PaaS platforms.


The Open Hybrid Cloud Architecture’s operational management platform goes beyond a rudimentary understanding of deploying workloads to providers. The operational management platform is extended to provide deep integration with automation frameworks in both the infrastructure and application layers. By leveraging these automation frameworks, the Open Hybrid Cloud Architecture allows for new levels of flexibility and efficiency in workload placement and analysis. This approach of deep integration between loosely coupled systems forms the basis by which IT organizations can provide the IT consumer with the capabilities they have come to expect from public clouds without building a cloud silo.


Elastic Infrastructure

Red Hat’s Open Hybrid Cloud Architecture provides elastic infrastructure via its Infrastructure as a Service (IaaS) component and related infrastructure automation capabilities. The Open Hybrid Cloud Architecture not only provides elastic infrastructure via IaaS, but also provides consistent management across a broad range of other infrastructure, including physical systems, datacenter virtualization, and IaaS public clouds. This allows IT organizations to extend the benefits of cloud computing across their existing investments and provides a single-pane-of-glass view of their resources. This comprehensive view of all computing resources provides the information IT organizations need to optimize workload placement. For example, with capacity and utilization data from workloads running on datacenter virtualization platforms, IT organizations can determine which workloads are the best candidates for moving to IaaS clouds, both private and public. Without a comprehensive view of all computing resources, elastic infrastructure based on IaaS alone is yet another management silo for IT organizations.

Elastic Applications
The benefits of cloud economics cannot be realized through elastic infrastructure alone; they require applications and application platforms as well. Next-generation applications must be designed with the core tenets of cloud computing in mind in order to take advantage of underlying elastic infrastructure. Red Hat’s Open Hybrid Cloud Architecture therefore provides a Platform as a Service (PaaS) component that allows users to develop elastic applications that expand and contract based on user demand, letting IT organizations realize the full benefit of cloud economics.


Self-Service

Red Hat’s Open Hybrid Cloud Architecture provides a single self-service portal that allows application designers to publish services that span multiple cloud service models to catalogs for consumption. This unique capability is made possible by rich automation and workflow engines within Red Hat’s cloud management platform and open APIs within Red Hat’s datacenter virtualization, Infrastructure as a Service (IaaS), and Platform as a Service (PaaS) components. Once a service is published to a catalog, users can deploy complex applications in an easy-to-use, browser-based interface and begin working immediately. IT organizations can leverage automation and workflow capabilities, combined with capacity and utilization data, to intelligently place workloads on the resources best suited to their performance, capacity, or security requirements. Finally, the Open Hybrid Cloud Architecture provides the ability for IT organizations to perform showback and chargeback across both Infrastructure as a Service (IaaS) and Platform as a Service (PaaS) platforms through a single pane of glass. This suits the IT consumer’s preference for the utility consumption model they have grown accustomed to when using public cloud providers.


Accelerated Application Development

The Open Hybrid Cloud Architecture allows for faster application development by providing automation at both the application and infrastructure layers, ensuring that accelerated application development can be realized across an IT organization’s entire base of investments. Without a solid understanding of both the application and infrastructure layers, the benefits of accelerated application development are limited to the development paradigms of a single layer. Furthermore, without support for heterogeneity within both the infrastructure and application layers, choice is limited. By allowing a broad choice of applications and frameworks, and a broad choice of infrastructure providers to run those applications on, IT organizations gain choice that leads to lower costs, better performance, and competitive advantages. With a unified understanding of both applications and infrastructure, changes made to a service during development can be captured and integrated into existing change management systems. This combination of automation and control at all layers, across heterogeneous infrastructure and applications, provides accelerated application development throughout all resources within IT organizations.


Rapid Environment Delivery

Delivery of environments to IT consumers and to the development teams within IT operations is critical to accelerating application development. Without a holistic understanding of both the application lifecycle and the underlying infrastructure, delivery of environments will be inefficient or slow. For example, if the orchestration and provisioning of environments understands only application lifecycle concepts and lacks an understanding of the underlying infrastructure, then the use of infrastructure will not be optimized: placement of applications on the platforms that offer the best cost, performance, or security attributes would not be possible. Similarly, if the orchestration and provisioning of environments understands only infrastructure concepts, then it cannot automate the application lifecycle, leading to incomplete environments. The Open Hybrid Cloud Architecture’s provisioning and orchestration of environments understands both application lifecycle management and the underlying infrastructure. This provides end users of the environment with an elevated user experience while giving operations teams maximum efficiency for hosting applications. With a firm understanding of both applications and infrastructure, the architecture allows for flexible and continuous best-fit placement of applications across deployment models. For example, certain parts of an application can run in a Platform as a Service (PaaS) while others run in virtual machines within the Infrastructure as a Service (IaaS), while still realizing the benefits, such as rapid elasticity, of the higher-order PaaS cloud model.
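Best-fit placement can be thought of as matching an application's requirements against the attributes of each available provider. The following sketch is purely illustrative Python; the provider attributes, the constraint model, and the tie-breaking on cost are assumptions, not product behavior.

```python
# Minimal sketch of best-fit placement: rank deployment targets against an
# application's cost, performance, and security needs. All values assumed.

providers = {
    "paas":         {"cost": 1, "performance": 2, "security_zone": "internal"},
    "private_iaas": {"cost": 2, "performance": 3, "security_zone": "internal"},
    "public_iaas":  {"cost": 1, "performance": 2, "security_zone": "external"},
}

def best_fit(requirements, providers):
    """Pick the cheapest provider that satisfies the hard constraints."""
    eligible = {
        name: attrs for name, attrs in providers.items()
        if attrs["security_zone"] == requirements["security_zone"]
        and attrs["performance"] >= requirements["min_performance"]
    }
    return min(eligible, key=lambda n: eligible[n]["cost"]) if eligible else None

app = {"security_zone": "internal", "min_performance": 2}
print(best_fit(app, providers))  # 'paas'
```

Because the orchestration layer understands both the application's needs and every provider's attributes, this decision can be re-evaluated continuously rather than fixed at initial deployment.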

The Open Hybrid Cloud Architecture

Red Hat’s Open Hybrid Cloud Architecture provides the capabilities IT consumers and IT organizations want with the strategic characteristics they need. By delivering Self-Service, Elastic Applications and Infrastructure, Accelerated Application Development, and Rapid Environment Delivery, IT organizations can meet the rising expectations of IT consumers. At the same time, the Open Hybrid Cloud Architecture meets the strategic needs of choice, agility, and openness. The architecture also allows IT organizations to leverage their existing investments and provides an evolutionary path for adoption.

Where to begin

Each IT organization is different, but there are some actions any IT organization can take to get started on the journey towards an Open Hybrid Cloud Architecture. By understanding all of its assets and the capacity and utilization metrics for them, an IT organization can better determine which components of the Open Hybrid Cloud Architecture will yield the most benefit. Once assets and their capacity and utilization metrics are well understood, a phased plan for implementing the components of the Open Hybrid Cloud Architecture can be created.

Download the OpenOffice Draw File used in the diagrams here

Building an Open Hybrid Cloud

I’m often asked to explain what an Open Hybrid Cloud is to those interested in cloud computing, and those interested in cloud computing usually include everyone. For these situations, a few high level slides on what cloud computing is, why it’s important to build an Open Hybrid Cloud, and what an Open Hybrid Cloud requires are usually enough.

In contrast to everyone interested in cloud, there are system administrators, developers, and engineers (my background) who want to understand the finer details of how multi-tenancy, orchestration, and cloud brokering are performed. Given my background, these are some of my favorite conversations to have. Many of the articles I’ve written on this blog drill down to this level of discussion.

Lately, however, I’ve observed that some individuals, who fall somewhere in between the geeks and everyone else, have a hard time understanding what is next for their IT architectures. To be clear, understanding the finer points of resource control and software defined networking is really important in order to ensure you are making the right technology decisions, but it’s equally important to understand the next steps you can take to arrive at the architecture of the future (an Open Hybrid Cloud).

With that in mind, let’s explore how an Open Hybrid Cloud architecture can allow organizations to evolve toward greater flexibility, standardization, and automation on their choice of providers at their own pace. Keep in mind, you may see this same basic architecture proposed by other vendors, but do not be fooled: there are fundamental differences in the way problems are solved in a true Open Hybrid Cloud. You can test whether or not a cloud is truly an Open Hybrid Cloud by comparing and contrasting it against the tenets of an Open Hybrid Cloud as defined by Red Hat. I hope to share more on those differences later; let’s get started on the architecture first.

Side note – this is not meant as the only evolutionary path organizations can take on their way to an Open Hybrid Cloud architecture. There are many paths to Open Hybrid Cloud! 🙂

[Diagram: physical datacenter]

In the beginning [of x86] there were purely physical architectures. These architectures were often rigid, slow to change, and underutilized. The slowness and rigidity weren’t necessarily because physical hardware is difficult to re-purpose quickly, or because you can’t achieve close to the same level of automation with physical hardware as you can with virtual machines. In fact, I’m fairly certain many public cloud providers today could argue they have no need for a hypervisor at all for their PaaS offerings. Rather, purely physical architectures were slow to change and rigid because operational processes were often neglected, and because the multiple hardware platforms that quickly accumulated within a single organization lacked well defined points of integration that IT organizations could automate against. The under-utilization of physical architectures could largely be attributed to operating systems [on x86] that could not sufficiently serve multiple applications within a single operating system (we won’t name names, but we know who you are).

Side note – for the purposes of keeping our diagram simple, we will group the physical systems in with the virtualized systems. Also, we won’t add all the complexity that was likely added due to changing demands on IT. For example, an acquisition of company X – two teams being merged together, etc. You can assume wherever you see architecture there are multiple types, different versions, and different administrators at each level.

[Diagram: virtualized datacenter]

Virtualized architectures provided a solution to the problems faced in physical architectures. Two areas in which virtualized architectures provided benefits are higher utilization of physical hardware resources and greater availability for workloads. Virtualized architectures did this by decoupling workloads from the physical resources they were utilizing. They also provided a single clean interface through which operations could request new virtual resources, and it became apparent that this interface could be used to provide users outside IT operations with self-service capabilities. While this new self-service capability was possible, virtualized architectures did NOT account for automation and other aspects of operational efficiency, key ingredients in providing end users with on demand access to compute resources while still maintaining some semblance of the control required by IT operations.

[Diagram: enterprise cloud management]

In order to combine operational efficiency with self-service for end users, IT organizations adopted technologies that could provide both. In this case I refer to them as Enterprise Cloud Management tools, though each vendor has its own name for them. These tools give IT organizations the ability to deliver IT as a Service to their end customers. They also provide greater strategic flexibility for IT operations by decoupling the self-service experience from the underlying infrastructure; enforcing this separation allows IT operations to change the underlying infrastructure without impacting the end user experience. Enterprise Cloud Management coupled with virtualization also provides greater operational efficiency, automating many routine tasks, ensuring compliance, and dealing with the VM sprawl that often occurs once the barrier to obtaining environments is lowered for end users.

Datacenter virtualization has many benefits, and coupled with Enterprise Cloud Management it begins to define how an IT organization can deliver services to its customers with greater efficiency and flexibility. The next generation of developers, however, has begun to recognize that applications can be architected in ways that are less constrained by physical hardware requirements as well. In the past, developers might build applications around a relational database that required certain characteristics of hardware (or virtual hardware) to achieve a given level of scale. With newer development architectures, such as NoSQL data stores, applications are built to scale horizontally and are designed to be stateless from the ground up. This change in development greatly impacts the requirements developers place on IT operations. Applications developed in this new methodology are built with the assumption that the underlying operating system can be destroyed at any time and the application must continue to function.
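The design shift is easiest to see in code. The sketch below contrasts the older habit of keeping state in the local process or filesystem with the stateless pattern of pushing state to an external, replicated store. `ExternalStore` is a deliberately generic stand-in for illustration, not a specific product or client library.

```python
# Minimal sketch of the stateless pattern: the instance keeps nothing it
# cannot afford to lose, so the OS underneath can disappear at any time.
# `ExternalStore` stands in for any replicated service (a NoSQL store, etc.).

class ExternalStore:
    """Illustrative stand-in for a shared, replicated data service."""
    _data = {}
    def put(self, key, value): self._data[key] = value
    def get(self, key): return self._data.get(key)

store = ExternalStore()

def handle_request(user_id, cart_item):
    # Old pattern: append to a local file or in-memory session, which is
    # lost when the VM dies. New pattern: write through to the external
    # store so any surviving instance can serve the next request.
    cart = store.get(user_id) or []
    cart.append(cart_item)
    store.put(user_id, cart)
    return cart

print(handle_request("u1", "book"))  # ['book']
print(handle_request("u1", "pen"))   # ['book', 'pen']
```

Because no request depends on which instance served the previous one, instances (and the operating systems under them) become disposable.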

[Diagram: datacenter virtualization and private cloud]

For these types of applications, datacenter virtualization is overkill. This realization has led to the emergence of private cloud architectures, which leverage commodity hardware to provide [largely] stateless environments for applications. Private cloud architectures provide the same benefits as virtualized datacenter architectures at a lower cost and with the promise of re-usable services within the private cloud. With Enterprise Cloud Management firmly in place, it is much easier for IT organizations to move workloads to the architecture that best suits them at the best price. In the future, it is likely that the lines between datacenter virtualization and private clouds will become less distinct, eventually leading to a single architecture that accounts for the benefits of both.

[Diagram: hybrid cloud]

As was previously mentioned, Enterprise Cloud Management allows IT organizations to deploy workloads to the architecture which best suits them. With that in mind, one of the lowest cost options for hosting “cloud applications” is in a public IaaS provider. This allows businesses to choose from a number of public cloud providers based on their needs. It also allows them to have capacity on demand without investing heavily in their own infrastructure should they have variable demand for workloads.

[Diagram: Platform as a Service]

Finally, IT organizations would like to continue increasing operational efficiency while also increasing the ability of their end customers to meet their requirements without manual intervention from IT operations. While the “cloud applications” hosted on a private cloud remove some of the operational complexity of application development, deployment, and management, they don’t address many of the steps required to provide a running application development environment beyond the operating system. Tasks such as configuring application servers for best performance, scaling based on demand, and managing application namespaces are still manual. In order to provide further automation and squeeze even higher rates of utilization out of each operating system, IT organizations can adopt a Platform as a Service (PaaS). By adopting a PaaS architecture, organizations achieve at the application layer many of the same benefits that virtualization provided for the operating system.

This was just scratching the surface of how customers are evolving from the traditional datacenter to the Open Hybrid Cloud architecture of the future. What does Red Hat provide to enable these architectures? Not surprisingly, Red Hat has community and enterprise products for each one of them. The diagram and table below show the community and enterprise offerings at each layer.

[Diagram: Red Hat product names]

Area                          Community            Enterprise
Physical Architectures        Fedora               Red Hat Enterprise Linux
Datacenter Virtualization     oVirt                Red Hat Enterprise Virtualization
Hybrid Cloud Management       Aeolus/Katello       CloudForms/ManageIQ EVM
Private Cloud                 OpenStack            Stay Tuned
Public Cloud                  Red Hat's Certified Cloud Provider Program
Platform as a Service         OpenShift Origin     OpenShift Enterprise
Software-based Storage        Gluster              Red Hat Storage

Areas of Caution

While I don’t have the time to explore every differentiating aspect of a truly Open Hybrid Cloud in this post, I would like to focus on two trends that IT organizations should be wary of as they design their next generation architectures.

[Diagram: scenario 1]

The first trend to be wary of is developers utilizing services that are only available in the public cloud (often a single public cloud) to develop new business functionality. This limits flexibility of deployment and increases lock-in to a particular provider. It’s ironic, because many of these same developers moved from building applications with specific hardware requirements to horizontally scaling, stateless architectures; you would think they would know better. In my experience, though, developers are concerned with delivering business value, not with the cost to strategic flexibility at which they deliver it. That cost is something that deeply concerns IT operations. It’s important to highlight that any application developed within a public cloud that leverages that public cloud’s services is exclusive to that public cloud. This may be acceptable to organizations, as long as they believe the public cloud they choose will forever be the leader and they never want to re-use their applications in other areas of their IT architecture.

[Diagram: scenario 2]

This is why it is imperative to provide the same level of self-service via Enterprise Cloud Management as the public cloud providers do in their native tools. It’s also important to begin developing portable services that mirror the functionality of a single public provider but are portable across multiple architectures, including private clouds and any public cloud provider that can provide Infrastructure as a Service (IaaS). A good example is using Gluster (Red Hat Storage) to provide a consistent storage experience across both on-premise and off-premise architectures, as opposed to using a storage service that is only available in one public cloud.
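The portability argument can be illustrated with a trivial example: an application that writes through a mounted Gluster volume uses only standard file I/O, so the same code runs on-premise or on any IaaS provider where the volume is mounted. The mount point and environment variable below are assumptions made for illustration, not documented defaults.

```python
# Minimal sketch: because Gluster presents a POSIX filesystem, the
# application code below is identical on-premise and in any IaaS cloud.
# The mount point is an assumed example, not a product default.
import os

GLUSTER_MOUNT = os.environ.get("APP_DATA_DIR", "/mnt/gluster/appdata")

def save_report(name, content):
    """Plain file I/O -- no provider-specific storage SDK required."""
    path = os.path.join(GLUSTER_MOUNT, name)
    with open(path, "w") as f:
        f.write(content)
    return path

# The same call works wherever the volume is mounted:
# save_report("q3-summary.txt", "revenue up 4%")
```

Contrast this with code written against a single provider's object storage SDK, which must be rewritten or wrapped before it can run anywhere else.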

[Diagram: scenario 3]

The second trend to be wary of is datacenter virtualization vendors advocating hybrid cloud solutions that offer limited portability because of the vendor’s interest in preserving proprietary hardware or software platforms within the datacenter. A good example of this trend would be a single vendor advocating that replication from a single type of storage frame be performed to a single public cloud provider’s storage solution. This approach screams lock-in beyond that of using just the public cloud, and should be avoided for the same reasons.

[Diagram: scenario 4]

Instead, IT organizations should seek to solve problems such as this through the use of portable services. These services allow for greater choice of public cloud providers while also allowing for greater choice of hardware AND software providers within the virtualized datacenter architecture.

I hope you found this information useful and I hope you visit again!