Deploying OpenShift with CloudForms Presentation

Slides from my talk on Deploying OpenShift with CloudForms can be downloaded here.

Red Hat’s Open Hybrid Cloud Architecture

IT consumers have traditionally satisfied their requirements for services through their internal IT departments. The type of service consumed has evolved over time; most recently, consumption has been dominated by virtual machines. More advanced internal IT departments may offer even more service-oriented consumption in the form of standardized application stacks running on top of virtual machines. Procuring such services from an internal IT department can take days, weeks, or even months. The length of procurement can be attributed to complex architectures as well as business requirements, such as governance and compliance, that IT organizations must follow.


In the search to innovate faster, IT consumers have begun to recognize the value of public clouds in more quickly providing the services they need. Whether for Infrastructure as a Service (IaaS) or Platform as a Service (PaaS), IT consumers began to utilize these public cloud providers. They enjoyed increased agility and a consumption model that allowed them to use computing as a utility. While public cloud providers are appropriate for certain workloads, IT organizations have struggled to maintain compliance, governance, and control over business-critical assets in the public cloud. At the same time, IT consumers' expectations of what IT organizations should provide have dramatically increased.


The increased expectations of the IT consumer are being transferred to the IT organization in the form of increased demands. Increased demand for self-service, elastic infrastructure and applications, the ability to more rapidly deliver environments, and accelerated application development are some of the specific demands being driven by the experience the IT consumer has had while using the public cloud. The IT consumer is losing patience with IT organizations and the threat of shadow IT organizations is real. IT organizations would like to deliver these capabilities to the IT consumer and would like to maintain their operational practices over the delivery of such capabilities. IT organizations also recognize that the shift to a next generation IT architecture is an opportunity to make strategic decisions to both simplify their IT architecture and address concerns that have been plaguing them in the architectures of the past. These strategic decisions include embracing an architecture that provides choice, agility, openness, and leverages existing investments.


Choice is important to Operations and Developers
Operations teams need the ability to deploy workloads on a choice of infrastructure providers and to seamlessly manage those workloads once deployed. Without the ability to easily deploy and move workloads from one infrastructure provider to another, operations teams are stuck with a single provider. Being locked in to a single infrastructure provider prevents operations teams from leveraging innovation from other providers or choosing the right provider for the right workload. Development teams also require choice. A broad choice of languages and frameworks, with support for polyglot, poly-framework applications, is an expectation of development teams because each language and framework provides important innovations that can be assembled to solve complex business problems efficiently in a way that a single language alone cannot.

Agility and Openness are critical to maintaining relevance with the IT consumer

Agility allows IT organizations to remain relevant to the IT consumer. By quickly providing new languages, frameworks, and solutions to complex problems, IT operations can become a strategic partner to the IT consumer instead of being viewed simply as an expense. By choosing a next generation IT architecture based on openness, IT organizations can ensure that future innovation can be more easily adopted and that future investments are more easily consumable than in today's architectures.
Leverage existing investments alongside a Next Generation Architecture

IT organizations have invested heavily in the current IT architectures and the next generation IT architecture needs to leverage those existing investments. Meanwhile, IT consumers are requesting specific capabilities from IT organizations as a result of their experience with public cloud providers that are not available in current IT architectures.

Red Hat’s Open Hybrid Cloud Architecture provides these capabilities today while balancing the strategic requirements IT organizations need in their next generation IT architecture. It all starts with a federated, highly scalable, and extensible operational management platform for cloud which provides discovery, capacity planning, reporting, audit and compliance, analytics, monitoring, orchestration, policy, and chargeback functionality. These capabilities are extended throughout all aspects of the Open Hybrid Cloud Architecture to provide a unified approach to management through a single pane of glass.


Within the infrastructure layer existing investments in physical systems and datacenter virtualization platforms can be unified with the next generation IT architectures of IaaS private and public clouds. Existing investments in application architectures can be managed in their existing environments through a single pane of glass which also provides insight into next generation IT architectures of private and public PaaS platforms.


The Open Hybrid Cloud Architecture's operational management platform goes beyond a rudimentary understanding of deploying workloads to providers. The operational management platform is extended to provide deep integration with automation frameworks in both the infrastructure and application layers. By leveraging these automation frameworks, the Open Hybrid Cloud Architecture allows for new levels of flexibility and efficiency in workload placement and analysis. This approach of deep integration between loosely coupled systems forms the basis by which IT organizations can provide the IT consumer with the capabilities they have come to expect from public clouds without building a cloud silo.


Elastic Infrastructure

Red Hat's Open Hybrid Cloud Architecture provides elastic infrastructure via its Infrastructure as a Service (IaaS) component and related Infrastructure Automation capabilities. The Open Hybrid Cloud Architecture not only provides elastic infrastructure via IaaS, but also provides consistent management across a broad range of other infrastructure including physical systems, datacenter virtualization, and IaaS public clouds. This allows IT organizations to leverage the benefits of cloud computing across their existing investments and provides a single pane of glass view of their resources. This comprehensive view of all computing resources provides the information IT organizations need to optimize workload placement. For example, with capacity and utilization data from workloads running in datacenter virtualization platforms, IT organizations can determine which workloads are the best targets for moving to IaaS clouds, both private and public. Without a comprehensive view of all computing resources, elastic infrastructure based on IaaS alone is yet another management silo for IT organizations.
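As a hypothetical illustration of this kind of placement analysis, the sketch below flags lightly utilized workloads as candidates for moving to an IaaS cloud. All names, record layouts, and thresholds here are invented for the sketch; they do not come from any Red Hat product API.

```python
# Hypothetical sketch: pick migration candidates from capacity-and-utilization
# data collected across datacenter virtualization platforms.

def migration_candidates(workloads, cpu_threshold=0.25, io_threshold=0.10):
    """Workloads with consistently low CPU and I/O utilization are often
    the safest first movers from datacenter virtualization to IaaS."""
    return [
        w["name"]
        for w in workloads
        if w["avg_cpu"] < cpu_threshold and w["avg_io"] < io_threshold
    ]

fleet = [
    {"name": "batch-reports", "avg_cpu": 0.08, "avg_io": 0.03},
    {"name": "oltp-db",       "avg_cpu": 0.71, "avg_io": 0.44},
    {"name": "intranet-web",  "avg_cpu": 0.12, "avg_io": 0.05},
]
print(migration_candidates(fleet))  # ['batch-reports', 'intranet-web']
```

In practice the utilization data would come from the operational management platform's metering, and the thresholds would be tuned per organization.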

Elastic Applications
The benefits of cloud economics cannot be realized through elastic infrastructure alone; they require applications and application platforms. Next generation applications must be designed with the core tenets of cloud computing in mind in order to take advantage of underlying elastic infrastructure. Red Hat's Open Hybrid Cloud Architecture provides a Platform as a Service (PaaS) component that allows users to develop elastic applications which expand and contract based on user demand, enabling IT organizations to realize the full benefit of cloud economics.



Red Hat's Open Hybrid Cloud Architecture provides a single self-service portal that allows application designers to publish services that span multiple cloud service models to catalogs for consumption. This unique capability is made possible by rich automation and workflow engines within Red Hat's cloud management platform and open APIs within Red Hat's datacenter virtualization, Infrastructure as a Service (IaaS), and Platform as a Service (PaaS) components. Once published to a catalog, users can deploy complex applications in an easy to use browser based interface and begin working immediately. IT organizations can leverage automation and workflow capabilities combined with capacity and utilization data to intelligently place workloads on the resources best suited to their performance, capacity, or security requirements. Finally, the Open Hybrid Cloud Architecture provides the ability for IT organizations to perform showback and chargeback across both Infrastructure as a Service (IaaS) and Platform as a Service (PaaS) platforms through a single pane of glass. This suits the IT consumer's preference for the utility consumption model they have grown accustomed to when using public cloud providers.
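To make the cross-platform showback idea concrete, here is a minimal sketch that aggregates metered IaaS VM-hours and PaaS gear-hours into a per-tenant bill. The record layout and per-unit rates are invented for illustration; rates are expressed in integer cents to keep the arithmetic exact.

```python
# Hypothetical per-unit rates, in cents, for two service models.
RATES = {"vm_hour": 12, "gear_hour": 4}

def showback(usage_records):
    """Aggregate cost per tenant across IaaS and PaaS usage records."""
    bills = {}
    for rec in usage_records:
        cost = rec["quantity"] * RATES[rec["unit"]]
        bills[rec["tenant"]] = bills.get(rec["tenant"], 0) + cost
    return bills

records = [
    {"tenant": "finance", "unit": "vm_hour",   "quantity": 100},  # IaaS usage
    {"tenant": "finance", "unit": "gear_hour", "quantity": 200},  # PaaS usage
    {"tenant": "hr",      "unit": "vm_hour",   "quantity": 50},
]
print(showback(records))  # {'finance': 2000, 'hr': 600}
```

The point of the sketch is that a single metering pipeline can price usage from both service models, which is what a single-pane-of-glass chargeback view requires.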


Accelerated Application Development

The Open Hybrid Cloud Architecture allows for faster application development by providing automation at both the application and infrastructure layers, ensuring that accelerated application development can be realized across an IT organization's entire base of investments. Without a solid understanding of both the application and infrastructure layers, the benefits of accelerated application development are limited to the development paradigms of a single layer. Furthermore, without support for heterogeneity within both infrastructure and application layers, choice is limited. By allowing a broad choice of applications and frameworks, and a broad choice of infrastructure providers to run them on, IT organizations gain choice that leads to lower costs, better performance, and competitive advantages. With a unified understanding of both applications and infrastructure, changes made to a service during development can be captured and integrated into existing change management systems. This combination of automation and control at all layers, across heterogeneous infrastructure and applications, provides accelerated application development throughout all resources within IT organizations.


Rapid Environment Delivery

Delivery of environments to IT consumers and to the development teams within IT operations is critical to accelerating application development. Without a holistic understanding of both the application lifecycle and the underlying infrastructure, delivery of environments will be inefficient or slow. For example, if the orchestration and provisioning of environments understands only application lifecycle concepts and lacks an understanding of the underlying infrastructure, then the use of infrastructure will not be optimized: placement of applications on the platforms that offer the best cost, performance, or security attributes would not be possible. Similarly, if the orchestration and provisioning of environments understands only infrastructure concepts, it cannot automate the application lifecycle, leading to incomplete environments. The Open Hybrid Cloud Architecture's provisioning and orchestration of environments understands both application lifecycle management and the underlying infrastructure. This provides end users of the environment with an elevated user experience while simultaneously giving operations teams maximum efficiency for hosting applications. With a firm understanding of both applications and infrastructure, the architecture allows for flexible and continuous best-fit placement of applications across deployment models: certain parts of an application can run in a Platform as a Service (PaaS) and others in virtual machines within the Infrastructure as a Service (IaaS), while still realizing the benefits, such as rapid elasticity, of the highest-order cloud model of PaaS.

The Open Hybrid Cloud Architecture

Red Hat's Open Hybrid Cloud Architecture provides the capabilities IT consumers and IT organizations want with the strategic characteristics they need. By delivering the capabilities of Self-Service, Elastic Applications and Infrastructure, Accelerated Application Development, and Rapid Environment Delivery, IT organizations can meet the rising expectations of IT consumers. At the same time, the Open Hybrid Cloud Architecture meets the strategic needs of choice, agility, and openness. The architecture also allows IT organizations to leverage their existing investments and provides an evolutionary path toward adoption.

Where to begin
Each IT organization is different, but there are some actions IT organizations can take to get started on the journey toward an Open Hybrid Cloud Architecture. By understanding all of its assets and their capacity and utilization metrics, an IT organization can better understand which components of the Open Hybrid Cloud Architecture will yield the most benefit. Once assets and their capacity and utilization metrics are well understood, a plan that implements the components of the Open Hybrid Cloud Architecture in phases can be created.

Download the OpenOffice Draw File used in the diagrams here

Building an Open Hybrid Cloud

I'm often asked to explain what an Open Hybrid Cloud is to those interested in cloud computing, which usually means everyone. For these situations, some high level slides on what cloud computing is, why it's important to build an Open Hybrid Cloud, and what the requirements of an Open Hybrid Cloud are usually suffice.

In contrast to everyone interested in cloud, there are system administrators, developers, and engineers (my background) who want to understand the finer details of how multi-tenancy, orchestration, and cloud brokering are performed. Given my background, these are some of my favorite conversations to have. As an example, many of the articles I've written on this blog drill down to this level of discussion.

Lately, however, I've observed that some individuals – who fall somewhere in between the geeks and everyone else – have a hard time understanding what is next for their IT architectures. To be clear, an understanding of the finer points of resource control and software defined networking is really important in order to ensure you are making the right technology decisions, but it's equally important to understand the next steps you can take to arrive at the architecture of the future (an Open Hybrid Cloud).

With that in mind, let's explore how an Open Hybrid Cloud architecture can allow organizations to evolve to greater flexibility, standardization, and automation on their choice of providers at their own pace. Keep in mind, you may see this same basic architecture proposed by other vendors, but do not be fooled – there are fundamental differences in the way problems are solved in a true Open Hybrid Cloud. You can test whether a cloud is truly an Open Hybrid Cloud by comparing and contrasting it against the tenets of an Open Hybrid Cloud as defined by Red Hat. I hope to share more on those differences later – let's get started on the architecture first.

Side note – this is not meant as the only evolutionary path organizations can take on their way to an Open Hybrid Cloud architecture. There are many paths to Open Hybrid Cloud! 🙂


In the beginning [of x86] there were purely physical architectures. These architectures were often rigid, slow to change, and underutilized. The slowness and rigidity wasn't necessarily because physical hardware is difficult to re-purpose quickly or because you can't achieve close to the same level of automation with physical hardware as with virtual machines. In fact, I'm fairly certain many public cloud providers today could argue they have no need for a hypervisor at all for their PaaS offerings. Rather, purely physical architectures were slow to change and rigid because operational processes were often neglected, and the multiple hardware platforms that quickly accumulated within a single organization lacked well defined points of integration against which IT organizations could automate. The underutilization of physical architectures can largely be attributed to operating systems [on x86] that could not sufficiently serve multiple applications within a single operating system (we won't name names, but we know who you are).

Side note – for the purposes of keeping our diagram simple, we will group the physical systems in with the virtualized systems. Also, we won’t add all the complexity that was likely added due to changing demands on IT. For example, an acquisition of company X – two teams being merged together, etc. You can assume wherever you see architecture there are multiple types, different versions, and different administrators at each level.


Virtualized architectures provided a solution to the problems faced in physical architectures, delivering benefits in two areas: higher utilization of physical hardware resources and greater availability for workloads. Virtualized architectures did this by decoupling workloads from the physical resources they were utilizing. They also provided a single clean interface by which operations could request new virtual resources, and it became apparent that this interface could be used to provide users outside IT operations with self-service capabilities. While this new self-service capability was possible, virtualized architectures did NOT account for automation and other aspects of operational efficiency, key ingredients in providing end users with on demand access to compute resources while still maintaining some semblance of the control required by IT operations.


In order to combine the benefits of operational efficiency with the ability for end users to utilize self-service, IT organizations adopted technologies that could provide these benefits. In this case, I refer to them as Enterprise Cloud Management tools, but each vendor has their own name for them. These tools give IT organizations the ability to provide IT as a Service to their end customers. They also provide greater strategic flexibility for IT operations by decoupling the self-service aspects from the underlying infrastructure. Enforcing this separation allows IT operations to change the underlying infrastructure without impacting the end user experience. Enterprise Cloud Management coupled with virtualization also provides greater operational efficiency, automating many routine tasks, ensuring compliance, and dealing with the VM sprawl that often occurs when the barrier to obtaining operating environments is lowered for end users.

Datacenter virtualization has many benefits, and coupled with Enterprise Cloud Management it begins to define how IT organizations can deliver services to their customers with greater efficiency and flexibility. The next generation of developers, however, has begun to recognize that applications can be architected in ways that are less constrained by physical hardware requirements. In the past, developers might develop applications using a relational database that required certain characteristics of hardware (or virtual hardware) to achieve a level of scale. Within new development architectures, such as NoSQL for example, applications are built to scale horizontally and are designed to be stateless from the ground up. This change greatly impacts the requirements that developers have of IT operations. Applications developed in this new methodology are built with the assumption that the underlying operating system can be destroyed at any time and the applications must continue to function.


For these types of applications, datacenter virtualization is overkill. This realization has led to the emergence of private cloud architectures, which leverage commodity hardware to provide [largely] stateless environments for applications. Private cloud architectures provide the same benefits as virtualized datacenter architectures at a lower cost and with the promise of re-usable services within the private cloud. With Enterprise Cloud Management firmly in place, it is much easier for IT organizations to move workloads to the architecture that best suits them at the best price. In the future, it is likely that the lines between datacenter virtualization and private clouds will become less distinct, eventually leading to a single architecture that accounts for the benefits of both.


As was previously mentioned, Enterprise Cloud Management allows IT organizations to deploy workloads to the architecture which best suits them. With that in mind, one of the lowest cost options for hosting “cloud applications” is in a public IaaS provider. This allows businesses to choose from a number of public cloud providers based on their needs. It also allows them to have capacity on demand without investing heavily in their own infrastructure should they have variable demand for workloads.


Finally, IT organizations would like to continue to increase operational efficiency while simultaneously increasing the ability of their end customers to achieve their requirements without manual intervention from IT operations. While the "cloud applications" hosted on a private cloud remove some of the operational complexity of application development, and ultimately deployment and management, they don't address many of the steps required to provide a running application development environment beyond the operating system. Tasks such as configuring application servers for best performance, scaling based on demand, and managing application namespaces are still manual. In order to provide further automation and squeeze even higher rates of utilization out of each operating system, IT organizations can adopt a Platform as a Service (PaaS). By adopting a PaaS architecture, organizations can achieve at the application layer many of the same benefits that virtualization provided for the operating system.

This was just scratching the surface of how customers are evolving from the traditional datacenter to the Open Hybrid Cloud architecture of the future. What does Red Hat provide to enable these architectures? Not surprisingly, Red Hat has community and enterprise products for each one of these architectures. The diagram below demonstrates the enterprise products that Red Hat offers to enable these architectures.


Area                          Community          Enterprise
Physical Architectures        Fedora             Red Hat Enterprise Linux
Datacenter Virtualization     oVirt              Red Hat Enterprise Virtualization
Hybrid Cloud Management       Aeolus/Katello     CloudForms/ManageIQ EVM
Private Cloud                 OpenStack          Stay Tuned
Public Cloud                  Red Hat's Certified Cloud Provider Program
Platform as a Service         OpenShift Origin   OpenShift Enterprise
Software-based Storage        Gluster            Red Hat Storage

Areas of Caution

While I don’t have the time to explore every differentiating aspect of a truly Open Hybrid Cloud in this post, I would like to focus on two trends that IT organizations should be wary of as they design their next generation architectures.


The first trend to be wary of is developers utilizing services that are only available in the public cloud (often a single public cloud) to develop new business functionality. This limits flexibility of deployment and increases lock-in to a particular provider. It's ironic, because many of the same developers moved from developing applications that required specific hardware to horizontally scaling and stateless architectures; you would think developers would know better. In my experience, though, how they deliver business value, and at what cost to strategic flexibility, is not a developer's concern. That cost of strategic flexibility is something that deeply concerns IT operations. It's important to highlight that any application developed within a public cloud that leverages that public cloud's services is exclusive to that public cloud. This may be acceptable to organizations, as long as they believe that the public cloud they choose will forever be the leader and they never want to re-use their applications in other areas of their IT architecture.


This is why it is imperative to provide the same level of self-service via Enterprise Cloud Management as the public cloud providers do in their native tools. It’s also important to begin developing portable services that mirror the functionality of a single public provider but are portable across multiple architectures – including private clouds and any public cloud provider that can provide Infrastructure as a Service (IaaS). A good example of this is the ability to use Gluster (Red Hat Storage) to provide a consistent storage experience between both on and off premise storage architectures as opposed to using a service that is only available in the public cloud.


The second trend to be wary of is datacenter virtualization vendors advocating for hybrid cloud solutions that offer limited portability due to their interest in preserving proprietary hardware or software platforms within the datacenter. A good example of this trend would be a single vendor advocating that replication from a single type of storage frame be performed to a single public cloud provider's storage solution. This approach screams lock-in beyond that of using just the public cloud and should be avoided for the same reasons.


Instead, IT organizations should seek to solve problems such as this through the use of portable services. These services allow for greater choice of public cloud providers while also allowing for greater choice of hardware AND software providers within the virtualized datacenter architecture.

I hope you found this information useful and I hope you visit again!

Elasticity in the Open Hybrid Cloud

Several months ago in my post on Open Hybrid PaaS I mentioned that OpenShift, Red Hat's PaaS, can autoscale gears to provide elasticity to applications. OpenShift scales gears on something it calls a node, which is essentially a virtual machine with OpenShift installed on it. One thing OpenShift doesn't focus on is scaling the underlying nodes. This is understandable, because a PaaS doesn't necessarily understand the underlying infrastructure, nor does it necessarily want to.

It's important that nodes can be autoscaled in a PaaS. I'd take this one step further and submit that it's important that operating systems can be autoscaled at the IaaS layer. This is partly because many PaaS solutions will be built atop an operating system. Even more importantly, Red Hat is all about enabling an Open Hybrid Cloud, and one of the benefits an Open Hybrid Cloud aims to deliver is cloud efficiency across an organization's entire datacenter, not just a part of it. If you need to statically deploy operating systems, you fail to achieve the efficiency of cloud across all of your resources. You also can't re-purpose or shift physical resources if you can't autoscale operating systems.

Requirements for a Project

The background above presents the basis for some requirements for an operating system auto-scaling project.

  1. It needs to support deploying across multiple virtualization technologies, whether a virtualization provider, an IaaS private cloud, or a public cloud.
  2. It needs to support deploying to physical hardware.
  3. It cannot be tied to any single vendor, PaaS, or application.
  4. It needs to be able to configure the operating systems deployed upon launch for handing over to an application.
  5. It should be licensed to promote reuse and contribution.


Here is an idea for a project that could solve such a problem, which I call “The Governor”.

Example Workflow


To explain the workflow:

  1. The application realizes it needs more resources. Monitoring of the application to determine whether it needs more resources is not within the scope of The Governor. This is by design as there are countless applications and each one of them has different requirements for scalability and elasticity. For this reason, The Governor lets the applications make the determination for when to request more resources. When the application makes this determination it makes a call to The Governor’s API.
  2. The call to the API involves the use of a certificate for authentication. This ensures that only applications that have been registered in the registry can interact with The Governor to request resources. If the certificate based authentication succeeds (the application is registered in The Governor), the workflow proceeds. If not, the application's request is rejected.
  3. Upon receiving an authenticated request for more resources, the certificate (which is unique) is run through the rules engine to determine the rules the application must abide by when scaling. This would include decision points such as which providers the application can scale on, how many instances it can consume, etc. If the scaling is not permitted by the rules (the maximum number of instances has been reached, etc.), a response is sent back to the application informing it that the request has been declined.
  4. Once the rules engine determines the appropriate action it calls the orchestrator which initiates the action.
  5. The orchestrator calls either the cloud broker, which can launch instances to a variety of virtualization managers and cloud providers, either private or public, or a metal as a service (MaaS), which can provision an operating system on bare metal.
  6. and 7.  The cloud broker or MaaS launches or provisions the appropriate operating system and configures it per the application's requirements.
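The steps above can be condensed into a sketch. Every class, method, provider name, and rule field here is invented for illustration; a real implementation would use TLS client certificates and a proper rules engine rather than a dictionary lookup.

```python
# Hypothetical sketch of The Governor's request flow: authenticate the
# caller, evaluate scaling rules, then hand off to an orchestrator.

class Governor:
    def __init__(self, registry, rules, orchestrator):
        self.registry = registry          # cert fingerprint -> application name
        self.rules = rules                # application name -> scaling policy
        self.orchestrator = orchestrator  # dispatches to cloud broker or MaaS

    def request_resources(self, cert, current_instances):
        app = self.registry.get(cert)            # step 2: authenticate
        if app is None:
            return "rejected: unregistered"
        policy = self.rules[app]                 # step 3: evaluate rules
        if current_instances >= policy["max_instances"]:
            return "declined: instance limit reached"
        provider = policy["providers"][0]        # step 4: decide on an action
        return self.orchestrator(app, provider)  # steps 5-7: provision

def orchestrate(app, provider):
    # A real orchestrator would call a cloud broker (for virtualization
    # managers and clouds) or metal-as-a-service (for bare metal).
    return f"provisioning {app} on {provider}"

gov = Governor(
    registry={"cert-abc": "webapp"},
    rules={"webapp": {"max_instances": 5,
                      "providers": ["private-iaas", "public-cloud"]}},
    orchestrator=orchestrate,
)
print(gov.request_resources("cert-abc", 2))  # provisioning webapp on private-iaas
print(gov.request_resources("cert-xyz", 0))  # rejected: unregistered
```

Note how monitoring stays out of scope, as described in step 1: the application itself decides when to call `request_resources`.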

Future Details

There are more details which need to be further developed:

  • How certificates are generated and applications are registered.
  • How application registration details, such as the images that need to be launched and the configurations that need to be implemented on them are expressed.
  • How the configured instance is handed back to the application for use by the application.

Where to grow a community?

It matters where this project will ultimately live and grow. A project such as this one would need broad, vibrant community support in order to gain the adoption needed to become a standard means of describing elasticity at the infrastructure layer. For this reason, a community with a large number of active participants and friendly licensing that promotes contribution should be its home.
