TripleO (Director) Components in Detail

In our previous post we introduced Red Hat OpenStack Platform Director. We showed that at the heart of Director is TripleO, short for “OpenStack On OpenStack”. TripleO is an OpenStack project that aims to utilise OpenStack itself as the foundation for deploying OpenStack. In other words, TripleO uses native OpenStack components and their respective APIs to configure, deploy, and manage OpenStack environments.

The major benefit of utilising these existing APIs with Director is that they’re well documented, they go through extensive integration testing upstream, and they are among the most mature components in OpenStack. For those already familiar with the way OpenStack works, it’s much easier to understand how TripleO (and therefore Director) works. Feature enhancements, security patches, and bug fixes are automatically inherited by Director, without us having to play catch-up with the community.

With TripleO, we refer to two clouds. The first is the undercloud: a command-and-control cloud, a smaller OpenStack environment whose sole purpose is to bootstrap a larger production cloud. That production cloud is the overcloud, where tenants and their respective workloads reside. Director is sometimes treated as synonymous with the undercloud; Director bootstraps the undercloud OpenStack deployment and provides the necessary tooling to deploy an overcloud.
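Because the undercloud is itself a small but regular OpenStack deployment, you can talk to it with the same tools you would use against any other cloud. The following Python sketch (ours, not from the original post) uses openstacksdk to list the undercloud’s Ironic bare-metal inventory and the Heat stack that represents the overcloud; the clouds.yaml entry name is an assumption made for illustration.

```python
# A minimal sketch, not a full deployment workflow: it assumes openstacksdk is
# installed and that a clouds.yaml entry named "undercloud" (an assumption made
# for illustration) points at the undercloud's Keystone endpoint.
import openstack

undercloud = openstack.connect(cloud="undercloud")

# The undercloud's Ironic service holds the bare-metal inventory that will
# become the overcloud nodes.
for node in undercloud.baremetal.nodes():
    print(node.name, node.provision_state, node.power_state)

# From the undercloud's point of view, the deployed overcloud is simply a
# Heat stack.
for stack in undercloud.orchestration.stacks():
    print(stack.name, stack.status)
```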

undercloud vs overcloud

Introduction to Red Hat OpenStack Platform Director

Those familiar with OpenStack already know that deployment has historically been a bit challenging. That’s mainly because deployment involves a lot more than just getting the software installed; it’s about architecting your platform to use existing infrastructure as well as planning for future scalability and flexibility. OpenStack is designed to be a massively scalable platform, with distributed components on a shared message bus and database backend. For most deployments, this distributed architecture consists of Controller nodes for cluster management, resource orchestration, and networking services; Compute nodes, where the virtual machines (the workloads) run; and Storage nodes, where persistent storage is managed.

The Red Hat recommended architecture for fully operational OpenStack clouds includes predefined and configurable roles that are robust, resilient, ready to scale, and capable of integrating with a wide variety of existing third-party technologies. We do this by leveraging the logic embedded in Red Hat OpenStack Platform Director (based on the upstream TripleO project).

With Director, you’ll use OpenStack language to create a truly Software Defined Data Center. You’ll use Ironic drivers for the initial bootstrapping of servers, Neutron networking to define management IPs and provisioning networks, Heat to document the setup of your server room, and Nova to monitor the status of your control nodes. Because Director comes with predefined scenarios optimized from our 20 years of Linux know-how and best practices, you’ll also learn how OpenStack is configured out of the box for scalability, performance, and resilience.
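To make that concrete, here is a hedged sketch of what kicking off an overcloud deployment from the undercloud can look like. The openstack overcloud deploy --templates command comes from the TripleO CLI; the environment file name below is a placeholder, and a real deployment also needs registered Ironic nodes, uploaded deployment images, and network configuration first.

```python
# Hedged sketch: launch an overcloud deployment from the undercloud node.
# Assumes the TripleO CLI (python-tripleoclient) is installed and that the
# undercloud credentials (stackrc) have already been sourced in this shell.
import subprocess

subprocess.run(
    [
        "openstack", "overcloud", "deploy",
        "--templates",              # use the stock tripleo-heat-templates
        "-e", "my-overrides.yaml",  # placeholder: site-specific Heat environment file
    ],
    check=True,  # raise CalledProcessError if the deployment command fails
)
```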

Why do kids in primary school learn multiplication tables when we all have calculators? Why should you learn how to use OpenStack in order to install OpenStack? Mastering these pieces is a good thing for your IT department and your own career, because they provide a solid foundation for your organization’s path to a Software Defined Data Center. Eventually, you’ll have all of your data center configuration in text files stored in a Git repository or on a USB drive, which you can easily replicate in another data center.

In a series of upcoming blog posts, we’ll explain how Director has been built to accommodate the business requirements and challenges of deploying OpenStack and managing it over the long term. If you are really impatient, remember that we publish all of our documentation in the Red Hat OpenStack Platform documentation portal (link to version 8).

How connection tracking in Open vSwitch helps OpenStack performance

Written by Jiri Benc, Senior Software Engineer, Networking Services, Linux kernel, and Open vSwitch

By introducing a connection tracking feature in Open vSwitch, made possible by the latest Linux kernel, we greatly simplified the maze of virtual network interfaces on OpenStack compute nodes and improved networking performance. This feature will appear soon in Red Hat OpenStack Platform.

Introduction

It goes without saying that in the modern world we need firewalling to protect machines from hostile environments. Any non-trivial firewalling requires keeping track of the connections to and from the machine; this is called “stateful firewalling”. Indeed, even a rule as basic as “don’t allow machines on the Internet to connect to this machine, while allowing the machine itself to connect to servers on the Internet” requires a stateful firewall. This applies to virtual machines as well, and obviously any serious cloud platform needs such protection.
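To illustrate the idea, the sketch below programs a minimal “allow outbound, block unsolicited inbound” policy using Open vSwitch’s connection-tracking support (the ct() action and ct_state matches). This is a generic illustration rather than the exact rules OpenStack installs, and the bridge name and port number are assumptions.

```python
# Hedged sketch: stateful "allow outbound, block unsolicited inbound" rules
# using Open vSwitch connection tracking. The bridge name ("br0") and port
# number (1 = the VM's interface) are assumptions for illustration only.
import subprocess

FLOWS = [
    # Untracked IP traffic is first run through conntrack, then hits table 1.
    "table=0,priority=100,ip,ct_state=-trk,actions=ct(table=1)",
    # Non-IP traffic is switched normally.
    "table=0,priority=10,actions=normal",
    # New connections initiated by the VM (port 1) are committed and allowed.
    "table=1,priority=100,in_port=1,ip,ct_state=+trk+new,actions=ct(commit),normal",
    # Established traffic in either direction is allowed.
    "table=1,priority=100,ip,ct_state=+trk+est,actions=normal",
    # Everything else (for example, new connections arriving from outside) is dropped.
    "table=1,priority=10,actions=drop",
]

for flow in FLOWS:
    subprocess.run(["ovs-ofctl", "add-flow", "br0", flow], check=True)
```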

Who is Testing Your Cloud?

Co-Authored with Dan Sheppard, Product Manager, Rackspace

With test-driven development, continuous integration/continuous deployment, and DevOps practices now the norm, most organizations understand the importance of testing their applications.

But what about the cloud those applications are going to live on? Too many companies miss this critical step, leaving gaps in their operations that can lead to production issues, API outages, upgrade problems, and general instability of the cloud.

It all begs the question: “Do you even test?”

At Rackspace, our industry-leading support teams take a proactive approach to operations, and that begins with detailed and comprehensive testing, so that not only your applications but also your cloud is ready for your production workloads.

Critical Collaboration

For Rackspace Private Cloud Powered by Red Hat, we collaborate closely with Red Hat; we test the upstream OpenStack code as well as the open source projects we leverage for our deployment, such as Ceph and Red Hat OpenStack Platform Director. This is done in a variety of ways, such as sharing test cases upstream with the community via Tempest, creating and tracking bugs, and contributing bug fixes upstream.
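As a simplified illustration of the kind of API smoke check that Tempest automates at much larger scale, the sketch below runs a few read-only calls against core services. It uses openstacksdk directly rather than the Tempest framework, and the cloud name is an assumption made for illustration.

```python
# Hedged sketch of a tiny API smoke check, in the spirit of what Tempest
# automates. Uses openstacksdk rather than the Tempest framework; the
# clouds.yaml entry name "mycloud" is an assumption.
import openstack

conn = openstack.connect(cloud="mycloud")

# A few read-only calls that exercise core service APIs end to end.
checks = {
    "compute (Nova)": lambda: list(conn.compute.flavors()),
    "image (Glance)": lambda: list(conn.image.images()),
    "network (Neutron)": lambda: list(conn.network.networks()),
}

for name, call in checks.items():
    try:
        resources = call()
        print(f"{name}: OK ({len(resources)} resources visible)")
    except Exception as exc:
        print(f"{name}: FAILED ({exc})")
```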

OpenStack Summit Austin: Day 4

Hello again from Austin, Texas where the fourth day of the main OpenStack Summit has come to a close. While there are quite a few working sessions and contributor meet-ups on Friday, Thursday marks the last official day of the main summit event. The exhibition hall closed its doors around lunch time, and the last of the vendor sessions occurred later in the afternoon. As the day concluded, many attendees were already discussing travel plans for OpenStack Summit Barcelona in October!

Before we get ahead of ourselves, however, day 4 still offered a jam-packed agenda. As on the first three days of the event, Red Hat speakers led quite a few interesting and well-attended sessions.

OpenStack Summit Austin: Day 3

Hello again from Austin, Texas, where the third day of OpenStack Summit has come to a close. As with the first two days of the event, there was plenty of news, interesting sessions, great discussions on the show floor, and more. All would likely agree that the 13th OpenStack Summit has been a Texas-sized success so far!

Similar to day 1 and day 2 of the event, Red Hat had several exciting announcements to pass along. The first press release to hit the wire detailed additional customer traction with Red Hat OpenStack Platform. Yesterday, it was announced that Verizon, NASA’s Jet Propulsion Laboratory (JPL), and Cambridge University had all selected Red Hat OpenStack Platform as the backbone of their cloud initiatives. Today, we shared the news that several large organizations across Europe, including Fastweb, Paddy Power Betfair, and Produban, have deployed the technology as well and are experiencing great results for their businesses.  

Culture and technology can drive the future of OpenStack

“OpenStack in the future is whatever we expand it to,” said Red Hat Chief Technologist Chris Wright during his keynote at the OpenStack Summit in Austin. After watching several keynotes, including those from Gartner and AT&T, I attended other sessions during the course of the day, culminating in a session by Lauren E. Nelson, Senior Analyst at Forrester Research. Wright’s statement made me wonder what lies in store for OpenStack and where the OpenStack community (the “we” Wright referred to) would take it in the future. Several sessions in the Analyst track called out the factors behind the increased adoption of OpenStack, as well as the technological challenges encountered. But Nelson’s session brought it all home for me, especially her final slide: a call to action for enterprises at large to take key steps, including a cultural shift, that would ease the adoption of OpenStack and the principles behind it. Live from the OpenStack Summit, at the crossroads of culture and technology, let me explain how this intersection can take OpenStack to a new frontier.

Red Hat sees great potential for technological advances in OpenStack. NASA’s Jet Propulsion Laboratory (JPL) has built an OpenStack-based private cloud, saving significant time and resources spent on datacenters by modernizing its on-premises storage and server capacity and giving it the ability to support hundreds of JPL mission scientists and engineers. Red Hat has positioned OpenStack to be taken to a new frontier.

But it is not all technology.  

Culture matters — a message that came through in Nelson’s session.

OpenStack Summit Austin: Day 2

Hello again from Austin, Texas, where the second busy day of OpenStack Summit has come to a close. Not surprisingly, there was plenty of news, interesting sessions, great discussions on the show floor, and more.

Starting with some announcements: the University of Cambridge, one of the world’s oldest and most prestigious academic institutions, announced that it has selected Red Hat to support its OpenStack-based high performance computing (HPC) initiative. In addition to deploying Red Hat OpenStack Platform for its HPC-as-a-Service offering, the University of Cambridge also plans to collaborate with Red Hat to bring HPC capabilities to the upstream OpenStack community. To keep the research institution at the forefront of large-scale, big-data science, the university turned to its longtime partners Dell and Intel to help it create one of the world’s most energy-efficient datacenters. Initially, the university deployed OpenStack on a community-supported Linux during the proof-of-concept phase, but found that it needed a more reliable, integrated, and supported OpenStack platform for production deployment, leading it to Red Hat OpenStack Platform.

OpenStack Summit Austin: Day 1

We’re live from Austin, Texas, where the 13th semi-annual OpenStack Summit is officially underway! This event has come a long way from its very first gathering six years ago, when just 75 people came together to learn about OpenStack in its infancy. That’s a sharp contrast with the 7,000+ people in attendance here, at what marks Austin’s second OpenStack Summit, returning to where it all started!

The event kicked off in the morning with Jonathan Bryce, the Executive Director of the OpenStack Foundation, welcoming the crowd to the largest OpenStack Summit to date! Shortly after, Red Hat’s chief technologist, Chris Wright, gave a great keynote presentation discussing the overall success and impact OpenStack is having on real businesses and their bottom line. Along the way, Chris Emmons, director of network infrastructure at Verizon, joined Chris Wright on stage for a quick summary of Verizon’s own success with OpenStack for network functions virtualization. Rounding out the keynotes were the Foundation’s Super User awards, with AT&T taking the winning spot.

Meet Red Hat OpenStack Platform 8

Last week we marked the general availability of our Red Hat OpenStack Platform 8 release, the latest version of Red Hat’s highly scalable IaaS platform based on the OpenStack community “Liberty” release. A co-engineered solution that integrates the proven foundation of Red Hat Enterprise Linux with Red Hat’s OpenStack technology to form a production-ready cloud platform, Red Hat OpenStack Platform is becoming a gold standard for large production OpenStack deployments. Hundreds of global production deployments, and even more proofs of concept, are underway across the information, telecommunications, and financial sectors, as well as in large enterprises in general. Red Hat OpenStack Platform also benefits from a strong ecosystem of industry leaders for transformative network functions virtualization (NFV), software-defined networking (SDN), and more.

From Community Innovation to Enterprise Production

The path to delivering a production-ready cloud platform starts in the open source communities, which can typically innovate far more effectively than traditional R&D labs. At Red Hat we bring customers, partners, and developers into communities of purpose to solve shared problems together. Red Hat also contributes a significant amount of code to the OpenStack project, helping drive the community development that delivers the feature velocity enterprise customers need and a faster time to market than proprietary software. When useful OpenStack technology emerges, we test it, harden it, and make it more secure and reliable.