In our previous post we introduced Red Hat OpenStack Platform Director. We showed how at the heart of Director is TripleO, short for “OpenStack on OpenStack”. TripleO is an OpenStack project that aims to utilise OpenStack itself as the foundation for deploying OpenStack. In other words, TripleO advocates the use of native OpenStack components, and their respective APIs, to configure, deploy, and manage OpenStack environments.
The major benefit of utilising these existing APIs with Director is that they’re well documented, they go through extensive integration testing upstream, and they are among the most mature components in OpenStack. For those already familiar with the way OpenStack works, it’s much easier to understand how TripleO (and therefore Director) works. Feature enhancements, security patches, and bug fixes are automatically inherited by Director, without us having to play catch-up with the community.
With TripleO, we refer to two clouds. The first to consider is the undercloud: a command and control cloud in which a smaller OpenStack environment exists whose sole purpose is to bootstrap a larger production cloud. That larger cloud is known as the overcloud, and it is where tenants and their respective workloads reside. Director is sometimes treated as synonymous with the undercloud; Director bootstraps the undercloud OpenStack deployment and provides the necessary tooling to deploy an overcloud.
Continue reading “TripleO (Director) Components in Detail”
Those familiar with OpenStack already know that deployment has historically been a bit challenging. That’s mainly because deployment includes a lot more than just getting the software installed – it’s about architecting your platform to use existing infrastructure as well as planning for future scalability and flexibility. OpenStack is designed to be a massively scalable platform, with distributed components on a shared message bus and database backend. For most deployments, this distributed architecture consists of Controller nodes for cluster management, resource orchestration, and networking services, Compute nodes where the virtual machines (the workloads) are executed, and Storage nodes where persistent storage is managed.
The Red Hat recommended architecture for fully operational OpenStack clouds includes predefined and configurable roles that are robust, resilient, ready to scale, and capable of integrating with a wide variety of existing third-party technologies. We do this by leveraging the logic embedded in Red Hat OpenStack Platform Director (based on the upstream TripleO project).
With Director, you’ll use OpenStack language to create a truly Software Defined Data Center. You’ll use Ironic drivers for your initial bootstrapping of servers, and Neutron networking to define management IPs and provisioning networks. You will use Heat to document the setup of your server room, and Nova to monitor the status of your control nodes. Because Director comes with pre-defined scenarios optimized from our 20 years of Linux know-how and best practices, you will also learn how OpenStack is configured out of the box for scalability, performance, and resilience.
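That mapping onto native OpenStack services can be sketched with the standard OpenStack client run on the undercloud. This is only an illustrative sketch of the kind of workflow involved; the environment file shown is one commonly shipped with tripleo-heat-templates, and a real deployment will pass site-specific templates and options:

```shell
# Ironic manages the bare-metal inventory on the undercloud
openstack baremetal node list

# Provisioning and management networks are ordinary Neutron objects
openstack network list
openstack subnet list

# The overcloud itself is deployed as a Heat stack
openstack overcloud deploy --templates \
    -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml

# Deployed overcloud nodes appear as Nova instances, and the
# deployment as a Heat stack, on the undercloud
openstack server list
openstack stack list
```

Because the whole deployment is described by Heat templates and environment files, re-running the same `openstack overcloud deploy` command with the same inputs is how you update or scale the cloud later.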
Why do kids in primary school learn multiplication tables when we all have calculators? Why should you learn how to use OpenStack in order to install OpenStack? Mastering these pieces is a good thing for your IT department and your own career, because they provide a solid foundation for your organization’s path to a Software Defined Data Center. Eventually, you’ll have all your Data Center configuration in text files stored in a Git repository or on a USB drive that you can easily replicate within another data center.
In a series of upcoming blog posts, we’ll explain how Director has been built to accommodate the business requirements and challenges of deploying OpenStack and managing it over the long term. If you are really impatient, remember that we publish all of our documentation in the Red Hat OpenStack Platform documentation portal (link to version 8).
Continue reading “Introduction to Red Hat OpenStack Platform Director”
Written by Jiri Benc, Senior Software Engineer, Networking Services, Linux kernel, and Open vSwitch
By introducing a connection tracking feature in Open vSwitch, made possible by the latest Linux kernel, we greatly simplified the maze of virtual network interfaces on OpenStack compute nodes and improved their networking performance. This feature will appear soon in Red Hat OpenStack Platform.
It goes without saying that in the modern world we need firewalling to protect machines from hostile environments. Any non-trivial firewalling requires keeping track of the connections to and from the machine; this is called “stateful firewalling”. Indeed, even such a basic rule as “don’t allow machines on the Internet to connect to this machine, while allowing the machine itself to connect to servers on the Internet” requires a stateful firewall. This also applies to virtual machines, and obviously any serious cloud platform needs such protection.
Continue reading “How connection tracking in Open vSwitch helps OpenStack performance”