Using OpenStack: Building a Private Cloud with Managed Service Providers

Since 2011, when OpenStack was first released to the community, the following and momentum behind it have been amazing. In fact, it quickly became one of the fastest growing open source projects in the history of open source. Now, with nearly 700 community sponsors, over 600 different modules, and over 50,000 lines of code contributed, OpenStack has become the default platform of choice for much of today's private and public cloud infrastructure.

This kind of growth doesn't happen by chance. It's because businesses and organizations have experienced *real* benefits: greater efficiency, faster time to market, automated infrastructure management, and outright cost savings, to name just a few.

Continue reading “Using OpenStack: Building a Private Cloud with Managed Service Providers”

Red Hat OpenStack Platform and Tesora Database-as-a-Service Platform: What’s New

As OpenStack users build or migrate more applications and services for private cloud deployment, they are expanding their plans for how these deployments will be serviced by non-core, emerging components. Based on the April 2016 OpenStack User Survey (see page 35), Trove is among the top “as a service” non-core components that OpenStack users are deploying or plan to deploy on top of the core components. This comes as no surprise, as every application requires a database, and Trove provides OpenStack with an integrated Database-as-a-Service option that works smoothly with the core OpenStack services.
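To give a feel for what that integrated option looks like in practice, here is a minimal sketch of consuming Trove through the standard python-troveclient CLI; the instance name, flavor, datastore version, and credentials are placeholder values, and the exact flags can vary between releases:

    # List the database engines (datastores) this deployment offers
    $ trove datastore-list

    # Request a MySQL instance with a 5 GB volume, an initial database, and a user
    # (flavor, version, and credentials below are example values)
    $ trove create app-db m1.small --size 5 \
        --datastore mysql --datastore_version 5.6 \
        --databases appdb --users appuser:s3cretpass

    # Watch the instance until it reaches ACTIVE, then read its connection details
    $ trove show app-db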

Recently, Red Hat and Tesora jointly announced that we have collaborated to certify the Tesora Database as a Service (“DBaaS”) Platform on Red Hat OpenStack Platform. When we at Red Hat announced our strategic decision to focus our development and contribution efforts on the core OpenStack services, we did so with confidence, due in large part to our expanded relationship with Tesora. Tesora is a recognized thought leader and the top contributor to upstream OpenStack Trove. They understand the needs of the Trove community, but more importantly they have a reputation for understanding, and focusing on, the needs of those developing and supporting applications running in a heterogeneous database environment. Adding the Tesora DBaaS Platform as a certified workload on top of Red Hat OpenStack Platform addresses our customer requirements and provides an immediate, production-ready DBaaS option that can be deployed within their current Red Hat OpenStack Platform 8 and higher environments.

What’s New for Red Hat OpenStack Platform Users?  

Continue reading “Red Hat OpenStack Platform and Tesora Database-as-a-Service Platform: What’s New”

Red Hat Confirms Over 40 Accepted Sessions at OpenStack Summit Barcelona

This Fall’s 2016 OpenStack Summit in Barcelona, Spain is gearing up to be a fulfilling event. After some challenging issues with the voting system (which prevented direct URLs to each session), the Foundation has posted the final session agenda detailing the entire week’s schedule of events. Once again, I am thrilled to see the voting results of the greater community, with Red Hat sharing over 40 sessions of technology overviews and deep dives around OpenStack services for containers, storage, networking, compute, network functions virtualization (NFV), and much more.

As a Premier sponsor this Fall, Red Hat also has a full-day breakout room, where we plan to share additional product and strategy sessions. To learn more about Red Hat’s accepted general sessions, have a look at the details below. We’ll add the agenda details of our breakout soon! Also, be sure to visit us at our Marketplace booth to meet the team and check out one of our live demonstrations. The Marketplace kicks off on Monday evening during the booth crawl, 5:00 – 7:00pm. Finally, we’ll have several Red Hat engineers, product managers, consultants, and executives in attendance, so be sure to talk to your Red Hat representative to schedule an in-person meeting while there.

And in case you haven’t registered yet, visit our landing page for a discounted registration code to help get you to the event. We look forward to seeing you all again in Spain this October!

For more details on each session, click on the title below:

Continue reading “Red Hat Confirms Over 40 Accepted Sessions at OpenStack Summit Barcelona”

Thoughts on Red Hat OpenStack Platform and certification of Tesora Database as a Service Platform

When I think about open source software, Red Hat is the first name that comes to mind. At Tesora, we’ve been working to make our Database as a Service Platform available to Red Hat OpenStack Platform users, and now it is a Red Hat certified solution. Officially collaborating with Red Hat in the context of OpenStack, one of the fastest growing open source projects ever, is a tremendous opportunity.

This week, we announced that Red Hat has certified the Tesora Database as a Service (DBaaS) Platform on Red Hat OpenStack Platform. Mutual customers can operate database as a service with 15 different database types, knowing that they have been extensively tested in the Red Hat environment. They also have the confidence of knowing that their database software is running on Red Hat Enterprise Linux (RHEL) in an environment that is supported by Red Hat.

Continue reading “Thoughts on Red Hat OpenStack Platform and certification of Tesora Database as a Service Platform”

TripleO (Director) Components in Detail

In our previous post we introduced Red Hat OpenStack Platform Director. We showed how at the heart of Director is TripleO, short for “OpenStack on OpenStack”. TripleO is an OpenStack project that aims to utilise OpenStack itself as the foundation for deploying OpenStack. To clarify, TripleO advocates the use of native OpenStack components, and their respective APIs, to configure, deploy, and manage OpenStack environments.

The major benefit of utilising these existing APIs with Director is that they’re well documented, they go through extensive integration testing upstream, and they are the most mature components in OpenStack. For those who are already familiar with the way that OpenStack works, it’s a lot easier to understand how TripleO (and therefore, Director) works. Feature enhancements, security patches, and bug fixes are automatically inherited by Director, without us having to play catch-up with the community.

With TripleO, we refer to two clouds. The first is the undercloud: the command-and-control cloud, a smaller OpenStack environment whose sole purpose is to bootstrap a larger production cloud. That production cloud is the overcloud, where tenants and their respective workloads reside. Director is sometimes treated as synonymous with the undercloud; Director bootstraps the undercloud OpenStack deployment and provides the necessary tooling to deploy an overcloud.

[Figure: undercloud vs. overcloud]
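In practice that two-cloud split maps onto a two-step workflow with the TripleO client. The sketch below is only an outline: the environment file is a placeholder, and the exact options differ between releases:

    # On the Director node: install and configure the undercloud services
    $ openstack undercloud install

    # From the undercloud: ask Heat to deploy an overcloud from the shipped templates,
    # layering any site-specific environment files on top
    $ openstack overcloud deploy --templates \
        -e /home/stack/templates/my-site-settings.yaml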

Continue reading “TripleO (Director) Components in Detail”

Introduction to Red Hat OpenStack Platform Director

Those familiar with OpenStack already know that deployment has historically been a bit challenging. That’s mainly because deployment includes a lot more than just getting the software installed – it’s about architecting your platform to use existing infrastructure as well as planning for future scalability and flexibility. OpenStack is designed to be a massively scalable platform, with distributed components on a shared message bus and database backend. For most deployments, this distributed architecture consists of Controller nodes for cluster management, resource orchestration, and networking services; Compute nodes where the virtual machines (the workloads) are executed; and Storage nodes where persistent storage is managed.

The Red Hat recommended architecture for fully operational OpenStack clouds includes predefined and configurable roles that are robust, resilient, ready to scale, and capable of integrating with a wide variety of existing third-party technologies. We do this by leveraging the logic embedded in Red Hat OpenStack Platform Director (based on the upstream TripleO project).

With Director, you’ll use OpenStack language to create a truly Software Defined Data Center. You’ll use Ironic drivers for your initial bootstrapping of servers, and Neutron networking to define management IPs and provisioning networks. You will use Heat to document the setup of your server room, and Nova to monitor the status of your control nodes. Because Director comes with pre-defined scenarios optimized from our 20 years of Linux know-how and best practices, you will also learn how OpenStack is configured out of the box for scalability, performance, and resilience.
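As a rough illustration of how those services are driven in practice, the commands below follow the OSP 8-era tripleoclient workflow; instackenv.json and the scale counts are example values, and the exact commands and flags have changed between versions:

    # Register the bare-metal servers with Ironic from a JSON inventory file
    $ openstack baremetal import --json instackenv.json

    # Have Ironic inspect the hardware of the newly registered nodes
    $ openstack baremetal introspection bulk start

    # Deploy the overcloud: Heat orchestrates the templates, Nova and Ironic place
    # the roles onto nodes, and Neutron handles the provisioning networks
    $ openstack overcloud deploy --templates \
        --control-scale 3 --compute-scale 2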

Why do kids in primary school learn multiplication tables when we all have calculators? Why should you learn how to use OpenStack in order to install OpenStack? Mastering these pieces is a good thing for your IT department and your own career, because they provide a solid foundation for your organization’s path to a Software Defined Data Center. Eventually, you’ll have all your Data Center configuration in text files stored on a Git repository or on a USB drive that you can easily replicate within another data center.

In a series of upcoming blog posts, we’ll explain how Director has been built to accommodate the business requirements and the challenges of deploying OpenStack and managing it over the long term. If you are really impatient, remember that we publish all of our documentation in the Red Hat OpenStack Platform documentation portal (link to version 8).

Continue reading “Introduction to Red Hat OpenStack Platform Director”

How connection tracking in Open vSwitch helps OpenStack performance

Written by Jiri Benc, Senior Software Engineer, Networking Services, Linux kernel, and Open vSwitch

By introducing a connection tracking feature in Open vSwitch, made possible by the latest Linux kernel, we have greatly simplified the maze of virtual network interfaces on OpenStack compute nodes and improved networking performance. This feature will appear soon in Red Hat OpenStack Platform.

Introduction

It goes without saying that in the modern world, we need firewalling to protect machines from hostile environments. Any non-trivial firewalling requires keeping track of the connections to and from the machine. This is called “stateful firewalling”. Indeed, even such a basic rule as “don’t allow machines from the Internet to connect to the machine while allowing the machine itself to connect to servers on the Internet” requires a stateful firewall. This also applies to virtual machines. And obviously, any serious cloud platform needs such protection.
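To illustrate what “stateful” means here, the flows below sketch how that basic rule can be expressed with the Open vSwitch ct() action and ct_state matching; the bridge name and port number are placeholders, and the rules are deliberately simplified:

    # Send IP traffic that has not been through conntrack yet to the tracker, then to table 1
    $ ovs-ofctl add-flow br-int "table=0, priority=100, ip, ct_state=-trk, actions=ct(table=1)"

    # Allow new connections only when they originate from the VM's own port, and remember them
    $ ovs-ofctl add-flow br-int "table=1, priority=100, ip, ct_state=+trk+new, in_port=10, actions=ct(commit),NORMAL"

    # Allow packets that belong to an already established connection, in either direction
    $ ovs-ofctl add-flow br-int "table=1, priority=100, ip, ct_state=+trk+est, actions=NORMAL"

    # Drop unsolicited new connections arriving from outside
    $ ovs-ofctl add-flow br-int "table=1, priority=90, ip, ct_state=+trk+new, actions=drop"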

Continue reading “How connection tracking in Open vSwitch helps OpenStack performance”