Red Hat OpenStack Platform 12 Is Here!

We are happy to announce that Red Hat OpenStack Platform 12 is now Generally Available (GA).

This is Red Hat OpenStack Platform’s 10th release and is based on the upstream OpenStack release, Pike.

Red Hat OpenStack Platform 12 is focused on the operational aspects of deploying OpenStack. OpenStack has established itself as a solid technology choice, and with this release we are working hard to further improve usability and bring OpenStack and operators into harmony.


With operationalization in mind, let’s take a quick look at some of the biggest and most exciting features now available.

Continue reading “Red Hat OpenStack Platform 12 Is Here!”

Enabling Keystone’s Fernet Tokens in Red Hat OpenStack Platform

As we learned in part one of this series, beginning with the OpenStack Kilo release a new token provider became available as an alternative to PKI and UUID tokens. Fernet tokens are essentially an implementation of ephemeral tokens in Keystone. What this means is that tokens are no longer persisted and hence do not need to be replicated across clusters or regions.

“In short, OpenStack’s authentication and authorization metadata is neatly bundled into a MessagePacked payload, which is then encrypted and signed as a Fernet token. OpenStack Kilo’s implementation supports a three-phase key rotation model that requires zero downtime in a clustered environment.” (from: http://dolphm.com/openstack-keystone-fernet-tokens/)
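To make the rotation model concrete, here is a minimal sketch using the Python cryptography library, the same Fernet primitive that Keystone's provider builds on. It is illustrative only: Keystone itself manages its key repository under /etc/keystone/fernet-keys with the keystone-manage fernet_setup and fernet_rotate commands.

```python
# Illustrative sketch of Fernet key rotation using the Python "cryptography"
# library (the primitive Keystone's Fernet provider is built on). This is not
# Keystone code; Keystone manages its keys with `keystone-manage fernet_setup`
# and `keystone-manage fernet_rotate` under /etc/keystone/fernet-keys.
from cryptography.fernet import Fernet, MultiFernet

old_key = Fernet(Fernet.generate_key())   # previously primary key
new_key = Fernet(Fernet.generate_key())   # newly promoted primary key

# MultiFernet encrypts with the first key but can decrypt with any of them.
# This is what allows a clustered deployment to rotate keys with zero
# downtime: tokens issued before the rotation remain valid.
f = MultiFernet([new_key, old_key])

token = f.encrypt(b'{"user_id": "...", "project_id": "...", "expires": "..."}')
print(f.decrypt(token))  # payload recovered, no server-side persistence needed
```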

Continue reading “Enabling Keystone’s Fernet Tokens in Red Hat OpenStack Platform”

An Introduction to Fernet tokens in Red Hat OpenStack Platform

Thank you for joining me to talk about Fernet tokens. In this first of three posts on Fernet tokens, I’d like to go over the definition of OpenStack tokens, the different types and why Fernet tokens should matter to you. This series will conclude with some awesome examples of how to use Red Hat Ansible to manage your Fernet token keys in production.

First, some definitions …

What is a token? OpenStack tokens are bearer tokens, used to authenticate and validate users and processes in your OpenStack environment. Pretty much any time anything happens in OpenStack, a token is involved. The OpenStack Keystone service is the core service that issues and validates tokens. Users and software clients authenticate against Keystone via its API, receive a token, and then present that token when requesting operations ranging from creating compute resources to allocating storage. Services like Nova or Ceph then validate that token with Keystone before continuing with, or denying, the requested operation. The following diagram shows a simplified version of this dance.

[Diagram: a simplified view of token issuance and validation. Courtesy of the author.]
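For a feel of what that dance looks like in code, here is a minimal sketch using the keystoneauth1 library; the auth URL, credentials, and project values are placeholders, not anything taken from this post.

```python
# Minimal sketch of the token "dance" using keystoneauth1 (shipped with the
# OpenStack client libraries). Credentials and the auth URL are placeholders.
from keystoneauth1.identity import v3
from keystoneauth1 import session

auth = v3.Password(
    auth_url="http://controller:5000/v3",   # placeholder Keystone endpoint
    username="demo",
    password="secret",
    project_name="demo",
    user_domain_name="Default",
    project_domain_name="Default",
)
sess = session.Session(auth=auth)

# Keystone issues a token on first use; the session then sends it as the
# X-Auth-Token header on every request, and the receiving service (Nova in
# this example) validates it with Keystone before acting.
token = sess.get_token()
servers = sess.get("/servers", endpoint_filter={"service_type": "compute"})
print(token[:20], servers.status_code)
```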

Continue reading “An Introduction to Fernet tokens in Red Hat OpenStack Platform”

Using Red Hat OpenStack Platform director to deploy co-located Ceph storage – Part One

An exciting new feature in Red Hat OpenStack Platform 11 is full Red Hat OpenStack Platform director support for deploying Red Hat Ceph Storage directly on your overcloud compute nodes. Often called hyperconverged, or HCI (for Hyperconverged Infrastructure), this deployment model places the Red Hat Ceph Storage Object Storage Daemons (OSDs) and storage pools directly on the compute nodes.

Co-locating Red Hat Ceph Storage in this way can significantly reduce both the physical and financial footprint of your deployment without requiring any compromise on storage.
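For a rough sense of what this looks like from the director command line (the post itself walks through the details), a hyperconverged deploy is essentially a normal overcloud deploy plus an environment file that co-locates the Ceph OSD service on the Compute role. The template path and the extra site-specific file below are assumptions based on the stock tripleo-heat-templates packaging, not commands taken from the post.

```python
# Hypothetical sketch of a hyperconverged overcloud deploy from the director
# node. The environment file path is an assumption based on how
# tripleo-heat-templates is usually packaged; check your release's docs.
import subprocess

subprocess.run(
    [
        "openstack", "overcloud", "deploy", "--templates",
        "-e", "/usr/share/openstack-tripleo-heat-templates/"
              "environments/hyperconverged-ceph.yaml",
        "-e", "ceph-custom-config.yaml",   # hypothetical site-specific settings
    ],
    check=True,
)
```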


Continue reading “Using Red Hat OpenStack Platform director to deploy co-located Ceph storage – Part One”

Using Ansible Validations With Red Hat OpenStack Platform – Part 3

In the previous two blog posts (Part 1 and Part 2) we demonstrated how to create a dynamic Ansible inventory file for a running OpenStack cloud. We then used that inventory to run Ansible-based validations with the ansible-playbook command from the CLI.

In the final part of our series, we demonstrate how to run those same validations using two new methods: the OpenStack workflow service, Mistral, and the Red Hat OpenStack Platform director UI.
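As a preview of the Mistral method, a validation can be kicked off as a workflow execution from the undercloud CLI. The workflow name and the "validation_name" input key below are assumptions based on the upstream tripleo-validations project and may differ between releases.

```python
# Hypothetical sketch: trigger a single TripleO validation through Mistral
# by invoking the openstack CLI via subprocess. The workflow identifier and
# input key are assumptions; verify them against your release's documentation.
import json
import subprocess

execution_input = json.dumps({"validation_name": "undercloud-ram"})
subprocess.run(
    [
        "openstack", "workflow", "execution", "create",
        "tripleo.validations.v1.run_validation",
        execution_input,
    ],
    check=True,
)
```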


Continue reading “Using Ansible Validations With Red Hat OpenStack Platform – Part 3”

Using Ansible Validations With Red Hat OpenStack Platform – Part 2

In Part 1 we demonstrated how to set up a Red Hat OpenStack Ansible environment by creating a dynamic Ansible inventory file (check it out if you’ve not read it yet!).

Next, in Part 2 we demonstrate how to use that dynamic inventory with the included, pre-written Ansible validation playbooks from the command line.
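As a taste of what that looks like, running one of the shipped validations boils down to pointing ansible-playbook at the dynamic inventory script. The inventory and playbook paths below are assumptions based on where the tripleo-validations packages typically install them.

```python
# Sketch: run a pre-written validation playbook against the dynamic inventory.
# Paths are assumptions based on the usual packaging of
# tripleo-ansible-inventory and openstack-tripleo-validations.
import subprocess

subprocess.run(
    [
        "ansible-playbook",
        "-i", "/usr/bin/tripleo-ansible-inventory",   # dynamic inventory script
        "/usr/share/openstack-tripleo-validations/validations/undercloud-ram.yaml",
    ],
    check=True,
)
```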


Continue reading “Using Ansible Validations With Red Hat OpenStack Platform – Part 2”

Using Ansible Validations With Red Hat OpenStack Platform – Part 1

Ansible is helping to change the way admins look after their infrastructure. It is flexible, simple to use, and powerful. Ansible uses a modular structure to run controlled pieces of code against infrastructure, with thousands of available modules providing everything from server management to network switch configuration.

With recent releases of Red Hat OpenStack Platform, access to Ansible is included directly within the Red Hat OpenStack Platform subscription and installed by default with Red Hat OpenStack Platform director.

In this three-part series you’ll learn ways to use Ansible to perform powerful pre- and post-deployment validations against your Red Hat OpenStack environment, utilizing the special validation scripts that ship with recent Red Hat OpenStack Platform releases.
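To give a flavour of the dynamic inventory idea before diving in, the sketch below calls a dynamic inventory script and lists the host groups it reports. The script name is an assumption based on the tripleo-ansible-inventory command discussed in this series; any executable that emits Ansible's JSON inventory format behaves the same way.

```python
# Sketch: inspect what a dynamic Ansible inventory returns. The script name
# is an assumption (tripleo-ansible-inventory, as discussed in this series);
# Ansible only requires that it print inventory JSON when passed --list.
import json
import subprocess

raw = subprocess.run(
    ["tripleo-ansible-inventory", "--list"],
    check=True, capture_output=True, text=True,
).stdout

inventory = json.loads(raw)
for group, data in inventory.items():
    hosts = data.get("hosts", []) if isinstance(data, dict) else []
    print(f"{group}: {len(hosts)} host(s)")
```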


Continue reading “Using Ansible Validations With Red Hat OpenStack Platform – Part 1”

Using Software Factory to manage Red Hat OpenStack Platform lifecycle

by Nicolas Hicher, Senior Software Engineer – Continuous Integration and Delivery


Software-Factory is a collection of services that provides a powerful platform to build software. It enables the same workflow used to develop OpenStack: Gerrit for code review, Zuul/Nodepool/Jenkins as a CI system, and Storyboard as a story and issue tracker. It also ensures a reproducible test environment with ephemeral Jenkins slaves.

In this video, Nicolas Hicher demonstrates how to use Software-Factory to manage a Red Hat OpenStack Platform 9 lifecycle, performing a deployment and an update in a virtual environment (within an OpenStack tenant).

Continue reading “Using Software Factory to manage Red Hat OpenStack Platform lifecycle”

Install your OpenStack Cloud before lunchtime

Figure 1. The inner workings of QuickStart Cloud Installer

What if I told you that you can have your OpenStack Cloud environment setup before you have to stop for lunch?

Would you be surprised?

Could you do that today?

In most cases I am betting your answer would be “not possible, not even on your best day.” Not to worry: a solution is here, and it’s called the QuickStart Cloud Installer (QCI).

Let’s take a look at the background of where this Cloud tool came from, how it evolved and where it is headed.

 

Born from need

As products like Red Hat Cloud Suite emerge onto the technology scene, they exemplify the need for companies to be able to support infrastructure and application development use cases such as the following:

Continue reading “Install your OpenStack Cloud before lunchtime”

TripleO (Director) Components in Detail

In our previous post we introduced Red Hat OpenStack Platform Director. We showed how at the heart of Director is TripleO, short for “OpenStack on OpenStack”. TripleO is an OpenStack project that aims to utilise OpenStack itself as the foundation for deploying OpenStack. To clarify, TripleO advocates the use of native OpenStack components, and their respective APIs, to configure, deploy, and manage OpenStack environments.

The major benefit of utilising these existing APIs with Director is that they’re well documented, they go through extensive integration testing upstream, and they are the most mature components in OpenStack. For those who are already familiar with the way that OpenStack works, it’s a lot easier to understand how TripleO (and therefore, Director) works. Feature enhancements, security patches, and bug fixes are therefore automatically inherited into Director, without us having to play catch-up with the community.

With TripleO, we refer to two clouds. The first is the undercloud: the command-and-control cloud, a smaller OpenStack environment whose sole purpose is to bootstrap a larger production cloud. That larger cloud is the overcloud, where tenants and their respective workloads reside. Director is sometimes treated as synonymous with the undercloud; Director bootstraps the undercloud OpenStack deployment and provides the necessary tooling to deploy an overcloud.

[Diagram: undercloud vs. overcloud]
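In practice, that relationship shows up as two commands run on the undercloud node: one that installs the undercloud itself, and one that uses the undercloud's OpenStack APIs to deploy the overcloud. A minimal sketch follows; the flags, environment files, and node registration steps that a real deployment needs vary by release and are omitted here.

```python
# Sketch: the undercloud/overcloud split as seen from the CLI. The undercloud
# is installed first, then used to drive the overcloud deployment. Real
# deployments add environment files and node registration steps omitted here.
import subprocess

# 1. Bootstrap the command-and-control cloud (run as the stack user).
subprocess.run(["openstack", "undercloud", "install"], check=True)

# 2. Use the undercloud's OpenStack APIs to deploy the production overcloud.
subprocess.run(["openstack", "overcloud", "deploy", "--templates"], check=True)
```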

Continue reading “TripleO (Director) Components in Detail”
