Enabling Keystone’s Fernet Tokens in Red Hat OpenStack Platform

As we learned in part one of this blog post, beginning with the OpenStack Kilo release a new token provider became available as an alternative to PKI and UUID. Fernet tokens are essentially an implementation of ephemeral tokens in Keystone: tokens are no longer persisted and therefore do not need to be replicated across clusters or regions.

“In short, OpenStack’s authentication and authorization metadata is neatly bundled into a MessagePacked payload, which is then encrypted and signed as a Fernet token. OpenStack Kilo’s implementation supports a three-phase key rotation model that requires zero downtime in a clustered environment.” (from: http://dolphm.com/openstack-keystone-fernet-tokens/)
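To get a feel for the mechanics, the Fernet format Keystone builds on comes from the Python cryptography library. The following sketch is illustrative only (it is not Keystone's code): it encrypts and signs a payload with one key, then shows that the token still decrypts after a new primary key is introduced, which is the idea behind zero-downtime key rotation.

    # Illustrative sketch only -- not Keystone's implementation. It shows the
    # Fernet primitive from the Python "cryptography" library that Keystone
    # builds on, and why staged key rotation keeps old tokens valid.
    from cryptography.fernet import Fernet, MultiFernet

    old_key = Fernet(Fernet.generate_key())
    token = old_key.encrypt(b"authorization metadata")  # encrypted and signed

    # Introduce a new primary key while keeping the old key for decryption,
    # analogous to promoting a staged key during Keystone's rotation.
    new_key = Fernet(Fernet.generate_key())
    keyring = MultiFernet([new_key, old_key])

    print(keyring.decrypt(token))                  # old tokens still validate
    print(keyring.encrypt(b"new token payload"))   # new tokens use the new key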

Continue reading “Enabling Keystone’s Fernet Tokens in Red Hat OpenStack Platform”

An Introduction to Fernet tokens in Red Hat OpenStack Platform

Thank you for joining me to talk about Fernet tokens. In this first of three posts on Fernet tokens, I’d like to go over the definition of OpenStack tokens, the different types and why Fernet tokens should matter to you. This series will conclude with some awesome examples of how to use Red Hat Ansible to manage your Fernet token keys in production.

First, some definitions …

What is a token? OpenStack tokens are bearer tokens, used to authenticate and validate users and processes in your OpenStack environment. Pretty much any time anything happens in OpenStack, a token is involved. The OpenStack Keystone service is the core service that issues and validates tokens. Users and software clients authenticate via the APIs, receive a token, and then present that token when requesting operations ranging from creating compute resources to allocating storage. Services like Nova or Ceph then validate that token with Keystone and either continue with or deny the requested operation. The following diagram shows a simplified version of this dance.

[Diagram: simplified token issuance and validation flow. Courtesy of the author.]
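As a rough sketch of the first half of that flow, here is how a Python client might obtain a token from Keystone using the keystoneauth1 library; the endpoint, credentials, and domain values below are placeholders, not values from any particular environment.

    # A minimal sketch of a client obtaining a bearer token from Keystone
    # with keystoneauth1. All endpoint and credential values are placeholders.
    from keystoneauth1.identity import v3
    from keystoneauth1 import session

    auth = v3.Password(
        auth_url="http://controller:5000/v3",   # placeholder Keystone endpoint
        username="demo",
        password="secret",
        project_name="demo",
        user_domain_id="default",
        project_domain_id="default",
    )
    sess = session.Session(auth=auth)

    token = sess.get_token()   # the bearer token a service will later validate
    print(token)

A service receiving a request then asks Keystone to validate that token (in the v3 API, a GET to /v3/auth/tokens with the token to check passed in the X-Subject-Token header) before carrying out the operation.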

Continue reading “An Introduction to Fernet tokens in Red Hat OpenStack Platform”

Using Ansible Validations With Red Hat OpenStack Platform – Part 3

In the previous two blog posts (Part 1 and Part 2) we demonstrated how to create a dynamic Ansible inventory file for a running OpenStack cloud. We then used that inventory to run Ansible-based validations with the ansible-playbook command from the CLI.

In the final part of our series, we demonstrate how to run those same validations using two new methods: the OpenStack workflow service, Mistral, and the Red Hat OpenStack Platform director UI.
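As a preview, the Mistral path wraps the same validation playbooks in a TripleO workflow, and the director UI drives those workflows behind the scenes. The sketch below kicks one off from Python via the openstack CLI; the workflow name and input keys are assumptions drawn from the tripleo-common workflows and may differ between releases.

    # A hedged sketch: trigger a validation through Mistral with the openstack
    # CLI. The workflow name and input keys are assumptions based on the
    # tripleo-common workflows and may differ between releases.
    import json
    import subprocess

    workflow_input = json.dumps({"validation_name": "undercloud-ram",
                                 "plan": "overcloud"})
    subprocess.run(
        ["openstack", "workflow", "execution", "create",
         "tripleo.validations.v1.run_validation", workflow_input],
        check=True,
    )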

Continue reading “Using Ansible Validations With Red Hat OpenStack Platform – Part 3”

Using Ansible Validations With Red Hat OpenStack Platform – Part 2

In Part 1 we demonstrated how to set up a Red Hat OpenStack Ansible environment by creating a dynamic Ansible inventory file (check it out if you’ve not read it yet!).

Next, in Part 2, we demonstrate how to use that dynamic inventory from the command line with the pre-written Ansible validation playbooks that ship with the product.
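For reference, the invocation boils down to pointing ansible-playbook at the dynamic inventory script and one of the shipped validations; the sketch below wraps that in Python. The inventory script location and the playbook path are assumptions based on the tripleo-validations packaging and may vary by release.

    # A hedged sketch of running one shipped validation against the dynamic
    # inventory. The inventory script and playbook paths are assumptions based
    # on the tripleo-validations package layout and may vary by release.
    import subprocess

    subprocess.run(
        [
            "ansible-playbook",
            "-i", "/usr/bin/tripleo-ansible-inventory",  # dynamic inventory script
            "/usr/share/openstack-tripleo-validations/validations/undercloud-ram.yaml",
        ],
        check=True,
    )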

Continue reading “Using Ansible Validations With Red Hat OpenStack Platform – Part 2”

Using Ansible Validations With Red Hat OpenStack Platform – Part 1

Ansible is helping to change the way admins look after their infrastructure. It is flexible, simple to use, and powerful. Ansible uses a modular structure to deploy controlled pieces of code against infrastructure, drawing on thousands of available modules that cover everything from server management to network switch configuration.

With recent releases of Red Hat OpenStack Platform, Ansible is included directly in the Red Hat OpenStack Platform subscription and is installed by default with Red Hat OpenStack Platform director.

In this three-part series you’ll learn ways to use Ansible to perform powerful pre- and post-deployment validations against your Red Hat OpenStack environment, using the validation playbooks that ship with recent Red Hat OpenStack Platform releases.
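Those validations run against a dynamic inventory of your undercloud and overcloud nodes, generated by a script that ships alongside them. As a hedged taste of what that looks like, the sketch below runs the inventory script and prints the host groups it reports; the script name and its presence on the undercloud are assumptions based on the tripleo-validations packaging.

    # A rough sketch: run the TripleO dynamic inventory script and list the
    # host groups it reports. Assumes tripleo-ansible-inventory is on the PATH
    # (it ships with the tripleo-validations packages on the undercloud).
    import json
    import subprocess

    raw = subprocess.check_output(["tripleo-ansible-inventory", "--list"])
    inventory = json.loads(raw)

    for group, data in inventory.items():
        if group == "_meta":   # Ansible's per-host variable section
            continue
        hosts = data.get("hosts", []) if isinstance(data, dict) else data
        print(group, hosts)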

Continue reading “Using Ansible Validations With Red Hat OpenStack Platform – Part 1”

Using Software Factory to manage Red Hat OpenStack Platform lifecycle

by Nicolas Hicher, Senior Software Engineer – Continuous Integration and Delivery

Software-Factory is a collection of services that provides a powerful platform to build software. It enables the same workflow used to develop OpenStack: Gerrit for code review, Zuul/Nodepool/Jenkins as a CI system, and Storyboard as a story and issue tracker. It also ensures a reproducible test environment with ephemeral Jenkins slaves.

In this video, Nicolas Hicher demonstrates how to use Software-Factory to manage a Red Hat OpenStack Platform 9 lifecycle, performing a deployment and an update in a virtual environment (within an OpenStack tenant).

Continue reading “Using Software Factory to manage Red Hat OpenStack Platform lifecycle”

Full Stack Automation with Ansible and OpenStack

Ansible offers great flexibility. Because of this, the community has figured out many useful ways to leverage Ansible modules and playbook structures to automate frequent operations on multiple layers, including with OpenStack.

In this blog we’ll cover the many use cases for Ansible, the most popular automation software, with OpenStack, the most popular cloud infrastructure software. We’ll help you understand how and why you should use Ansible to make your life easier, in what we like to call Full-Stack Automation.
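Ansible’s OpenStack modules are built on the shade library (whose functionality now lives in openstacksdk), which you can also call directly. As a rough illustration of what those modules do under the hood, the sketch below lists compute instances with openstacksdk; the clouds.yaml entry name is a placeholder.

    # A minimal sketch of what Ansible's OpenStack modules do under the hood,
    # using openstacksdk. Assumes a clouds.yaml entry named "mycloud".
    import openstack

    conn = openstack.connect(cloud="mycloud")

    # List existing compute instances, similar to a facts-gathering task.
    for server in conn.compute.servers():
        print(server.name, server.status)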

Continue reading “Full Stack Automation with Ansible and OpenStack”

Integrating classic IT with cloud-native

This is the fifth and final post in a series that delves deeper into the questions that IDC’s Mary Johnston Turner and Gary Chen considered in a recent IDC Analyst Connection. The fifth question asked:

What types of technologies are available to facilitate the integration of multiple generations of infrastructure and applications as hybrid cloud-native and conventional architectures evolve?

Mary and Gary write that “We expect that as these next-generation environments evolve, conventional and cloud-native infrastructure and development platforms will extend support for each other. As an example, OpenStack was built as a next-generation cloud-native solution, but it is now adding support for some enterprise features.”

This is one aspect of the integration. Today, it’s useful to draw a distinction between conventional and cloud-native infrastructures, in part because they often use different technologies and those technologies are changing at different rates. However, as projects and products that are important for many enterprise cloud-native deployments, such as OpenStack, mature, they’re starting to adopt features associated with enterprise virtualization and enterprise management.

Continue reading “Integrating classic IT with cloud-native”

Why cloud-native depends on modernization

This is the fourth in a series of posts that delves deeper into the questions that IDC’s Mary Johnston Turner and Gary Chen considered in a recent IDC Analyst Connection. The fourth question asked:

What about existing conventional applications and infrastructure? Is it worth the time and effort to continue to modernize and upgrade conventional systems?

In an earlier post in this series, I discussed how both the economics and the disruption associated with the wholesale replacement of existing IT systems make it infeasible under most circumstances. In their answer to this question, Mary and Gary highlight the need for these existing systems to work together with new applications. As they put it: “Much of the success of cloud-native applications will depend on how well conventional systems can integrate with modern applications and support the integration and performance requirements of cloud-native developers.”

Continue reading “Why cloud-native depends on modernization”

How cloud-native needs cultural change

This is the third in a series of posts that delves deeper into the questions that IDC’s Mary Johnston Turner and Gary Chen considered in a recent IDC Analyst Connection. The third question asked:

How will IT management skills, tools, and processes need to change [with the introduction of cloud-native architectures]?

Mary and Gary note that the move to hybrid architectures “switches the IT operations team’s priorities from maintaining specific components to ensuring the delivery of end-to-end services measured in terms of service-level agreements (SLAs).” They also note that there’s a huge cultural element. For example, “Line-of-business stakeholders will have to partner with IT operations and development staff, either individually or as part of collaborative DevOps groups, to ensure that services are implemented as expected and that test-and-release cycles are well integrated.”

Continue reading “How cloud-native needs cultural change”
