We are happy to announce that Red Hat OpenStack Platform 11 is now Generally Available (GA).

Version 11 is based on the upstream OpenStack release Ocata, the 15th release of OpenStack. It brings a plethora of features, enhancements, bug fixes, documentation improvements, and security updates. Red Hat OpenStack Platform 11 contains the additional usability, hardening, and support that all Red Hat releases are known for. And with key enhancements to Red Hat OpenStack Platform director, the deployment tool, deploying and upgrading enterprise, production-ready private clouds has never been easier. 

So grab a nice cup of coffee or other tasty beverage and sit back as we introduce some of the most exciting new features in Red Hat OpenStack Platform 11!

Composable Upgrades

By far, the most exciting addition brought by Red Hat OpenStack Platform 11 is the extension of composable roles to now include composable upgrades.

But first, composable roles

As a refresher, a composable role is a collection of services that are grouped together to deploy the Overcloud’s main components. There are five default roles (Controller, Compute, BlockStorage, ObjectStorage, and CephStorage) allowing most common architectural scenarios to be achieved out of the box. Each service in a composable role is defined by an individual Heat template following a standardized approach that ensures services implement a basic set of input parameters and output values. With this approach these service templates can be more easily moved around, or composed, into a custom role. This creates greater flexibility around service placement and management.
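As a simplified sketch (the service name and Puppet profile below are hypothetical, and real templates carry more parameters), a composable service template follows this general shape, accepting the standard inputs and emitting a role_data output:

```yaml
heat_template_version: ocata

description: Example composable service template (hypothetical service).

# Standard inputs every composable service template accepts.
parameters:
  ServiceNetMap:
    type: json
  DefaultPasswords:
    type: json
  EndpointMap:
    type: json

# Standard output consumed by whichever role includes this service.
outputs:
  role_data:
    description: Role data for the example service.
    value:
      service_name: example_service
      config_settings:
        example::bind_host: {get_param: [ServiceNetMap, ExampleNetwork]}
      step_config: |
        include ::tripleo::profile::base::example
```

Because every service exposes the same parameters and the same role_data contract, director can drop it into any role without special-case wiring.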

And now, composable upgrades ...

Before composable roles, upgrades were managed via a large set of complex code to ensure all steps were executed properly. By decomposing the services into smaller, standardized modules, the upgrade logic can be moved out of that monolithic, complex script and into the service template directly. This is done by completely refactoring the upgrade procedure into modular snippets of Ansible code which Heat then integrates and orchestrates. To do this, each service’s template has a collection of Ansible plays to handle the upgrade steps and actions. Each Ansible play carries a tag that allows Heat to step through the code and execute it in a precise and controlled order. This is the same methodology used by Puppet and the “step_config" parameter already found in the “outputs” section of each service template.

Heat iterates through the roles and services and joins the services’ upgrade plays together into a larger playbook. It then executes the plays, by tag, moving through the upgrade procedure.

For example, take a look at Pacemaker’s upgrade_tasks section (from tripleo-heat-templates/puppet/services/pacemaker.yaml):

      upgrade_tasks:
        - name: Check pacemaker cluster running before upgrade
          tags: step0,validation
          pacemaker_cluster: state=online check_and_fail=true
          async: 30
          poll: 4
        - name: Stop pacemaker cluster
          tags: step2
          pacemaker_cluster: state=offline
        - name: Start pacemaker cluster
          tags: step4
          pacemaker_cluster: state=online
        - name: Check pacemaker resource
          tags: step4
          pacemaker_is_active:
            resource: "{{ item }}"
            max_wait: 500
          with_items: {get_param: PacemakerResources}
        - name: Check pacemaker haproxy resource
          tags: step4
          pacemaker_is_active:
            resource: haproxy
            max_wait: 500
          when: {get_param: EnableLoadBalancer}

Heat executes the plays for step0, then step1, then step2, and so on. This is just like running ansible-playbook with the -t or --tags option to run only the plays tagged with those values.

Composable upgrades help to support trustworthy lifecycle management of deployments by providing a stable upgrade path between supported releases. They offer simplicity and reliability to the upgrade process and the ability to easily control, run and customize upgrade logic in a modular and straightforward way.

Increased “Day 0” HA (Pacemaker) Service placement flexibility

New in version 11, deployments can use composable roles for all services. This means the remaining Pacemaker-managed services, such as RabbitMQ and Galera, which traditionally had to be co-located on a single controller node, can now be deployed as custom roles to any nodes. This allows operators to move core service layers to dedicated nodes, increasing security, scale, and service design flexibility.

Please note: due to the complex nature of changing the Pacemaker-managed services in an already running Overcloud, we recommend consulting Red Hat support services before attempting to do so.
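For illustration, a custom role hosting only the Pacemaker-managed database and messaging services might look like this in a copy of roles_data.yaml (the role name is hypothetical and the service list is abbreviated; a real role needs the full set of base services):

```yaml
- name: DatabaseMessaging              # hypothetical custom role name
  ServicesDefault:
    - OS::TripleO::Services::Pacemaker
    - OS::TripleO::Services::MySQL     # Galera
    - OS::TripleO::Services::RabbitMQ
    - OS::TripleO::Services::Ntp
    - OS::TripleO::Services::TripleoPackages
    # ...plus the remaining base services your deployment requires
```

Nodes matching this role then carry the database and messaging layer, leaving the Controller role to the API services.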

Improvements for NFV

Co-location of Ceph on Compute now supported in production (GA)

Co-locating Ceph with Nova Compute is done by placing the Ceph Object Storage Daemons (OSDs) directly on the compute nodes. Co-location lowers many cost and complexity barriers for workloads that have minimal and/or predictable storage I/O requirements by reducing the number of total nodes required for an OpenStack deployment. Hardware previously dedicated to storage-specific requirements can now be utilized by the compute footprint for increased scale. With version 11, co-located storage is also now fully supported for deployment by director as a composable role. Operators can more easily perform detailed and targeted deployments of co-located storage, including technologies such as SR-IOV, all from a custom role. The process is fully supported with comprehensive documentation and through a newly released reference architecture.

For Telcos, support for co-locating storage can be helpful for optimizing workloads and deployment architectures on a varied range of hardware and networking technologies within a single OpenStack deployment.
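As a sketch, a hyper-converged role can be composed by adding the Ceph OSD service to the Compute role’s service list (the role name here is hypothetical and the service list abbreviated):

```yaml
- name: ComputeHCI                     # hypothetical hyper-converged role
  ServicesDefault:
    - OS::TripleO::Services::NovaCompute
    - OS::TripleO::Services::NovaLibvirt
    - OS::TripleO::Services::ComputeNeutronOvsAgent
    - OS::TripleO::Services::CephOSD   # co-locates Ceph OSDs on the compute node
    # ...plus the remaining base services your deployment requires
```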

VLAN-Aware VMs now supported in production (GA)

A VLAN-aware VM, or more specifically, a "Neutron trunk port," is how an OpenStack instance can support VLAN-tagged frames across a single vNIC. This allows an operator to use fewer vNICs to access many separate networks, significantly reducing complexity by removing the need for one vNIC per network. Neutron does this by allowing subports off the original parent port, effectively turning the parent port into a virtual trunk. These subports can have their own segmentation IDs assigned directly to them, allowing an operator to assign each port its own VLAN.
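The parent/subport relationship can be sketched in a Heat template using the OS::Neutron::Trunk resource (note this resource landed in a later Heat release than Ocata; in Ocata the same objects can be created through the Neutron trunk API or CLI, and the network names below are hypothetical):

```yaml
heat_template_version: ocata

resources:
  parent_port:
    type: OS::Neutron::Port
    properties:
      network: tenant-net          # hypothetical untagged network

  subport_101:
    type: OS::Neutron::Port
    properties:
      network: vlan101-net         # hypothetical network carried as VLAN 101

  trunk:
    type: OS::Neutron::Trunk
    properties:
      port: {get_resource: parent_port}   # the parent becomes the virtual trunk
      sub_ports:
        - port: {get_resource: subport_101}
          segmentation_type: vlan
          segmentation_id: 101
```

The instance boots with only the parent port attached; traffic tagged with VLAN 101 inside the guest reaches vlan101-net via the subport.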

(Image courtesy of https://wiki.openstack.org/wiki/Neutron/TrunkPort; used under Creative Commons)

Version bumps for key virtual networking technologies

DPDK now version 16.11

DPDK 16.11 brings non-uniform memory access (NUMA) awareness to openvswitch-dpdk deployments. Virtual host devices comprise multiple different types of memory, which should all be allocated to the same physical NUMA node. 16.11 uses NUMA awareness to achieve this in the following ways:

  • 16.11 removes the requirement for a single device-tracking node which often creates performance issues by splitting memory allocations when VMs are not on that node
  • NUMA IDs can now be dynamically derived and that information used by DPDK to correctly place all memory types on the same node
  • DPDK now sends NUMA node information for a guest directly to Open vSwitch (OVS) allowing OVS to allocate memory more easily on the correct node
  • 16.11 removes the requirement for poll mode driver (PMD) threads to be on cores of the same NUMA node. PMDs can now be on the same node as a device’s memory allocations
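In a director-based ovs-dpdk deployment, the PMD and memory placement described above is typically driven by environment parameters along these lines (the core and memory values are illustrative and entirely hardware-dependent; check the shipped ovs-dpdk environment files for the exact parameter names in your version):

```yaml
parameter_defaults:
  # Cores for the poll mode driver (PMD) threads; with DPDK 16.11 these no
  # longer need to sit on a single NUMA node.
  NeutronDpdkCoreList: "'2,4,22,24'"
  # Hugepage memory (MB) pre-allocated per NUMA socket for OVS-DPDK.
  NeutronDpdkSocketMemory: "'1024,1024'"
  # Memory channels per socket, used for DPDK memory layout.
  NeutronDpdkMemoryChannels: "4"
```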

Open vSwitch now version 2.6

OVS 2.6 lays the groundwork for the future performance and virtual networking requirements of NFV deployments, specifically in the ovs-dpdk deployment space. Immediate benefits come from up-to-date features and initial, basic OVN support. See the upstream release notes for full details.

CloudForms Integration

Red Hat OpenStack Platform 11 remains tightly integrated with CloudForms. It has been fully tested and supports features such as:

  • Tenant Mapping: finds and lists all OpenStack tenants as CloudForms tenants and keeps them in sync. Creating, updating, and deleting CloudForms tenants is reflected in OpenStack and vice versa
  • Multisite support where one OpenStack region is represented as one cloud provider in CloudForms
  • Multiple domains support where one domain is represented as one cloud provider in CloudForms
  • Cinder Volume Snapshot Management can be done at the volume or instance level. A snapshot is created as a whole new volume, and you can instantiate a new instance from it, all from CloudForms

OpenStack Lifecycle: Our First "Sequential" Release

Long Life review ...

With OSP 10 we introduced the concept of the Long Life release. Long Life releases allow customers who are happy with their current release, and who have no pressing need for specific feature updates, to remain supported for up to five years. We have designated every third release as Long Life. For instance, versions 10, 13, and 16 are Long Life, while versions 11, 12, 14, and 15 are sequential. Long Life releases allow for upgrades to subsequent Long Life releases (for example, 10 to 13 without stepping through 11 and 12). Long Life releases generally follow an 18-month cadence (three upstream cycles) and do require additional hardware for the upgrade process. Also, while procedures and tooling will be provided for this type of upgrade, it is important to note that some outages will occur.

Now, Introducing ... Sequential!

Red Hat OpenStack Platform 11 is the first "sequential" release. It is supported for one year and is released immediately into a “Production Phase 2” release classification. All upgrades for this type of release must be done sequentially (i.e., from N to N+1). Sequential releases feature tighter integration with upstream projects and allow customers to quickly test new features and deploy using their own knowledge of continuous integration and agile principles. Upgrades are generally done without major workload interruption, and customers choosing this cadence typically have multiple datacenters and/or highly demanding performance requirements. For more details see Red Hat OpenStack Platform Lifecycle (detailed FAQ as pdf) and Red Hat OpenStack Platform Director Life Cycle.

Additional notable new features of version 11

A new Ironic inspector plugin can process Link Layer Discovery Protocol (LLDP) packets received from network switches during deployment. This can significantly help deployers understand the existing network topology and reduces trial and error by helping to validate the actual physical network setup presented to a deployment. All data is collected automatically and stored in an accessible format in the Undercloud’s Swift installation.

There is now full support for deploying collectd agents to the Overcloud from director using composable roles. Performance monitoring is now easier, as collectd joins the other fully supported OpsTools services for availability monitoring (Sensu) and log management (Fluentd) present since version 10.

And please remember, these are agents, not the full server-side implementations. Check out how to implement the server components easily with Ansible by going to the CentOS OpsTools Special Interest Group for all the details.
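As a sketch, enabling the collectd composable service for an Overcloud deployment might look like the following environment file (the server hostname is hypothetical, and you should verify the parameter names against the collectd environment file shipped with your tripleo-heat-templates):

```yaml
resource_registry:
  # Map the Collectd service onto its service template.
  OS::TripleO::Services::Collectd: ../puppet/services/metrics/collectd.yaml

parameter_defaults:
  CollectdServer: collectd0.example.com   # hypothetical metrics server
  CollectdDefaultPlugins:
    - cpu
    - memory
    - disk
```

Passing this file to the deploy command with -e adds the agent to every role that includes the Collectd service.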

Additional features landing as Tech Preview

Tech Preview Features should not be implemented in production. For full details please see: https://access.redhat.com/support/offerings/techpreview/

Octavia

Octavia brings a robust and mature LBaaS v2 API driver to OpenStack and will eventually replace the legacy HAProxy namespace driver currently found in Newton. It will become not only a load balancing driver but also the load balancing API hosting all the other drivers. Octavia is now a top-level project outside of Neutron; for more details see this excellent update talk from the recent OpenStack Summit in Boston.

Octavia implements load balancing via a group of virtual machines (or containers or bare metal servers), known collectively as amphorae, managed by the Octavia controller. The controller manages, among other things, the images used for the balancing engine. In Ocata, amphora image support was introduced for Red Hat Enterprise Linux, CentOS, and Fedora. Amphorae utilize HAProxy to implement load balancing. For full details of the design, consult the Component Design document.

To allow Red Hat OpenStack Platform users to try out this new implementation in a non-production environment, operators can deploy a Technology Preview of Octavia with director starting with version 11.

Please note: Octavia’s director-based implementation is currently scheduled for a z-stream release for Red Hat OpenStack Platform version 11. This means that while it won’t be available on the day of the release, it will be added shortly after. However, please track the following bugzilla, as things may change at the last moment and affect this timing.

OpenDaylight

Red Hat OpenStack Platform 11 builds on the OpenDaylight support introduced in version 10 by adding director-based deployment of the OpenDaylight Boron SR2 release using a composable role.

Ceph block storage replication

The Cinder RADOS Block Device (RBD) driver was updated to support RBD mirroring (promote/demote of replica locations), allowing customers to implement essential disaster recovery concepts by more easily managing and replicating their data via the Cinder API.

Cinder Service HA 

Until now, the cinder-volume service could run only in an Active/Passive HA configuration. In version 11, the Cinder service received numerous internal fixes around locks, job distribution, cleanup, and data corruption protection to allow for an Active/Active implementation. A highly available Cinder implementation can help meet uptime reliability and throughput requirements.

To sum it all up

Red Hat OpenStack Platform 11 brings important enhancements to all facets of cloud deployment, operations, and management. With solid and reliable upgrade logic enterprises will find moving to the next version of OpenStack is easier and smoother with a lower chance for disruption. The promotion of important features to full production support (GA) keeps installs current and supported while the introduction of new Technology Preview features gives an accessible glimpse into the immediate future of the Red Hat OpenStack Platform.

More info

For more information about Red Hat OpenStack Platform please visit the technology overview page, product documentation, release notes, and release announcement.

To see what others are doing with Red Hat OpenStack Platform, check out these use cases.

And don’t forget you can evaluate Red Hat OpenStack Platform for free for 60 days to see all these features in action.