A Closer Look at RHEL OpenStack Platform 6

Last week we announced the release of Red Hat Enterprise Linux OpenStack Platform 6, the latest version of our cloud solution. Built on Red Hat Enterprise Linux 7, this release provides a foundation for building production-ready, OpenStack-powered clouds for advanced cloud users. Let's take a deeper dive into some of the new features on offer!

IPv6 Networking Support

IPv6 is a critical part of the promise of the cloud. If you want to connect everything to the network, you better plan for massive scale and have enough addresses to use. IPv6 is also increasingly important in the network functions virtualization (NFV) and telecommunication service provider space.

This release introduces support for IPv6 address assignment for tenant instances, including those connected to provider networks. While IPv4 address assignment is fairly straightforward, IPv6 offers more flexibility and options to choose from: both stateful and stateless DHCPv6 are supported, as well as Stateless Address Autoconfiguration (SLAAC).
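As a rough sketch of how the address assignment mode is selected, an IPv6 subnet can be created with the Juno-era neutron CLI by combining the router advertisement mode and the address mode. The network name and prefix below are example values, not from this release's documentation:

```shell
# Create an IPv6 subnet whose instances get addresses via SLAAC.
# "tenant-net" and the 2001:db8:1234::/64 prefix are example values.
neutron subnet-create tenant-net 2001:db8:1234::/64 \
    --name tenant-v6-subnet \
    --ip-version 6 \
    --ipv6-ra-mode slaac \
    --ipv6-address-mode slaac

# For stateful DHCPv6 (addresses and options served by DHCP), use:
#   --ipv6-ra-mode dhcpv6-stateful --ipv6-address-mode dhcpv6-stateful
# For stateless DHCPv6 (SLAAC addresses, extra options via DHCP), use:
#   --ipv6-ra-mode dhcpv6-stateless --ipv6-address-mode dhcpv6-stateless
```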

High Availability for Neutron Routers

The neutron-l3-agent is the Neutron component responsible for layer 3 (L3) forwarding and network address translation (NAT) for tenant networks. This is a key piece of the project that hosts the virtual routers created by tenants and allows instances to have connectivity to and from other networks, including networks that are placed outside of the OpenStack cloud, such as the Internet.

Historically, the neutron-l3-agent has been placed on one or more dedicated nodes, usually bare-metal machines referred to as “Network Nodes”. Until now, you could use multiple Network Nodes to achieve load sharing by scheduling different virtual routers on different nodes, but not high availability or redundancy between the nodes. The challenge with this model was that all routing for the OpenStack cloud happened at a centralized point. This introduced two main concerns:

  1. Each Network Node is a single point of failure (SPOF)
  2. Whenever routing is needed, packets from the source instance must go through a router on a Network Node before being sent to the destination. This centralized routing creates a resource bottleneck and an unoptimized traffic flow

This release addresses these issues by adding high availability to the virtual routers scheduled on the Network Nodes, so that when one router fails, another can take over automatically. This is implemented internally using the well-known Virtual Router Redundancy Protocol (VRRP). Highly available Network Nodes can handle routing and centralized source NAT (SNAT) to give instances basic outgoing connectivity, as well as advanced services such as virtual private networks or firewalls, which by design must see both directions of the traffic flow in order to operate properly.
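A minimal configuration sketch of how L3 HA is typically enabled in upstream Juno Neutron is shown below; the agent counts are example values, and your deployment tooling may set these for you:

```ini
# /etc/neutron/neutron.conf (sketch; values are examples)
[DEFAULT]
# Create new tenant routers as highly available by default
l3_ha = True
# Bounds on how many L3 agents each HA router is scheduled on;
# VRRP elects one master, the others stand by
min_l3_agents_per_router = 2
max_l3_agents_per_router = 3
```

An individual router can also be requested as highly available at creation time, e.g. `neutron router-create --ha True router1`.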

Single root I/O virtualization (SR-IOV) networking

The ability to pass physical devices through to virtual machine instances, allowing for premium cloud flavors that provide physical hardware such as dedicated network interfaces or GPUs, was originally introduced in Red Hat Enterprise Linux OpenStack Platform 4. This release adds an SR-IOV mechanism driver (sriovnicswitch) to OpenStack networking to provide enhanced support for passing through networking devices that support SR-IOV.

This driver is available starting with Red Hat Enterprise Linux OpenStack Platform 6 and requires an SR-IOV-capable NIC on the Compute node. It allows SR-IOV Virtual Functions (VFs) to be assigned directly to VM instances, so that the VM communicates directly with the NIC controller, effectively bypassing the vSwitch. The Nova scheduler has also been enhanced to consider not only device availability but also the related external network connectivity when placing instances that include specific networking requirements in their boot request.
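To give a sense of how the sriovnicswitch mechanism driver slots into ML2, here is a configuration sketch based on the upstream Juno documentation; the PCI vendor/device ID is an example for an Intel 82599 VF, and file paths may differ in your deployment:

```ini
# /etc/neutron/plugin.ini — register the SR-IOV mechanism driver
# alongside the existing Open vSwitch driver
[ml2]
mechanism_drivers = openvswitch,sriovnicswitch

# /etc/neutron/plugins/ml2/ml2_conf_sriov.ini — sketch; the
# vendor:product PCI ID below is an example (Intel 82599 VF)
[ml2_sriov]
supported_pci_vendor_devs = 8086:10ed
```

A Neutron port is then created with the `direct` VNIC type (e.g. `neutron port-create tenant-net --binding:vnic_type direct`) and passed to `nova boot`, so the scheduler can match the request to a Compute node with a free VF on the right physical network.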

Support for Multiple Identity Backends

OpenStack Identity (Keystone) is usually integrated with an existing identity management system, such as an LDAP server, when used in production environments. The default SQL identity backend is not an ideal choice for identity management: it only provides basic password authentication, it lacks password policy support, and its user management capabilities are fairly limited. Configuring Keystone to use an existing identity store has its challenges, but some of the changes in RHEL OpenStack Platform 6 make this easier.

RHEL OpenStack Platform 5 and earlier supported configuring Keystone with only a single identity backend. This meant that all service accounts and all OpenStack users had to exist in the same identity management system. In real-world production scenarios, it is commonly required to use the identity store in a read-only configuration, with no schema or account changes allowed, so that accounts are managed using native tools. One of the resulting challenges was that the OpenStack service accounts had to be stored on the same LDAP server as the rest of the user accounts.

In RHEL OpenStack Platform 6, it is possible to configure Keystone to use multiple identity backends. This allows Keystone to use an LDAP server to store normal user accounts while using the SQL backend to store OpenStack service accounts. In addition, multiple LDAP servers can be used by a single Keystone instance through Keystone Domains, which previously worked only with the SQL identity backend.
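As a sketch of how per-domain backends are wired up in upstream Juno Keystone, domain-specific drivers are enabled globally and each domain gets its own configuration file; the domain name, LDAP URL, and DNs below are example values:

```ini
# /etc/keystone/keystone.conf — enable per-domain identity backends;
# domains without their own file fall back to the default (SQL) driver,
# which can then hold the OpenStack service accounts
[identity]
domain_specific_drivers_enabled = True
domain_config_dir = /etc/keystone/domains

# /etc/keystone/domains/keystone.example_domain.conf — sketch;
# "example_domain" and all LDAP settings are example values
[identity]
driver = keystone.identity.backends.ldap.Identity

[ldap]
url = ldap://ldap.example.com
user_tree_dn = ou=Users,dc=example,dc=com
user_objectclass = inetOrgPerson
# Read-only integration: account changes are made with native tools
user_allow_create = False
user_allow_update = False
user_allow_delete = False
```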

Tighter Ceph Integration

The availability of Red Hat Enterprise Linux OpenStack Platform 6, based on OpenStack Juno, marks a particularly important milestone for Red Hat: the delivery of Ceph Enterprise 1.2 as a complete storage solution for the virtual machine storage requirements of Nova, Cinder, and Glance.

This release introduces advanced support for ephemeral and persistent storage, featuring thin provisioning, snapshots, cloning, and copy-on-write.

  • With RHEL OpenStack Platform 6, VM storage functions can now be delivered transparently to the user on Ceph, and customers can now run diskless Compute nodes.
  • The new Ceph-backed ephemeral volumes keep the data situated within the Ceph cluster, allowing the VM to boot more quickly because no data moves across the network. This also means that snapshots of the ephemeral volume can be performed on the Ceph cluster instantaneously and then placed into the Glance library, again without data migration across the network.

The Ceph RBD drivers are now shipped by default with RHEL OpenStack Platform 6 and configured through a single, integrated installer that simplifies and speeds deployment of Ceph as part of the OpenStack deployment.
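For context, a minimal sketch of what Ceph-backed ephemeral storage looks like in the Nova libvirt driver is shown below; the pool name, Ceph user, and secret UUID are example values that the installer would normally generate for you:

```ini
# /etc/nova/nova.conf — sketch; pool, user, and UUID are example values
[libvirt]
# Store ephemeral disks as RBD images in Ceph rather than on local
# Compute node disk, enabling diskless Compute nodes and fast,
# copy-on-write cloning from Glance images already in the cluster
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = nova
rbd_secret_uuid = 457eb676-33da-42ec-9a8c-9293d545c337
```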

Interested in trying the latest OpenStack-based cloud platform from the world’s leading provider of open source solutions? Download a free evaluation at: http://www.redhat.com/en/technologies/linux-platforms/openstack-platform.