OpenStack Kilo, the 11th release of the open source project, was officially released in April, making now a good time to review some of the changes we saw in the OpenStack Networking (Neutron) community during this cycle, as well as some of the key new networking features introduced in the project.

Scaling the Neutron development community

The Kilo cycle brought two major efforts meant to expand and scale the Neutron development community: core plugin decomposition and the advanced services split. These changes should not directly impact OpenStack users, but they are expected to reduce the code footprint, improve feature velocity, and ultimately accelerate innovation. Let’s take a look at each individually:

Neutron core plugin decomposition

Neutron, by design, has a pluggable architecture which allows custom backend implementations of the Networking API. The plugin is a core piece of the deployment and acts as the “glue” between the logical API and the actual implementation. As the project evolved, more and more plugins were introduced, coming from open-source projects and communities (such as Open vSwitch and OpenDaylight), as well as from various vendors in the networking industry (like Cisco, Nuage, Midokura and others). At the beginning of the Kilo cycle, Neutron had dozens of plugins and drivers spanning core plugins, ML2 mechanism drivers, L3 service plugins, and L4-L7 service plugins for FWaaS, LBaaS and VPNaaS, the majority of them included directly within the Neutron project repository. The amount of code to review across those drivers and plugins grew to the point where it no longer scaled. The expectation that core Neutron reviewers review code which they had no knowledge of, or could not test due to lack of proper hardware or software setup, was not realistic. This also caused some frustration among the vendors themselves, who sometimes failed to get their plugin code merged on time.

The first effort to improve the situation was to decompose the core Neutron plugins and ML2 drivers out of the Neutron repository. The idea is that plugins and drivers leave only a small “shim” (or proxy) in the Neutron tree and move all of their backend logic out to a separate repository, with StackForge being a natural place for that. The benefit is clear: Neutron reviewers can now focus on reviewing core Neutron code, while vendors and plugin maintainers can iterate at their own pace. While the specification encouraged vendors to immediately start the decomposition of their plugins, it did not require that all plugins complete decomposition in the Kilo timeframe, mainly to allow vendors enough time to complete the process.

More information on the process is documented here, with this section dedicated to tracking the progress of the various plugins.

Advanced services split

While the first effort focused solely on core Neutron plugins and ML2 drivers, a parallel effort was put in place to address similar concerns with the L4-L7 advanced services (FWaaS, LBaaS, and VPNaaS). Like the core plugins, advanced services previously stored their code in the main Neutron repository, resulting in a lack of focus and reviews from Neutron core reviewers. Starting with Kilo, these services are split into their own repositories; Neutron now spans four different repositories: one for basic L2/L3 networking, and one each for FWaaS, LBaaS, and VPNaaS. As the number of service plugins is still relatively low, vendor and plugin code will remain in each of the service repositories at this point.

It is important to note that this change should not affect OpenStack users. Even with the services now split, there is no change to the API or CLI interfaces, and they all still use the same Neutron client as before. That said, we do see this split laying the foundation for deeper changes in the future, with each of the services having the potential to become independent from Neutron and offer its own REST endpoint, configuration file, and CLI/API client. This would enable teams focused exclusively on one or more advanced services to make a bigger impact.

ML2/Open vSwitch port-security

Security-groups are one of the most popular Neutron features, allowing tenants to specify the type of traffic and direction (ingress/egress) that is allowed to pass through a Neutron port, effectively creating a firewall close to the virtual machines (VMs).

As a security measure, Neutron's security group implementation always applies “bonus” default rules automatically to block IP address spoofing attacks, preventing a VM from sending or receiving traffic with a MAC or IP address which does not belong to its Neutron port. While most users find security-groups and the default anti-spoofing rules helpful and necessary to protect their VMs, some asked for the option to turn them off for specific ports. This is mainly required in cases where network functions are running within the VMs, a common use case for network functions virtualization (NFV).

Think for example of a router application deployed within an OpenStack VM; it receives packets that are not necessarily addressed to it and transmits (routes) packets that are not necessarily generated from one of its ports. With security-groups applied, it will not be able to perform these tasks.

Let’s examine the following topology as an example:

[Figure 1: example topology — two hosts connected via two router VMs]

Host 1, with the IPv4 address 192.168.0.1, wants to reach Host 2, which is configured with 172.16.0.1. The two hosts are connected via two VMs running a router application; the router VMs are configured to route between the networks and act as the default gateways for the hosts. The MAC addresses of the relevant ports are shown as well.

Now, let’s examine the traffic flow when Host 1 tries to send traffic to Host 2:

  1. Host 1 generates an IPv4 packet with a source of 192.168.0.1 and a destination of 172.16.0.1. As the two hosts are placed on different subnets, R1 responds to Host 1’s ARP request with its local MAC address, and 3B-2D-B9-9B-34-40 is used as the destination MAC on the L2 frame.
  2. R1 receives the packet. Note that the destination IP of the packet is 172.16.0.1, which is not assigned to R1. With security-groups enabled on R1’s port and the default anti-spoofing rules applied, the packet is dropped at this point, and R1 is not able to route the traffic further.

Prior to Kilo, you could only enable or disable security-groups for the entire cloud. Starting with the Kilo release, it is now possible to enable or disable the security-group feature per port using a new port_security_enabled attribute, so that the tenant admin can decide exactly if and where a firewall is needed in the topology. This new attribute is supported with the Open vSwitch agent (ovs-agent) in conjunction with the IptablesFirewallDriver.
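As a minimal sketch of how this might look with the neutron CLI (the port ID is a placeholder, and the commands assume the port-security extension is loaded; note that any security groups must first be removed from the port before port security can be disabled):

    $ # Clear the security groups from the router VM's port first
    $ neutron port-update $PORT_ID --no-security-groups
    $ # Then disable port security (and its anti-spoofing rules) on that port
    $ neutron port-update $PORT_ID --port-security-enabled=False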

Going back to the previous topology, it is now possible to turn off security-groups on the VMs being used as routers, while keeping them active on the Host VM ports, so that routing can take place properly:

[Figure 2: the same topology, with port security disabled on the router VM ports]

Some additional information on this feature, along with a configuration example, can be found in this blog post by Red Hat’s Terry Wilson.

IPv6 enhancements

IPv6 has been a key area of focus in Neutron lately, with major features introduced during the Juno release to allow address assignment for tenant networks using Stateless Address Autoconfiguration (SLAAC) and DHCPv6, as well as support for provider networks with Router Advertisement (RA) messages generated by an external router. While the IPv6 code base continues to mature, the Kilo release brings several other enhancements, including:

  • The ability to assign multiple IPv6 prefixes for a network

    • With IPv6, it is possible to assign several IP prefixes to a single interface. This is in fact a common configuration, with all interfaces assigned a link-local address (LLA) by default to handle traffic on the local link, and one or more global unicast addresses (GUAs) for end-to-end connectivity. Starting with the Kilo release, users can attach several IPv6 subnets to a network. When the subnet type is either SLAAC or DHCPv6 stateless, one IPv6 address from each subnet will be assigned to the Neutron port (see the first example after this list).
  • Better IPv6 router support

    • As of Kilo, there is no network address translation (NAT) or floating IP model for IPv6 in OpenStack. The assumption is that VMs are assigned globally routed addresses and can communicate directly using pure L3 routing. The neutron-l3-agent is the component responsible for routing within Neutron, through the creation and maintenance of virtual routers. When it comes to IPv6 support in the virtual routers, two main functions are required:
      1. Inter-subnet routing: the ability to route packets between different IPv6 prefixes of the same tenant. Since the traffic is routed within the cloud and does not leave for any external system, this is usually referred to as “east-west” routing. This has been supported since Juno, with no major enhancements introduced in Kilo.
      2. External routing: the ability to route packets between an IPv6 tenant subnet and an IPv6 external subnet. Since the traffic needs to leave the cloud to reach the external network, this is usually referred to as “north-south” traffic. As there is no IPv6 NAT support, the virtual router simply needs to route the traffic between the internal subnet and the external one. While this capability has been supported since the Juno release, Kilo introduces major improvements to the way the operator provisions and creates this external network to begin with. It is no longer required to create a Neutron subnet for the external network: the virtual router can automatically learn its default gateway information via SLAAC (if RAs are enabled on the upstream router), or the default route can be set manually by the operator using a new option introduced in the l3-agent configuration file (‘ipv6_gateway’); see the second example after this list.
  • Extra DHCP options

    • With Neutron, users can specify extra DHCP options for a port. This is mainly used to assign additional information, such as Domain Name System (DNS) servers or a maximum transmission unit (MTU) size, to a given port. Originally, a DHCP option could not be scoped to a specific IP version, which caused issues in dual-stack designs, where a VM is assigned both an IPv4 and an IPv6 address on the same port.
    • Starting with Kilo, it is now possible to specify extra DHCP options separately for DHCPv4 and DHCPv6. A new attribute (‘ip_version’) in the Neutron port create/update API specifies the IP version (4 or 6) to which a given DHCP option applies (see the third example after this list).
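To illustrate the multiple-prefix support, here is a minimal sketch using the neutron CLI; the network name and prefixes are illustrative:

    $ # Attach two SLAAC IPv6 subnets to the same network
    $ neutron subnet-create net1 2001:db8:1::/64 --ip-version 6 \
        --ipv6-ra-mode slaac --ipv6-address-mode slaac
    $ neutron subnet-create net1 2001:db8:2::/64 --ip-version 6 \
        --ipv6-ra-mode slaac --ipv6-address-mode slaac

A port created on net1 should then receive one IPv6 address from each subnet.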
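For the external routing case, a hedged example of setting the default route manually in l3_agent.ini (the address is a placeholder for the link-local address of the upstream router on the external network):

    [DEFAULT]
    # LLA of the upstream router on the external network (illustrative value)
    ipv6_gateway = fe80::1

If this option is left unset and the upstream router sends RAs, the virtual router learns its default gateway via SLAAC instead.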
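And a sketch of the per-IP-version DHCP options (the port ID and option values are placeholders):

    $ # Advertise a different DNS server to each address family on the port
    $ neutron port-update $PORT_ID \
        --extra-dhcp-opt opt_name=dns-server,opt_value=192.0.2.10,ip_version=4 \
        --extra-dhcp-opt opt_name=dns-server,opt_value=2001:db8::10,ip_version=6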

LBaaS v2 API

Load-Balancing-as-a-Service (LBaaS) is one of the advanced services of Neutron. It allows tenants to create load-balancers on demand, backed by open-source or proprietary service plugins that offer different load balancing technologies. The open source solution available with Red Hat Enterprise Linux OpenStack Platform is based on the HAProxy service plugin.

Version 1.0 of the LBaaS API included basic load balancing capabilities and established a simple, straightforward flow for setting up a load balancer:

  1. Create a pool
  2. Create one or more members in the pool
  3. Create health monitors
  4. Create a virtual IP (VIP) that is associated with the pool

This was useful for getting initial implementations and deployments of LBaaS going, but it was never intended to be an enterprise-class alternative to a full-blown load-balancer. LBaaS version 2.0 adds capabilities that make for a more robust load-balancing solution, including support for SSL/TLS termination. Accomplishing this required a redesign of the LBaaS architecture, along with the HAProxy reference plugin; a sketch of the new flow follows.
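As a rough sketch of the equivalent v2 flow with the neutron CLI (names, subnet, and addresses are illustrative, and the commands assume the neutron-lbaas v2 plugin is enabled):

    $ # A load balancer object now anchors the configuration
    $ neutron lbaas-loadbalancer-create --name lb1 private-subnet
    $ # A listener binds a protocol and port to the load balancer
    $ neutron lbaas-listener-create --name listener1 --loadbalancer lb1 \
        --protocol HTTP --protocol-port 80
    $ # Pools and members hang off the listener
    $ neutron lbaas-pool-create --name pool1 --listener listener1 \
        --protocol HTTP --lb-algorithm ROUND_ROBIN
    $ neutron lbaas-member-create --subnet private-subnet \
        --address 192.0.2.10 --protocol-port 80 pool1
    $ # Health monitors are attached directly to a pool
    $ neutron lbaas-healthmonitor-create --type HTTP --delay 5 \
        --timeout 3 --max-retries 3 --pool pool1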

Distributed Virtual Routing (DVR) VLAN support

DVR, first introduced in the Juno release, allows Neutron routers to be deployed across the Nova Compute nodes, so that each Compute node handles the routing for its locally hosted VMs. This is expected to result in better performance and scalability of the virtual routers, and is seen as an important milestone towards a more efficient L3 traffic flow in OpenStack.

As a reminder, the default OpenStack architecture with Neutron involves a dedicated cluster of Network nodes that handles most of the network services in the cloud, including DHCP, L3 routing, and NAT. That means that traffic from the Compute node must reach the Network nodes to get routed properly. With DVR, the Compute node itself can handle inter-subnet (east-west) routing as well as NAT for floating IPs. DVR still relies on dedicated Network nodes for the default SNAT service, which provides basic outgoing connectivity for VMs.

Prior to Kilo, distributed routers supported only overlay tunnel networks (i.e., GRE or VXLAN) for tenant separation. This hindered adoption of the feature, as many clouds opted to use 802.1Q VLAN tenant networks. With Kilo, this configuration is now possible, and distributed routers may serve VLAN networks as well as tunnel networks; a configuration sketch follows.
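A hedged sketch of the relevant configuration pieces (values are illustrative, and bridge/interface details depend on the deployment):

    # neutron.conf on the controller: create new routers as distributed
    [DEFAULT]
    router_distributed = True

    # ml2_conf.ini: VLAN tenant networks are now valid with DVR
    [ml2]
    tenant_network_types = vlan

    # l3_agent.ini: 'dvr' on Compute nodes, 'dvr_snat' on Network nodes
    [DEFAULT]
    agent_mode = dvr

    # Open vSwitch agent configuration
    [agent]
    enable_distributed_routing = True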

To get more information about DVR, I strongly recommend reading this great three-part post from Red Hat’s Assaf Muller, covering: an overview and east/west routing, SNAT, and floating IPs.

View the state of Highly Available routers

One of the major features introduced in the Juno release was the L3 High Availability (L3 HA) solution, which allows an active/active setup of the neutron-l3-agent across different Network nodes. The solution, based on keepalived, utilizes the Virtual Router Redundancy Protocol (VRRP) internally to form groups of highly available virtual routers. By design, each group has one active router (which forwards traffic) and one or more standby routers (which wait to take over in case the active one fails). The scheduling of master/backup routers is done randomly across the different Network nodes, so that the load (i.e., forwarding router instances) is spread among all nodes.

One of the limitations of the Juno-based solution was that Neutron had no way to report the HA router state, which made troubleshooting and maintenance harder. With Kilo, operators can now run the neutron l3-agent-list-hosting-router <router_id> command and see where the active instance is currently hosted.
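For example (the agent IDs and hostnames below are illustrative; the new information is the ha_state column):

    $ neutron l3-agent-list-hosting-router <router_id>
    +--------------+------------+----------------+-------+----------+
    | id           | host       | admin_state_up | alive | ha_state |
    +--------------+------------+----------------+-------+----------+
    | <agent-id-1> | net-node-1 | True           | :-)   | active   |
    | <agent-id-2> | net-node-2 | True           | :-)   | standby  |
    +--------------+------------+----------------+-------+----------+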

Ability to choose a specific floating IP

Floating IPs are public IPv4 addresses that can be dynamically added to a VM instance on the fly, so that the VM can be reached from external systems, usually the Internet. Originally, when assigning a floating IP to a VM, the IP was randomly picked from a pool, and there was no guarantee that a VM would consistently receive the same IP address. Starting with Kilo, the user can choose the specific floating IP address to assign to a given VM by utilizing a new ‘floating_ip_address’ API attribute.
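A minimal sketch with the neutron CLI (the external network name, address, and IDs are illustrative):

    $ # Request a specific address from the external network's pool
    $ neutron floatingip-create --floating-ip-address 203.0.113.50 public
    $ # Attach it to the VM's Neutron port
    $ neutron floatingip-associate $FLOATINGIP_ID $PORT_ID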

MTU advertisement functionality

This new feature allows the desired MTU to be specified for a network and, when set, advertised to guest operating systems. This capability helps avoid the MTU mismatches that lead to undesirable results such as connectivity issues, packet drops, and degraded network performance.
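A hedged sketch of the relevant options as introduced in Kilo (values are illustrative and depend on the underlying fabric):

    # neutron.conf: advertise the network MTU to guests (via DHCP options or RAs)
    [DEFAULT]
    advertise_mtu = True

    # ml2_conf.ini: the maximum transmission unit of the underlying network
    [ml2]
    path_mtu = 1500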

Improved performance and stability

The OpenStack Networking community is actively working to make Neutron a more stable and mature codebase. Among the different performance and stability enhancements introduced in Kilo, I wanted to highlight two: the switch of the ML2/Open vSwitch plugin to communicate with OVSDB directly instead of using Open vSwitch ovs-vsctl CLI commands, and a comprehensive refactoring of the l3-agent code base.
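The OVSDB change sits behind a configuration option; a hedged sketch of enabling the native interface in the Open vSwitch agent configuration:

    [OVS]
    # 'vsctl' (the previous behavior) or 'native' for direct OVSDB access
    ovsdb_interface = native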

While these two changes do not introduce any new feature functionality to users per se, they represent the continuous journey of improving Neutron’s code base, especially the core L2 and L3 components, which are critical to all workloads.

Looking ahead to Liberty

Liberty, the next release of OpenStack, is planned for October 15th, 2015. We are already busy planning and finalizing the sessions for the Design Summit in Vancouver, where new feature and enhancement proposals are scheduled to be discussed. You can view the approved Neutron specifications for Liberty to track which proposals have been accepted into the project and are expected to land in Liberty.

Get Started with OpenStack Neutron

If you want to try out OpenStack, or to check out some of the above enhancements for yourself, you are welcome to visit our RDO site. We have documentation to help get you started, forums where you can connect with other users, and community-supported packages of the most up-to-date OpenStack releases available for download.

If you are looking for enterprise-level support and our partner certification program, Red Hat also offers Red Hat Enterprise Linux OpenStack Platform.

 

 


The Kilo logo is a trademark/service mark of the OpenStack Foundation. Red Hat is not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
Image source: https://www.openstack.org/software/kilo/