Neutron, historically known as Quantum, is the OpenStack project focused on delivering networking as a service. As the Juno development cycle ramps up, now is a good time to review some of the key changes taking shape in Neutron during this exciting cycle and to look at what is coming in the next upstream major release, which is set to debut in October.

Neutron or Nova Network?

The original OpenStack Compute network implementation, also known as Nova Network, assumed a basic model in which all network isolation is performed through Linux VLANs and iptables. These are typically sufficient for small and simple networks, but larger customers are likely to have more sophisticated network requirements. Neutron introduces the concept of a plug-in, which is a back-end implementation of the OpenStack Networking API. A plug-in can use a variety of technologies to implement the logical API requests and offer a rich set of network topologies, including network overlays with protocols like GRE or VXLAN, and network services such as load balancing, virtual private networks or firewalls that plug into OpenStack tenant networks. Neutron also enables third parties to write plug-ins that introduce advanced network capabilities, such as the ability to leverage capabilities from the physical data center network fabric, or to use software-defined networking (SDN) approaches with protocols like OpenFlow. One of the main Juno efforts is a plan to enable easier Nova Network to Neutron migration for users who would like to upgrade the networking model of their OpenStack cloud.
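
To make the API model concrete, here is a minimal sketch using the python-neutronclient library; the credentials, endpoint and CIDR below are placeholder values for illustration. Whichever plug-in the cloud is configured with decides how the logical network is actually realized on the back end.

    # A minimal sketch of driving the OpenStack Networking API with
    # python-neutronclient. Credentials and endpoint are placeholders.
    from neutronclient.v2_0 import client

    neutron = client.Client(username='demo', password='secret',
                            tenant_name='demo',
                            auth_url='http://controller:5000/v2.0')

    # Create a logical tenant network; the configured plug-in decides
    # how it is realized (VLAN, GRE, VXLAN, SDN controller, etc.).
    network = neutron.create_network({'network': {'name': 'demo-net'}})
    net_id = network['network']['id']

    # Attach an IPv4 subnet to the new network.
    neutron.create_subnet({'subnet': {'network_id': net_id,
                                      'ip_version': 4,
                                      'cidr': '192.0.2.0/24'}})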

Performance Enhancements and Stability

The OpenStack Networking community is actively working on several enhancements to make Neutron a more stable and mature codebase. Among them, recent changes to the security-group implementation should result in significantly better performance and scalability of this popular feature. As a reminder, security groups allow administrators and tenants to specify the type of traffic and direction (ingress/egress) that is allowed to pass through a Neutron port, effectively creating an instance-level firewall filter. You can read this great post by Miguel Angel Ajo, a Red Hat employee who led this effort in the Neutron community, to learn more about the changes.
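
As an illustration, the following sketch creates a security group and an ingress rule that allows SSH from anywhere; as in the earlier example, the credentials are placeholders:

    from neutronclient.v2_0 import client

    # Placeholder credentials; adjust for your environment.
    neutron = client.Client(username='demo', password='secret',
                            tenant_name='demo',
                            auth_url='http://controller:5000/v2.0')

    # Create a security group and allow inbound SSH from anywhere.
    sg = neutron.create_security_group(
        {'security_group': {'name': 'ssh-access'}})

    neutron.create_security_group_rule({'security_group_rule': {
        'security_group_id': sg['security_group']['id'],
        'direction': 'ingress',            # ingress or egress
        'ethertype': 'IPv4',
        'protocol': 'tcp',
        'port_range_min': 22,
        'port_range_max': 22,
        'remote_ip_prefix': '0.0.0.0/0',   # allow from any source
    }})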

In addition, there are continuous efforts to improve the upstream testing framework, to create a better separation between unit tests and functional tests, and to establish a better testing strategy and coverage for API changes.

Another proposal being added to the Neutron project is an incubator in which new features can be developed. The incubator enables features to mature prior to adoption into the integrated release; the plan is that features stay in the incubator for at most two release cycles before they potentially graduate to full features within the project.

L3 High Availability

The neutron-l3-agent is the Neutron component responsible for layer 3 (L3) forwarding and network address translation (NAT) for tenant networks. This is a key piece of the project that hosts the virtual routers created by tenants and allows instances to have connectivity to and from other networks, including networks that are placed outside of the OpenStack cloud, such as the Internet.
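
As a quick example of what this component serves, the sketch below creates a virtual router, attaches a tenant subnet to it, and sets an external network as its gateway for NAT; the client setup and the UUIDs are placeholders:

    from neutronclient.v2_0 import client

    neutron = client.Client(username='demo', password='secret',
                            tenant_name='demo',
                            auth_url='http://controller:5000/v2.0')

    subnet_id = 'TENANT_SUBNET_UUID'    # placeholder: an existing subnet
    ext_net_id = 'EXTERNAL_NET_UUID'    # placeholder: the external network

    # Create a virtual router; it is scheduled to a neutron-l3-agent.
    router = neutron.create_router({'router': {'name': 'demo-router'}})
    router_id = router['router']['id']

    # Plug the tenant subnet into the router.
    neutron.add_interface_router(router_id, {'subnet_id': subnet_id})

    # Use the external network as the router's gateway for NAT.
    neutron.add_gateway_router(router_id, {'network_id': ext_net_id})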

In the current reference architecture available using the upstream code, the neutron-l3-agent is placed on a dedicated node or nodes, usually bare-metal machines referred to as “Network Nodes”. Until now, you could utilize multiple Network Nodes to achieve load sharing by scheduling different virtual routers on different nodes, but not high availability or redundancy between the nodes. The challenge with this model is that all the routing for the OpenStack cloud happens at a centralized point. This introduces two main concerns:

1. Each Network Node becomes a single point of failure (SPOF).
2. Whenever routing is needed, packets from the source instance have to go through a router on a Network Node before being sent to the destination. This centralized routing creates a resource bottleneck and unoptimized traffic flows.

Two Juno efforts aim to address these issues. One is a proposal to add high availability to the Network Nodes, so that when one node fails the others can take over automatically; this implementation uses the well-known VRRP protocol internally. The second is to introduce distributed virtual routing (DVR) functionality by placing the neutron-l3-agent on the Compute nodes (hypervisors) themselves. In contrast to the Network Nodes approach, a deployment using distributed virtual routing will require external network access on each Compute node.
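
Assuming these blueprints land as proposed, both behaviors are expected to be exposed as boolean attributes on the router resource. This is a hypothetical sketch rather than a final API, as the details may still change during the cycle:

    from neutronclient.v2_0 import client

    # Admin credentials are placeholders; creating HA/DVR routers is
    # expected to be an admin-level operation.
    neutron = client.Client(username='admin', password='secret',
                            tenant_name='admin',
                            auth_url='http://controller:5000/v2.0')

    # Hypothetical: an HA router backed by VRRP across Network Nodes.
    neutron.create_router({'router': {'name': 'ha-router',
                                      'ha': True}})

    # Hypothetical: a distributed router hosted on the Compute nodes.
    neutron.create_router({'router': {'name': 'dvr-router',
                                      'distributed': True}})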

Ideally, customers will have the option to choose the model that best suits their needs, or even to combine them to enjoy the benefits of each: distributed virtual routing (DVR) to handle routing within the OpenStack cloud (also known as east-west traffic) as well as 1:1 NAT for floating IPs, and highly-available Network Nodes to handle the centralized source NAT (SNAT) that allows instances to have basic outgoing connectivity, as well as advanced services such as virtual private networks or firewalls - which by design require seeing both directions of the traffic flow in order to operate properly. Assaf Muller, a Red Hat associate who contributes in this area, covers this in more detail in this excellent blog post.

While both of these upstream efforts seem to be interrelated at first glance, it’s important to mention that during the Juno cycle these were two separate efforts; combining them into a unified solution as described earlier is something to look for in future releases, and a topic that will be further discussed at the upcoming Kilo Design Summit.

Time for Some IPv6

IPv6 is a critical part of the promise of the cloud. If you want to connect everything to the network, you had better plan for massive scale and have enough addresses to go around. IPv6 is also increasingly important in the network functions virtualization (NFV) and telecommunications service provider space.

One of the big items that we expect to land in the Juno release is more complete support for IPv6 networking. This is an important milestone for IPv6 in Neutron: the topic has been a development focus for the last few cycles, and while the API layer for IPv6 subnet attributes was already defined, Juno would be the first release to actually introduce features on top of it.

The Juno features mostly concentrate on IPv6 address assignment for tenant instances. While IPv4 is fairly straightforward when it comes to IP address assignment (and DHCP is by far the most common deployment in production with IPv4), IPv6 offers more flexibility and options to choose from. Both stateful and stateless DHCPv6 are expected to be supported in OpenStack Neutron for the Juno release, as well as the ability to use Stateless Address Autoconfiguration (SLAAC).
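
The address-assignment mode is expected to be selected per subnet through the ipv6_ra_mode and ipv6_address_mode attributes of the API layer mentioned above. A minimal sketch, assuming a client set up as in the earlier examples and a placeholder network UUID:

    from neutronclient.v2_0 import client

    neutron = client.Client(username='demo', password='secret',
                            tenant_name='demo',
                            auth_url='http://controller:5000/v2.0')

    net_id = 'NETWORK_UUID'    # placeholder for an existing network

    # SLAAC: the router advertises the prefix and instances derive
    # their own addresses from it.
    neutron.create_subnet({'subnet': {
        'network_id': net_id,
        'ip_version': 6,
        'cidr': '2001:db8:1::/64',
        'ipv6_ra_mode': 'slaac',
        'ipv6_address_mode': 'slaac',
    }})

    # Stateful DHCPv6: addresses are leased to instances by a DHCPv6
    # server (a corresponding 'dhcpv6-stateless' mode is expected too).
    neutron.create_subnet({'subnet': {
        'network_id': net_id,
        'ip_version': 6,
        'cidr': '2001:db8:2::/64',
        'ipv6_ra_mode': 'dhcpv6-stateful',
        'ipv6_address_mode': 'dhcpv6-stateful',
    }})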

Get Started with OpenStack Neutron

If you want to try out OpenStack, or to check out some of the above enhancements for yourself, you are more than welcome to visit our RDO site. We have documentation to help you get started, forums where you can connect with other users, and community-supported packages of the most up-to-date OpenStack releases available for download.

If you are looking for enterprise-level support and our partner certification program, Red Hat also offers Red Hat Enterprise Linux OpenStack Platform.