Network troubleshooting can be hard. Network troubleshooting in a complex distributed system like OpenStack can be even harder. With a typical Neutron deployment using the Open vSwitch (OVS) plug-in, one can expect rich networking configurations on the Red Hat Enterprise Linux OpenStack Platform nodes, the Compute and Controller nodes in particular.

While the network implementation details are well hidden from the end customer (who interfaces with the Neutron API or the Horizon Dashboard), the actual backend implementation involves the creation of various Linux devices, bridges, tunnel interfaces, and network namespaces. This is where the “magic” happens: it is how OpenStack tenants can create and consume network resources such as networks, IP subnets, and virtual routers, and how their applications get proper connectivity.

One of the skills an OpenStack administrator must have is the ability to effectively troubleshoot network problems. The most important part of troubleshooting any problem is to break it down into a systematic process. And while there are several approaches to network troubleshooting, it is well understood that two fundamental steps always come first:

  1. Define the problem
  2. Gather detailed information

Interestingly enough, these straightforward steps are sometimes the most complicated and time-consuming. Maintaining up-to-date information about the network topology and configuration has always been a challenge for network administrators, both from the physical network perspective - the switches, routers and other network devices - and in the “virtual” domain, comprised of the server and hypervisor OpenStack nodes. With a self-service environment like OpenStack, where configuration changes are made on the fly, this is not getting any easier. That, of course, makes step 2 above a real challenge, and administrators can spend a lot of time just trying to understand the actual network setup.

Enter ‘plotnetcfg’

To address this, the plotnetcfg tool has recently been added to RHEL OpenStack Platform 7. This simple yet powerful tool provides a useful method for visualizing the network configuration of a host, with the ultimate goal of providing a real-time snapshot of the various network settings.

It can be used to capture the host network configuration quickly and reliably, sparing the administrator from running a series of manual commands (such as “ip netns”, “ip address”, “ovs-vsctl show” and others) just to understand the network layout. Note that the tool is limited to one host at a time, but once output is generated from the relevant hosts, an administrator should be able to correlate key information more easily.
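For reference, collecting the same information by hand typically involves a series of commands like the following (a minimal sketch; the exact output and device names will vary per deployment):

    # List all network namespaces on the host
    ip netns

    # Show addresses and state for every network interface
    ip address

    # Show Linux bridges and their attached ports
    brctl show

    # Show the OVS bridges, their ports and local VLAN tags
    ovs-vsctl show

plotnetcfg collapses all of this into a single graph, with the relationships between the devices already drawn.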

There are two main ways in which the tool can be utilized in a RHEL OpenStack Platform system:

  • As an integrated part of the sosreport command - so that each time an administrator uses sosreport to collect diagnostic information from a Red Hat Enterprise Linux system, the plotnetcfg output is now included as well. This is usually done to provide Red Hat’s Technical Support organization with useful information when submitting a support ticket.
  • By running the plotnetcfg command directly from the CLI. The command is very simple to use and plots a visual diagram of the configuration hierarchy. It is also possible to save the output as a PDF file for later analysis by running plotnetcfg | dot -Tpdf > file.pdf, as shown in the example below.
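For example, a typical session on a RHEL OpenStack Platform node might look like this (assuming the plotnetcfg and graphviz packages are available in your enabled repositories; the output file name is arbitrary):

    # Install the tool and Graphviz (which provides the 'dot' renderer)
    yum install -y plotnetcfg graphviz

    # Print the configuration graph in Graphviz dot format
    plotnetcfg

    # Render the graph to a PDF for later analysis
    plotnetcfg | dot -Tpdf > network-config.pdf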

Example

To see the tool in action, let’s examine some example outputs from a typical RHEL OpenStack Platform environment. This environment is running Neutron with ML2 as the core plug-in and the Open vSwitch (OVS) mechanism driver.
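If you want to confirm the plug-in configuration on your own nodes before comparing diagrams, a quick check might look like this (a sketch; the exact configuration file paths can vary between deployments):

    # Confirm the core plug-in in use
    grep ^core_plugin /etc/neutron/neutron.conf

    # Confirm the ML2 mechanism driver(s)
    grep ^mechanism_drivers /etc/neutron/plugins/ml2/ml2_conf.ini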

[Figure: plotnetcfg output from a Compute node (compute1.png)]

This first diagram was taken from a RHEL OpenStack Platform Compute node. It is easy to see the OVS integration bridge (br-int), which is the central bridge connecting all VMs on the host. We can also see:

  • The “qbr” Linux bridge devices connected to br-int, together with the “qvo” and “qvb” veth pairs. All of these are required so that the iptables rules which implement Neutron Security Groups can be applied to VM traffic; iptables rules cannot be applied directly to OVS ports.
  • The VLAN tags being used locally on this host to provide separation between VMs of different tenants.
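To correlate the diagram with the live state of the Compute node, the same elements can be inspected manually (a hedged sketch; the bridge and port names are generated per VM port, so yours will differ):

    # The per-VM Linux bridges ("qbr...") and their "qvb"/tap ports
    brctl show

    # The OVS side: "qvo..." ports on br-int, each with its local VLAN tag
    ovs-vsctl show

    # The iptables rules that implement the Security Groups
    iptables -S | grep -i neutron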

[Figure: plotnetcfg output from the Controller node (net1.png)]

This second example was taken from the Controller node, which also hosts the Neutron L3 and DHCP services. We can see the OVS external bridge (br-ex), which is used to provide connectivity to/from the OpenStack cloud. We can also see:

  • A “qrouter” network namespace (ip netns) representing a Neutron router. The “qg” OVS internal interface inside the namespace holds the router’s external gateway address, while the “qr” interfaces hold the tenant-facing default-gateway addresses.
  • “qdhcp” network namespaces (ip netns) being used to provide IP address assignment for Neutron subnets. This is where we would expect a dnsmasq process to run and serve VM instances with DHCP information.
  • The VLAN tags being used locally on this host to provide separation between router or DHCP namespaces of different tenants.
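Again, the namespaces shown in the diagram can be examined directly on the Controller node (a sketch; substitute the real router UUID from your own environment):

    # List the qrouter and qdhcp namespaces on this node
    ip netns

    # Inspect the interfaces inside a router namespace
    ip netns exec qrouter-<router-uuid> ip address

    # Confirm that a dnsmasq process is serving the DHCP namespaces
    ps aux | grep dnsmasq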

Want to try it out for yourself? Sign up for an evaluation of Red Hat Enterprise Linux OpenStack Platform today! We would love to get your feedback on this feature as we work on integrating more troubleshooting capabilities into RHEL OpenStack Platform. Simply visit the Red Hat Customer Portal and open a new ticket.