Voting Open for OpenStack Summit Tokyo Submissions: OpenStack for the Enterprise

In the lead up to OpenStack Summit Hong Kong, the last OpenStack Summit held in the Asia-Pacific region, Radhesh Balakrishnan – General Manager for OpenStack at Red Hat – defined this site as the place to follow us on our journey taking community projects to enterprise products and solutions.

We are excited to now be preparing to head back to the Asia-Pacific region for OpenStack Summit Tokyo – October 27-30 – to share just how far we have come on that journey, with a host of session proposals focusing on enterprise requirements and the success of OpenStack in this space. As always, the OpenStack Foundation lets its members choose the topics and presentations they would like to see by voting.

To vote, click on a session title below and you will be directed to the voting page. If you are a member of the OpenStack Foundation, just log in. If you are not, you are welcome to join now – it is simple and free.

Vote for your favorites by midnight Pacific Standard Time on July 30th and we will see you in Tokyo!

Is OpenStack ready for the enterprise? Is the enterprise ready for OpenStack?

Can I use OpenStack to build an enterprise cloud?
  • Alessandro Perilli – General Manager, Cloud Management Strategies @ Red Hat
Elephant in the Room: What’s the TCO for an OpenStack cloud?
  • Massimo Ferrari – Director, Cloud Management Strategy @ Red Hat
  • Erich Morisse – Director, Cloud Management Strategy @ Red Hat
The Journey to Enterprise Primetime
  • Arkady Kanevsky – Director of Development @ Dell
  • Das Kamhout – Principal Engineer @ Intel
  • Fabio Di Nitto – Manager, Software Engineering @ Red Hat
  • Nick Barcet – Director of OpenStack Product Management @ Red Hat
Organizing IT to Deliver OpenStack
  • Brent Holden – Chief Cloud Architect @ Red Hat
  • Michael Solberg – Chief Field Architect @ Red Hat
How Customers use OpenStack to deliver Business Applications
  • Matthias Pfützner – Cloud Solution Architect @ Red Hat
Stop thinking traditional infrastructure – Think Cloud! A recipe to build a successful cloud environment
  • Laurent Domb – Cloud Solution Architect @ Red Hat
  • Narendra Narang – Cloud Storage Solution Architect @ Red Hat
Breaking the OpenStack Dream – OpenStack deployments with business goals in mind
  • Laurent Domb – Cloud Solution Architect @ Red Hat
  • Narendra Narang – Cloud Storage Solution Architect @ Red Hat

Enterprise Success Stories

OpenStack for robust and reliable enterprise private cloud: An analysis of current capabilities, gaps, and how they can be addressed.
  • Tushar Katarki – Integration Architect @ Red Hat
  • Rama Nishtala – Architect @ Cisco
  • Nick Gerasimatos – Senior Director of Cloud Services – Engineering @ FICO
  • Das Kamhout – Principal Engineer @ Intel
Verizon’s NFV Learnings
  • Bowen Ross – Global Account Manager @ Red Hat
  • David Harris – Manager, Network Element Evolution Planning @ Verizon
Cloud automation with Red Hat CloudForms: Migrating 1000+ servers from VMware to OpenStack
  • Lan Chen – Senior Consultant @ Red Hat
  • Bill Helgeson – Principal Domain Architect @ Red Hat
  • Shawn Lower – Enterprise Architect @ Red Hat

Solutions for the Enterprise

RHCI: A comprehensive Solution for Private IaaS Clouds
  • Todd Sanders – Director of Engineering @ Red Hat
  • Jason Rist – Senior Software Engineer @ Red Hat
  • John Matthews – Senior Software Engineer @ Red Hat
  • Tzu-Mainn Chen – Senior Software Engineer @ Red Hat
Cisco UCS Integrated Infrastructure for Red Hat OpenStack
  • Guil Barros – Principal Product Manager, OpenStack @ Red Hat
  • Vish Jakka – Product Manager, UCS Solutions @ Cisco
Cisco UCS & Red Hat OpenStack: Upstream Partnership to Streamline OpenStack
  • Guil Barros – Principal Product Manager, OpenStack @ Red Hat
  • Vish Jakka – Product Manager, UCS Solutions @ Cisco
  • Arek Chylinski – Technologist @ Intel
Deploying and Integrating OpenShift on Dell’s OpenStack Cloud Reference Architecture
  • Judd Maltin – Systems Principal Engineer @ Dell
  • Diane Mueller – Director Community Development, OpenShift @ Red Hat
Scalable and Successful OpenStack Deployments on FlexPod
  • Muhammad Afzal – Architect, Engineering @ Cisco
  • Dave Cain – Reference Architect and Technical Marketing Engineer @ NetApp
Simplifying OpenStack in the Enterprise with Cisco and Red Hat
  • Karthik Prabhakar – Global Cloud Technologist @ Red Hat
  • Duane DeCapite – Director of Product Management, OpenStack @ Cisco
It’s a team sport: building a hardened enterprise ecosystem
  • Hugo Rivero – Senior Manager, Ecosystem Technology Certification @ Red Hat
Dude, this isn’t where I parked my instance!?
  • Steve Gordon – Senior Technical Product Manager, OpenStack @ Red Hat
Libguestfs: the ultimate disk-image multi-tool
  • Luigi Toscano – Senior Quality Engineer @ Red Hat
  • Pino Toscano – Software Engineer @ Red Hat
Which Third party OpenStack Solutions should I use in my Cloud?
  • Rohan Kande – Senior Software Engineer @ Red Hat
  • Anshul Behl – Associate Quality Engineer @ Red Hat

Securing OpenStack for the Enterprise

Everything You Need to Know to Secure an OpenStack Cloud (but Were Afraid to Ask)
  • Jonathan Gershater – Senior Principal Product Marketing Manager @ Red Hat
  • Ted Brunell – Senior Solution Architect @ Red Hat
Towards a more Secure OpenStack Cloud
  • Paul Lancaster – Strategic Partner Development Manager @ Red Hat
  • Malini Bhandaru – Architect & Engineering Manager @ Intel
  • Dan Yocum – Senior Operations Manager @ Red Hat
Hands-on lab: configuring Keystone to trust your favorite OpenID Connect Provider.
  • Pedro Navarro Perez – OpenStack Specialized Solution Architect @ Red Hat
  • Francesco Vollero – OpenStack Specialized Solution Architect @ Red Hat
  • Pablo Sanchez – OpenStack Specialized Solution Architect @ Red Hat
Securing OpenStack with Identity Management in Red Hat Enterprise Linux
  • Nathan Kinder – Software Engineering Manager @ Red Hat
Securing your Application Stacks on OpenStack
  • Jonathan Gershater – Senior Principal Product Marketing Manager @ Red Hat
  • Diane Mueller – Director, Community Development for OpenShift @ Red Hat

Celebrating Kubernetes 1.0 and the future of container management on OpenStack

This week, together with Google and others, we celebrated the launch of Kubernetes 1.0 at OSCON in Portland, as well as the launch of the Cloud Native Computing Foundation, or CNCF (https://cncf.io/), of which Red Hat, Google, and others are founding members. Kubernetes is an open source system for managing containerized applications that provides basic mechanisms for the deployment, maintenance, and scaling of applications. The project was originally created by Google and is now developed by a vibrant community of contributors, including Red Hat.

As a leading contributor to both Kubernetes and OpenStack, we were also delighted to recently welcome Google to the OpenStack Foundation. We look forward to continuing to work with Google and others on combining the container orchestration and management capabilities of Kubernetes with the infrastructure management capabilities of OpenStack.

Red Hat has invested heavily in Kubernetes since joining the project shortly after it was launched in June 2014, and is now the largest corporate contributor of code to the project other than Google itself. The recently announced release of Red Hat’s platform-as-a-service offering, OpenShift v3, is built around Kubernetes as the framework for container orchestration and management.

As a founding member of the OpenStack Foundation, we have been working on simplifying the task of deploying and managing container hosts – using Project Atomic – and configuring a Kubernetes cluster on top of OpenStack infrastructure using the Heat orchestration engine.

To that end, Red Hat engineering created the heat-kubernetes orchestration templates to help accelerate research and development into deeper integration between Kubernetes and the underlying OpenStack infrastructure. The templates continue to evolve to cover other aspects of container workload management, such as auto-scaling, and were recently demonstrated at Red Hat Summit.
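For readers who want to experiment, the templates can be launched directly with the Heat CLI. A minimal sketch, assuming the upstream larsks/heat-kubernetes repository; the parameter values are illustrative, so consult the repository README for the actual parameter names:

git clone https://github.com/larsks/heat-kubernetes
cd heat-kubernetes
# create a Kubernetes cluster as a Heat stack; -P values are placeholders
heat stack-create my-kube-cluster -f kubecluster.yaml \
    -P "ssh_key_name=mykey;external_network=public"

Once the stack reaches CREATE_COMPLETE, the cluster addresses appear in the stack outputs (heat stack-show my-kube-cluster).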

The heat-kubernetes templates were also ultimately leveraged in bootstrapping the OpenStack Magnum project, which provides an OpenStack API for provisioning container clusters using underlying orchestration technologies including Kubernetes. The aim is to make containers first-class citizens within OpenStack, just like virtual machines and bare metal before them, with the ability to share tenant infrastructure resources (e.g. networking and storage) with other OpenStack-managed virtual machines, bare-metal hosts, and the containers running on them.

Providing this level of integration requires providing or expanding OpenStack implementations of existing Kubernetes plug-in points, as well as defining new plug-in APIs where necessary, while maintaining the technical independence of the solution. All this must be done while allowing application workloads to remain independent of the underlying infrastructure, allowing for true open hybrid cloud operation. Similarly, on the OpenStack side, additional work is required so that the infrastructure services are able to support the use cases presented by container-based workloads, and to remove redundancies between the application workloads and the underlying hardware to optimize performance while still providing for secure operation.
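Magnum’s workflow at this stage is straightforward to sketch with its CLI. The following is illustrative only, loosely following the project’s developer quickstart of the time; the image, keypair, and network names are placeholders:

# define a bay model: the template describing the cluster nodes
magnum baymodel-create --name k8sbaymodel \
    --image-id fedora-21-atomic \
    --keypair-id testkey \
    --external-network-id public \
    --flavor-id m1.small \
    --coe kubernetes

# create a two-node Kubernetes bay from that model
magnum bay-create --name k8sbay --baymodel k8sbaymodel --node-count 2

The --coe flag selects the container orchestration engine, which is how Magnum keeps Kubernetes one pluggable choice among the supported orchestration technologies.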

Containers on OpenStack Architecture

Magnum, and the OpenStack Containers Team, provide a focal point to coordinate these research and development efforts across multiple upstream projects as well as other projects within the OpenStack ecosystem itself to achieve the goal of providing a rich container-based experience on OpenStack infrastructure.

As a leading contributor to both OpenStack and Kubernetes we at Red Hat look forward to continuing to work on increased integration with both the OpenStack and Kubernetes communities and our technology partners at Google as these exciting technologies for managing the “data-centers of the future” converge.

Containerize OpenStack with Docker

Written by: Ryan Hallisey

Today in the cloud space, much of the market buzz centers on Docker and on support for launching containers on top of an existing platform. What is often overlooked, however, is the use of Docker to improve deployment of the infrastructure platforms themselves – in other words, the ability to ship your cloud in containers.



Ian Main and I took hold of a project within the OpenStack community to address this unanswered question: Project Kolla. As one of the founding members and core developers of the project, I figured we should start by using Kolla’s containers to get this work off the ground. We began by deploying containers one by one in an attempt to get a functioning stack. Unfortunately, not all of Kolla’s containers were in great shape, and they were designed to be deployed by Kubernetes. We decided to get the containers working first and deal with how they are managed later. In the short term we used a bash script to launch our containers, but it got messy: Kubernetes had been opening ports to the host and declaring environment variables for the containers, and we needed to do the same. Eventually we upgraded the design to use an environment file populated by a script, which proved more effective. This design was adopted by Kolla and is still in use today [1].
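The pattern that survived into Kolla is easy to sketch: a script writes the shared settings into an environment file, and every container is launched with --env-file so it picks up the same configuration. The variable names here are illustrative, not Kolla’s actual ones:

# populate the shared environment file
cat > openstack.env <<EOF
DB_ROOT_PASSWORD=mysqlpass
KEYSTONE_ADMIN_TOKEN=supersecret
EOF

# each service container reads its settings at launch
docker run -d --net=host --env-file=openstack.env \
    kollaglue/centos-rdo-keystone:latest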

With our setup script in place, we started a hierarchical descent through the OpenStack services, starting with MariaDB, RabbitMQ, and Keystone. Kolla’s containers were in great shape for these three services, and we were able to get them working relatively quickly. Glance was next, and it proved to be quite a challenge. We quickly learned that the Glance API and Keystone containers were causing one another to fail.


The culprit was that the Glance API and Keystone containers were racing to see which could create the admin user first. Oddly enough, these containers worked under Kubernetes, but I then realized that Kubernetes restarts containers until they succeed, masking the race condition we were seeing. To get around this, we made Glance and the rest of the services wait for Keystone to be active before starting. Later, we pushed this design into Kolla and learned that Docker has a restart flag that will force containers to restart if there is an error [2]. We added the restart flag to our design so that containers remain independent of one another.
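With stock Docker that is a one-line change at launch time; for example, retrying a container up to ten times on a non-zero exit (image name illustrative):

docker run -d --restart=on-failure:10 kollaglue/centos-rdo-glance-api:latest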

The most challenging service to containerize was Nova. Nova presented a unique challenge, not only because it was made up of the largest number of containers, but because it required the use of super privileged containers. We started off using Kolla’s containers but quickly learned that many components were missing. Most significantly, the Nova Compute and Libvirt containers were not mounting the correct host directories, exposing us to one of the biggest hurdles in containerizing Nova: persisting data so that instances still exist after you kill the container. For that to work, Nova Compute and Libvirt needed to mount /var/lib/nova and /var/lib/libvirt from the host into the container. That way, the data for the instances is stored on the host and not in the container [3].

 

echo Starting nova compute
docker run -d --privileged \
           --restart=always \
           -v /sys/fs/cgroup:/sys/fs/cgroup \
           -v /var/lib/nova:/var/lib/nova \
           -v /var/lib/libvirt:/var/lib/libvirt \
           -v /run:/run \
           -v /etc/libvirt/qemu:/etc/libvirt/qemu \
           --pid=host --net=host \
           --env-file=openstack.env kollaglue/centos-rdo-nova-compute-nova:latest

 

A second issue we encountered when trying to get the Nova Compute container working was that we were using an outdated version of Nova. The Nova Compute container was built from Fedora 20 packages, while the other services were using Fedora 21. This was our first taste of performing an upgrade with containers. To fix the problem, all we had to do was change where Docker pulled the packages from and rebuild the container – effectively a one-line change in the Dockerfile:

FROM fedora:20
MAINTAINER Kolla Project (https://launchpad.net/kolla)

to

FROM fedora:21
MAINTAINER Kolla Project (https://launchpad.net/kolla)

OpenStack services have independent lifecycles, which makes rolling upgrades and downgrades difficult. Containers can bridge this gap by providing an easy way to handle upgrading and downgrading your stack.
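In practice, an upgrade reduces to pulling the rebuilt image and relaunching with the same bind mounts, so the instance data on the host survives the swap. A sketch (the --name flag is added here for convenience and was not part of our original script):

# fetch the rebuilt image
docker pull kollaglue/centos-rdo-nova-compute-nova:latest

# replace the running container; host-mounted state is untouched
docker stop nova-compute && docker rm nova-compute
docker run -d --name nova-compute --privileged \
    -v /var/lib/nova:/var/lib/nova \
    -v /var/lib/libvirt:/var/lib/libvirt \
    --pid=host --net=host --env-file=openstack.env \
    kollaglue/centos-rdo-nova-compute-nova:latest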

Once we completed our maintenance on the Kolla containers, we turned our focus to TripleO [4]. TripleO is a project in the OpenStack community that aims to install and manage OpenStack. The name means OpenStack On OpenStack: TripleO deploys a so-called undercloud and uses that OpenStack setup to deploy an overcloud, also known as the user cloud.

Our goal was to use the undercloud to deploy a containerized overcloud on bare metal. In our design, we chose to deploy our overcloud on top of Red Hat Enterprise Linux Atomic Host [5]. Atomic is a bare-bones, Red Hat Enterprise Linux-based operating system designed to run containers. It was a perfect fit: a spare, simple environment with a nice set of tools for launching containers.

 

[heat-admin@t1-oy64mfeu2t3-0-zsjhaciqzvxs-controller-twdtywfbcxgh ~]$ atomic --help
Atomic Management Tool

positional arguments:
  {host,info,install,stop,run,uninstall,update}
                        commands
    host                execute Atomic host commands
    info                display label information about an image
    install             execute container image install method
    stop                execute container image stop method
    run                 execute container image run method
    uninstall           execute container image uninstall method
    update              pull latest container image from repository

optional arguments:
  -h, --help            show this help message and exit

 

Next, we had help from Rabi Mishra in creating a Heat hook that allows Heat to orchestrate container deployment. Since we were on Red Hat Enterprise Linux Atomic Host, the hook ran in a container and started the Heat agents, allowing Heat to communicate with Docker [6]. Now we had all the pieces we needed.

To integrate our container work with TripleO, it was easiest to copy Puppet’s overcloud deployment implementation and apply our work to it. For our environment we used devtest, the TripleO developer environment, and started to build a new Heat template. One of the biggest differences between using containers and Puppet was that Puppet required a lot of setup and configuration to make sure dependencies were resolved and services were properly configured. We didn’t need any of that. With Puppet, the dependency list looked like this [7]:

 

puppetlabs-apache
puppet-ceph

44 packages later…

puppet-openstack_extras
puppet-tuskar

 

With Docker, we were able to replace all of that with:

 

atomic install kollaglue/centos-rdo-<service>

 

We were able to reuse the majority of the existing environment, but starting services was now significantly simpler.

Unfortunately, we were unable to get results for some time because we struggled to deploy a bare metal Red Hat Enterprise Linux Atomic Host instance. After consulting Lucas Gomes on Red Hat’s Ironic (bare metal deployment service) team, we learned there was an easier way to accomplish what we were trying to do: he pointed us to a new Ironic feature that added support for whole-disk image deployment [8]. Although there was a bug in Ironic when using the new feature, we fixed it and soon saw our Red Hat Enterprise Linux Atomic Host running. Past that hurdle, we could finally create images and add users, but Nova Compute and Libvirt didn’t work. The problem was that Red Hat Enterprise Linux Atomic Host wasn’t loading the kernel modules for KVM. On top of that, Libvirt needed permission to access /dev/kvm and wasn’t getting it.
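Loading the missing modules is a quick fix on the host (a sketch assuming an Intel system; use kvm_amd on AMD hardware), while the wrapper script below takes care of the /dev/kvm permissions:

modprobe kvm
modprobe kvm_intel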

 

#!/bin/sh

chmod 660 /dev/kvm
chown root:kvm /dev/kvm

echo "Starting libvirtd."
exec /usr/sbin/libvirtd

 

Upon fixing these issues, we could finally spawn instances. Later, these changes were adopted by Kolla because they addressed a corner case that could cause Libvirt to fail [9].

To summarize, we created a containerized OpenStack solution inside the TripleO installer project using containers from the Kolla project. We mirrored the TripleO workflow by using the undercloud (management cloud) to deploy most of the core services in the overcloud (user cloud), but now those services are containerized. The services we used were Keystone, Glance, and Nova, with services like Neutron, Cinder, and Heat soon to follow. Our solution uses Heat (the orchestration service) to deploy the containerized OpenStack services onto Red Hat Enterprise Linux Atomic Host, and it can plug right into the TripleO-heat-templates. Normally Puppet is used to deploy an overcloud, but now we have proven you can use containers. What is really unique is that you can now shop for your configuration in the Docker registry instead of going through Puppet to set up your services: you can pull down a container where each service comes with the configuration you need. Through this work we have shown that containers are an alternative deployment method within TripleO, one that can simplify deployment and add choice in how your cloud is installed.

The benefits of using Docker for a regular application carry over to having your cloud run in containers: reliability, portability, and easy lifecycle management. With containers, lifecycle management greatly improves on TripleO’s existing solution. Upgrading and downgrading an OpenStack service becomes far simpler, creating faster turnaround times so that your cloud is always running the latest and greatest. Ultimately, this solution provides an additional method within TripleO to manage the cloud’s upgrades and downgrades, supplementing what TripleO currently offers.

Overall, integrating with TripleO works really well because OpenStack provides powerful services to assist in container deployment and management. Specifically, TripleO is advantageous because of services like Ironic (the bare metal provisioning service) and Heat (the orchestration service), which provide a strong management backbone for your cloud. Containers are an integral piece of this system, providing a simple and granular way to perform lifecycle management for your cloud. From my work, it is clear that the cohesive relationship between containers and TripleO opens a new and improved avenue for deploying the cloud and getting it working the way you see fit.

TripleO is a fantastic project, and with the integration of containers I am hoping to energize and continue building the community around it. Using our integration as proof of the project’s capabilities, we have shown that TripleO provides an excellent management infrastructure underneath your cloud, one that allows projects to be properly managed and to grow.

 

[1] https://github.com/stackforge/kolla/commit/dcb607d3690f78209afdf5868dc3158f2a5f4722
[2] https://docs.docker.com/reference/commandline/cli/#restart-policies
[3] https://github.com/stackforge/kolla/blob/master/docker/nova-compute/nova-compute-data/Dockerfile#L4-L5
[4] https://www.rdoproject.org/Deploying_RDO_using_Instack
[5] http://www.projectatomic.io/
[6] https://github.com/rabi/heat-templates/blob/boot-config-atomic/hot/software-config/heat-docker-agents/Dockerfile
[7] http://git.openstack.org/cgit/openstack/TripleO-puppet-elements/tree/elements/puppet-modules/source-repository-puppet-modules
[8] https://blueprints.launchpad.net/ironic/+spec/whole-disk-image-support
[9] https://github.com/stackforge/kolla/commit/08bd99a50fcc48539e69ff65334f8e22c4d25f6f

Survey: OpenStack users value portability, support, and complementary open source tools

75 percent of the respondents in a recent survey [1] conducted for Red Hat said that being able to move OpenStack workloads to different providers or platforms was important (ranked 4 or 5 out of 5), and a mere 5 percent ranked it least important. This was just one of the answers that highlighted a general desire to avoid proprietary solutions and lock-in.

For example, only a minority (47 percent) said that differentiated, vendor-specific management and other tooling was important, while a full 75 percent said that support for complementary open source cloud management, operating system, and development tools was. With respect to management specifically, only 22 percent plan to use vendor-specific tools to manage their OpenStack environments. By contrast, a majority (51 percent) plan to use the tools built into OpenStack, in many cases complemented by open source configuration management (31 percent) and cloud management platforms (21 percent). It is worth noting, though, that 42 percent of those asked about OpenStack management tools said they were unsure or undecided, indicating that there is still a lot of learning ahead with respect to cloud implementations in general.

This last point was reinforced by the fact that 68 percent said the availability of training and services from the vendor to on-ramp their OpenStack project was important. (Red Hat offers a Certified System Administrator in Red Hat OpenStack certification as well as a variety of solutions to build clouds through eNovance by Red Hat.) 45 percent also cited a lack of internal IT skills as a barrier to adopting OpenStack. Other aspects of commercial support were valued as well: 60 percent said that hardware and software certifications are important, and a full 82 percent said that production-level technical support was.

Continue reading “Survey: OpenStack users value portability, support, and complementary open source tools”

OPNFV Arno hits the streets

The first release of the OPNFV project, Arno, is now available. The release, named after the Italian river which flows through the city of Florence on its way to the Mediterranean Sea, is the result of significant industry collaboration, starting from the creation of the project in October 2014.

This first release establishes a strong foundation for us to work together to create a great platform for NFV. We have multiple hardware labs, running multiple deployments of OpenStack and OpenDaylight, all deployed with one-step, automated deployment tools. A set of automated tests validate that deployments are functional, and provide a framework for the addition of other tests in the future. Finally, we have a good shared understanding of the problem space, and have begun to engage with upstream projects like OpenDaylight and OpenStack to communicate requirements and propose feature additions to satisfy them.

A core value of OPNFV is “upstream first” – the idea that changes required to open source projects for NFV should happen with the communities in those projects. This is a core value for Red Hat too, and we have been happy to take a leadership role in coordinating the engagement of OPNFV members in projects like OpenDaylight and OpenStack. Red Hat engineers Tim Rozet and Dan Radez have taken a leadership role in putting together one of the two deployment options for OPNFV Arno, the Foreman/Quickstack installer, based on CentOS, RDO and OpenDaylight packages created by another Red Hat engineer, Daniel Farrell. We have been proud to play a significant part, with other members of the OPNFV community, in contributing to this important mission.

Continue reading “OPNFV Arno hits the streets”

Public vs Private, Amazon compared to OpenStack


How to choose a cloud platform and when to use both

The public vs private cloud debate is a well-trodden path. While technologies and offerings abound, there is still confusion among organizations as to which platform is suited to their agility needs. One of the key benefits of a cloud platform is the ability to spin up compute, networking, and storage quickly when users request these resources, and likewise to decommission them when they are no longer required. Among public cloud providers, Amazon holds a market share ahead of Google, Microsoft, and others. Among private cloud platforms, OpenStack® presents a viable alternative to Microsoft or VMware.

This article compares Amazon Web Services EC2 and OpenStack® as follows:

  • What technical features do the two platforms provide?
  • How do the business characteristics of the two platforms compare?
  • How do the costs compare?
  • How to decide which platform to use and how to use both

OpenStack® and Amazon Web Services (AWS) EC2 defined

From OpenStack.org: “OpenStack software controls large pools of compute, storage, and networking resources throughout a datacenter, managed through a dashboard or via the OpenStack API. OpenStack works with popular enterprise and open source technologies making it ideal for heterogeneous infrastructure.”

From AWS: “Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides resizable compute capacity in the cloud. It is designed to make web-scale cloud computing easier for developers.”
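To make the comparison concrete before diving into the tables, launching a small instance looks broadly similar on the two platforms. An illustrative sketch only; image IDs, flavors, and key names are placeholders:

# OpenStack: boot an instance with the nova CLI
nova boot --flavor m1.small --image fedora-21 --key-name mykey myserver

# AWS EC2: the equivalent with the aws CLI
aws ec2 run-instances --image-id ami-12345678 \
    --instance-type t2.small --key-name mykey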

Technical comparison of OpenStack® and AWS EC2

The tables below name and briefly describe the features in OpenStack® and AWS.

Continue reading “Public vs Private, Amazon compared to OpenStack”

The Age of Cloud File Services

The new OpenStack Kilo upstream release, which became available on April 30, 2015, marks a significant milestone for Manila, the shared file system service project for OpenStack, with an increase in development capacity and extensive vendor adoption. The project was kicked off three years ago, became incubated during 2014, and now moves to the front of the stage at this month’s OpenStack conference in Vancouver, with customer stories of Manila deployments in enterprise and telco environments.
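For readers new to Manila, the user-facing workflow is deliberately simple. A minimal sketch with the manila CLI; the share name, access range, and export location are illustrative:

# create a 1 GB NFS share and look up its export location
manila create NFS 1 --name myshare
manila show myshare

# grant access to a client subnet, then mount from a client
manila access-allow myshare ip 10.0.0.0/24
mount -t nfs <export_location> /mnt/myshare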

The project was originally sponsored and accelerated by NetApp and Red Hat, and it has established a very rich community that includes code contributions from companies such as EMC, Deutsche Telekom, HP, Hitachi, Huawei, IBM, Intel, Mirantis, and SUSE.

The momentum behind cloud shared file services is not limited to the OpenStack open source world. In fact, last month at the AWS Summit in San Francisco, Amazon announced its new shared file storage for Amazon EC2: the Amazon Elastic File System, also known as EFS. This new storage service joins the existing AWS storage portfolio of Amazon Simple Storage Service (S3) for object storage, Amazon Elastic Block Store (EBS) for block storage, and Amazon Glacier for archival, cold storage.

Amazon EFS provides standard file system semantics and is based on NFS v4, which allows many EC2 instances to access a file system at the same time, providing a common data source for a wide variety of workloads and applications shared across thousands of instances. It is designed for a broad range of use cases, such as home directories, content repositories, development environments, and big data applications. Data uploaded to EFS is automatically replicated across availability zones, and because EFS file systems are SSD-based, there should be few latency- or throughput-related problems with the service. As a file system as a service, Amazon EFS lets users create and configure file systems quickly with no minimum fee or setup cost; customers pay only for the storage used, with elastic capacity that automatically grows and shrinks as files are added and removed.
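Because EFS exposes standard NFS v4, attaching it from an EC2 instance is an ordinary mount. A sketch, with the mount target DNS name left as a placeholder:

sudo mkdir -p /mnt/efs
sudo mount -t nfs4 <mount-target-dns>:/ /mnt/efs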

Continue reading “The Age of Cloud File Services”

What’s Coming in OpenStack Networking for the Kilo Release

OpenStack Kilo, the 11th release of the open source project, was officially released in April, and now is a good time to review some of the changes we saw in the OpenStack Networking (Neutron) community during this cycle, as well as some of the key new networking features introduced in the project.

Scaling the Neutron development community

The Kilo cycle brings two major efforts meant to expand and scale the Neutron development community: core plugin decomposition and the advanced services split. These changes should not directly impact OpenStack users, but they are expected to reduce the code footprint, improve feature velocity, and ultimately bring faster innovation. Let’s take a look at each individually:

Neutron core plugin decomposition

Neutron, by design, has a pluggable architecture that allows a custom backend implementation of the Networking API. The plugin is a core piece of the deployment and acts as the “glue” between the logical API and the actual implementation. As the project evolved, more and more plugins were introduced, coming from open source projects and communities (such as Open vSwitch and OpenDaylight) as well as from various vendors in the networking industry (like Cisco, Nuage, Midokura, and others). At the beginning of the Kilo cycle, Neutron had dozens of plugins and drivers spanning core plugins, ML2 mechanism drivers, L3 service plugins, and L4-L7 service plugins for FWaaS, LBaaS, and VPNaaS – the majority of them included directly within the Neutron project repository. The amount of code to review across those drivers and plugins grew to the point where it no longer scaled. The expectation that core Neutron reviewers would review code they had no knowledge of, or could not test for lack of proper hardware or software setups, was not realistic. This also caused some frustration among the vendors themselves, who sometimes failed to get their plugin code merged on time.

Continue reading “What’s Coming in OpenStack Networking for the Kilo Release”

Driving in the Fast Lane – CPU Pinning and NUMA Topology Awareness in OpenStack Compute

The OpenStack Kilo release, extending upon efforts that commenced during the Juno cycle, includes a number of key enhancements aimed at improving guest performance. These enhancements give OpenStack Compute (Nova) greater knowledge of compute host layout and, as a result, enable smarter scheduling and placement decisions when launching instances. Administrators wishing to take advantage of these features can now create customized performance flavors to target specialized workloads, including Network Function Virtualization (NFV) and High Performance Computing (HPC).
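For example, an administrator can expose these placement hints through flavor extra specs. A sketch, with an illustrative flavor name:

# create a flavor for a latency-sensitive workload (4 GB RAM, 20 GB disk, 4 vCPUs)
nova flavor-create m1.large.pinned auto 4096 20 4

# pin guest vCPUs to dedicated host cores and confine the guest to one NUMA node
nova flavor-key m1.large.pinned set hw:cpu_policy=dedicated
nova flavor-key m1.large.pinned set hw:numa_nodes=1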

What is NUMA topology?

Historically, all memory on x86 systems was equally accessible to all CPUs in the system. This resulted in memory access times that were the same regardless of which CPU in the system was performing the operation and was referred to as Uniform Memory Access (UMA).

In modern multi-socket x86 systems, system memory is divided into zones (called cells or nodes) that are associated with particular CPUs. This type of division has been key to the increasing performance of modern systems, as focus has shifted from increasing clock speeds to adding more CPU sockets, cores, and – where available – threads. An interconnect bus provides connections between nodes, so that all CPUs can still access all memory. While the memory bandwidth of the interconnect is typically faster than that of an individual node, it can still be overwhelmed by concurrent cross-node traffic from many nodes. The end result is that while NUMA facilitates faster memory access for CPUs local to the memory being accessed, memory access for remote CPUs is slower.
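The NUMA layout of a given compute host is easy to inspect with standard Linux tools, for example:

# list nodes, their CPUs and memory, and relative access distances
numactl --hardware

# quick summary of NUMA node count and CPU assignment
lscpu | grep -i numa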

Continue reading “Driving in the Fast Lane – CPU Pinning and NUMA Topology Awareness in OpenStack Compute”

Red Hat Enterprise Linux OpenStack Platform 6: SR-IOV Networking – Part II: Walking Through the Implementation

In the previous blog post in this series we looked at what single root I/O virtualization (SR-IOV) networking is all about, and we discussed why it is an important addition to Red Hat Enterprise Linux OpenStack Platform. In this second post we provide a more detailed overview of the implementation, some thoughts on the current limitations, and a look at the enhancements being worked on in the OpenStack community.

Note: this post is not intended to provide a full end-to-end configuration guide. Customers with an active subscription are welcome to visit the official article covering SR-IOV networking in Red Hat Enterprise Linux OpenStack Platform 6 for a complete procedure.

 

Setting up the Environment

In our small test environment we used two physical nodes: one serves as a Compute node for hosting virtual machine (VM) instances, and the other serves as both the OpenStack Controller and Network node. Both nodes run Red Hat Enterprise Linux 7.
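As a taste of what the full procedure involves, the key pieces are whitelisting the virtual functions on the Compute node and enabling the SR-IOV mechanism driver in Neutron. A hedged sketch only; the device names, VLAN IDs, and network names are illustrative, and the official article linked above has the complete steps:

# /etc/nova/nova.conf on the Compute node: expose the VFs on enp5s0f1
pci_passthrough_whitelist = {"devname": "enp5s0f1", "physical_network": "physnet1"}

# /etc/neutron/plugins/ml2/ml2_conf.ini on the Controller
[ml2]
mechanism_drivers = openvswitch,sriovnicswitch

# create a VLAN network and an SR-IOV port, then boot an instance with it
neutron net-create sriov-net --provider:physical_network physnet1 \
    --provider:network_type vlan --provider:segmentation_id 100
neutron subnet-create sriov-net 192.168.10.0/24
neutron port-create sriov-net --binding:vnic_type direct
nova boot --flavor m1.small --image rhel7 --nic port-id=<port-uuid> sriov-vm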

Continue reading “Red Hat Enterprise Linux OpenStack Platform 6: SR-IOV Networking – Part II: Walking Through the Implementation”