Containerize OpenStack with Docker

Written by: Ryan Hallisey

Much of today's buzz in the cloud market centers on Docker and on support for launching containers on top of an existing platform. What is often overlooked, however, is using Docker to improve the deployment of the infrastructure platforms themselves; in other words, the ability to ship your cloud in containers.



Ian Main and I took hold of a project within the OpenStack community that addresses this question: Project Kolla. As one of the founding members and core developers of the project, I figured we should start by using Kolla's containers to get this work off the ground. We began by deploying containers one by one in an attempt to get a functioning stack. Unfortunately, not all of Kolla's containers were in great shape, and they were designed to be deployed by Kubernetes. We decided to get the containers working first and deal with how they're managed later. In the short term, we used a bash script to launch our containers, but it got messy: Kubernetes had been opening ports to the host and declaring environment variables for the containers, and we needed to do the same. Eventually, we upgraded the design to use an environment file populated by a script, which proved far more effective. This design was adopted by Kolla and is still in use today[1].
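To give a sense of the design (the variable names below are illustrative, not Kolla's exact ones), the script wrote out a single environment file, and every container launch consumed that same file:

#!/bin/sh
# a minimal sketch of the env-file approach; the variable names and
# values here are illustrative, not the exact ones Kolla uses
cat > openstack.env <<EOF
PUBLIC_IP=192.168.100.10
MARIADB_ROOT_PASSWORD=kolla
KEYSTONE_ADMIN_TOKEN=kolla
EOF

# each container is then launched against the same file:
docker run -d --net=host --env-file=openstack.env kollaglue/centos-rdo-keystone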

With our setup script intact, we started a hierarchical descent through the OpenStack services, starting with MariaDB, RabbitMQ, and Keystone. Kolla's containers were in great shape for these three services, and we were able to get them working relatively quickly. Glance was next, and it proved to be quite a challenge. We quickly learned that the Glance API and Keystone containers were causing one another to fail.


The culprit was that the Glance API and Keystone containers were racing to see which could create the admin user first. Oddly enough, these containers worked under Kubernetes, but I then realized that Kubernetes restarts containers until they succeed, masking the race condition we were seeing. To get around this, we made Glance and the rest of the services wait for Keystone to be active before starting. Later, we pushed this design into Kolla and learned that Docker has a restart flag that forces containers to restart if there is an error[2]. We added the restart flag to our design so that containers would be independent of one another.
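For illustration, here is roughly what the two designs look like side by side (the endpoint and image names are illustrative, not the exact Kolla code):

#!/bin/sh
# a sketch of the original workaround: block until Keystone's API
# answers before launching a dependent service
until curl -sf http://127.0.0.1:5000/v2.0/ > /dev/null; do
    sleep 1
done
docker run -d --env-file=openstack.env kollaglue/centos-rdo-glance-api

# the later design drops the polling and lets Docker retry on failure:
docker run -d --restart=always --env-file=openstack.env kollaglue/centos-rdo-glance-api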

The most challenging service to containerize was Nova. Nova presented a unique challenge not only because it comprised the largest number of containers, but because it required the use of super privileged containers. We started off using Kolla's containers, but quickly learned there were many components missing. Most significantly, the Nova Compute and Libvirt containers were not mounting the correct host directories, exposing one of the biggest hurdles in containerizing Nova: persistent data, and making sure instances still exist after you kill the container. For that to work, Nova Compute and Libvirt needed to mount /var/lib/nova and /var/lib/libvirt from the host into the container, so that the data for the instances is stored on the host and not in the container[3].

 

echo Starting nova compute
docker run -d --privileged \
           --restart=always \
           -v /sys/fs/cgroup:/sys/fs/cgroup \
           -v /var/lib/nova:/var/lib/nova \
           -v /var/lib/libvirt:/var/lib/libvirt \
           -v /run:/run \
           -v /etc/libvirt/qemu:/etc/libvirt/qemu \
           --pid=host --net=host \
           --env-file=openstack.env kollaglue/centos-rdo-nova-compute-nova:latest

 

A second issue we encountered when trying to get the Nova Compute container working was that we were using an outdated version of Nova. The Nova Compute container was built from Fedora 20 packages, while the other services used Fedora 21. This was our first taste of performing an upgrade with containers. To fix the problem, all we had to do was change where Docker pulled the packages from and rebuild the container, effectively a one-line change in the Dockerfile:

FROM fedora:20
MAINTAINER Kolla Project (https://launchpad.net/kolla)

to

FROM fedora:21
MAINTAINER Kolla Project (https://launchpad.net/kolla)

OpenStack services have independent lifecycles, which makes rolling upgrades and downgrades difficult. Containers can bridge this gap by providing an easy way to upgrade and downgrade your stack.
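For example, upgrading a single service becomes a matter of swapping the image underneath the container, a sketch of which (with illustrative container and image names) looks like:

# pull the rebuilt image and recreate the container, leaving the
# rest of the stack running untouched
docker pull kollaglue/centos-rdo-glance-api:latest
docker stop glance_api
docker rm glance_api
docker run -d --name glance_api --restart=always \
           --env-file=openstack.env kollaglue/centos-rdo-glance-api:latest
# a downgrade is the same operation with an older image tag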

Once we completed our maintenance on the Kolla containers, we turned our focus to TripleO[4]. TripleO is a project in the OpenStack community that aims to install and manage OpenStack. The name means "OpenStack on OpenStack": it deploys a so-called undercloud, and uses that OpenStack setup to deploy an overcloud, also known as the user cloud.

Our goal was to use the undercloud to deploy a containerized overcloud on bare metal. In our design, we chose to deploy our overcloud on top of Red Hat Enterprise Linux Atomic Host[5]. Atomic is a bare-bones Red Hat Enterprise Linux-based operating system designed to run containers. This was a perfect fit because it's a spare, simple environment with a nice set of tools for launching containers.

 

[heat-admin@t1-oy64mfeu2t3-0-zsjhaciqzvxs-controller-twdtywfbcxgh ~]$ atomic --help
Atomic Management Tool

positional arguments:
  {host,info,install,stop,run,uninstall,update}
                        commands
    host                execute Atomic host commands
    info                display label information about an image
    install             execute container image install method
    stop                execute container image stop method
    run                 execute container image run method
    uninstall           execute container image uninstall method
    update              pull latest container image from repository

optional arguments:
  -h, --help            show this help message and exit

 

Next, we had help from Rabi Mishra in creating a Heat hook that allows Heat to orchestrate container deployment. Since we were on Red Hat Enterprise Linux Atomic Host, the hook itself ran in a container and started the Heat agents, allowing Heat to communicate with Docker[6]. Now we had all the pieces we needed.
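Something along these lines conveys the idea (the image name and exact flags are my illustration, not necessarily the hook's actual invocation):

# a sketch: the agents container runs privileged with the host's
# Docker socket mounted in, so the Heat hook inside it can start
# and stop the service containers on the host
docker run -d --privileged --net=host \
           -v /var/run/docker.sock:/var/run/docker.sock \
           heat-docker-agents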

In order to integrate our container work with TripleO, it was easiest to copy Puppet's overcloud deployment implementation and apply our work to it. For our environment, we used devtest, the TripleO developer environment, and started to build a new Heat template. One of the biggest differences between containers and Puppet was that Puppet required a lot of setup and configuration to make sure dependencies were resolved and services were properly configured. We didn't need any of that. With Puppet, the dependency list looked like this[7]:

 

puppetlabs-apache
puppet-ceph

44 packages later…

puppet-openstack_extras
puppet-tuskar

 

With Docker, we were able to replace all of that with:

 

atomic install kollaglue/centos-rdo-<service>

 

We were able to reuse the majority of the existing environment, but starting services was now significantly simpler, as shown in the sketch below.
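In practice, standing up the containerized services amounted to a loop like the following (the image names follow the kollaglue/centos-rdo-<service> pattern above; the exact service list here is illustrative):

# a sketch of bringing up the core services on Atomic
for service in mariadb rabbitmq keystone glance-api nova-compute; do
    atomic install kollaglue/centos-rdo-${service}
done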

Unfortunately, we were unable to get results for some time because we struggled to deploy a bare metal Red Hat Enterprise Linux Atomic Host instance. After consulting Lucas Gomes on Red Hat's Ironic (bare metal deployment service) team, we learned there was an easier way to accomplish what we were trying to do: he pointed us toward a new feature in Ironic that added support for full image deployment[8]. Although there was a bug in Ironic when using the new feature, we fixed it and finally saw our Red Hat Enterprise Linux Atomic Host running. Past this hurdle, we could create images and add users, but Nova Compute and Libvirt still didn't work. The problem was that Red Hat Enterprise Linux Atomic Host wasn't loading the kernel modules for kvm. On top of that, Libvirt needed proper permission to access /dev/kvm and wasn't getting it.

 

#!/bin/sh

# give libvirt access to /dev/kvm before starting the daemon
chmod 660 /dev/kvm
chown root:kvm /dev/kvm

echo "Starting libvirtd."
exec /usr/sbin/libvirtd
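Loading the missing kvm modules on the host was the other half of the fix; a minimal sketch (kvm_intel assumes Intel hardware, AMD hosts would load kvm_amd instead):

# load the kvm kernel modules on the Atomic host
modprobe kvm
modprobe kvm_intel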

 

Upon fixing these issues, we could finally spawn instances. Later, these changes were adopted by Kolla because they addressed a unique case that could cause Libvirt to fail[9].

To summarize, we created a containerized OpenStack solution inside the TripleO installer project, using the containers from the Kolla project. We mirrored the TripleO workflow by using the undercloud (management cloud) to deploy most of the core services in the overcloud (user cloud), but now those services are containerized. The services we deployed were Keystone, Glance, and Nova, with services like Neutron, Cinder, and Heat soon to follow. Our new solution uses Heat (the orchestration service) to deploy the containerized OpenStack services onto Red Hat Enterprise Linux Atomic Host, and it plugs right into the tripleo-heat-templates. Normally, Puppet is used to deploy an overcloud, but now we've proven you can use containers. What's really unique is that you can now shop for your configuration in the Docker Registry instead of going through Puppet to set up your services: you pull down a container where your services come with the configuration you need. Through our work, we have shown that containers are an alternative deployment method within TripleO, one that can simplify deployment and add choice about how your cloud is installed.

The benefits of using Docker for a regular application are the same as having your cloud run in containers: reliability, portability, and easy lifecycle management. With containers, lifecycle management greatly improves on TripleO's existing solution. Upgrading and downgrading an OpenStack service becomes far simpler, creating faster turnaround times so that your cloud is always running the latest and greatest. Ultimately, this solution provides an additional method within TripleO to manage the cloud's upgrades and downgrades, supplementing the solution TripleO currently offers.

Overall, integrating with TripleO works really well because OpenStack provides powerful services to assist in container deployment and management. Specifically, TripleO is advantageous because of services like Ironic (the bare metal provisioning service) and Heat (the orchestration service), which provide a strong management backbone for your cloud. Containers are an integral piece of this system, as they provide a simple and granular way to perform lifecycle management for your cloud. From my work, it is clear that the cohesive relationship between containers and TripleO opens a new and improved avenue for deploying the cloud, letting you get your cloud working the way you see fit.

TripleO is a fantastic project, and with the integration of containers I hope to energize and continue building the community around it. Using our integration as proof of the project's capabilities, we have shown that TripleO provides an excellent management infrastructure underneath your cloud, one that allows projects to be properly managed and to grow.

 

[1] https://github.com/stackforge/kolla/commit/dcb607d3690f78209afdf5868dc3158f2a5f4722

[2] https://docs.docker.com/reference/commandline/cli/#restart-policies

[3] https://github.com/stackforge/kolla/blob/master/docker/nova-compute/nova-compute-data/Dockerfile#L4-L5

[4] https://www.rdoproject.org/Deploying_RDO_using_Instack

[5] http://www.projectatomic.io/

[6] https://github.com/rabi/heat-templates/blob/boot-config-atomic/hot/software-config/heat-docker-agents/Dockerfile

[7] http://git.openstack.org/cgit/openstack/TripleO-puppet-elements/tree/elements/puppet-modules/source-repository-puppet-modules

[8] https://blueprints.launchpad.net/ironic/+spec/whole-disk-image-support

[9] https://github.com/stackforge/kolla/commit/08bd99a50fcc48539e69ff65334f8e22c4d25f6f

OpenStack Summit Vancouver: Agenda Confirms 40+ Red Hat Sessions

As this Spring’s OpenStack Summit in Vancouver approaches, the Foundation has now posted the session agenda, outlining the final schedule of events. I am very pleased to report that Red Hat and eNovance have more than 40 approved sessions that will be included in the weeks agenda, with a few more approved as joint partner sessions, and even a few more as waiting alternates.

This vote of confidence confirms that Red Hat and eNovance remain in sync with the current topics, projects, and technologies the OpenStack community and customers are most interested in and concerned with.

Red Hat is also a headline sponsor in Vancouver this spring, along with Intel, SolidFire, and HP, and will have a dedicated keynote presentation along with the 40+ accepted sessions. To learn more about Red Hat's accepted sessions, have a look at the details below. Be sure to visit us at the sessions below and at our booth (#H4). We look forward to seeing you in Vancouver in May!

For more details on each session, click on the title below:

Continue reading “OpenStack Summit Vancouver: Agenda Confirms 40+ Red Hat Sessions”

Accelerating OpenStack adoption: Red Hat Enterprise Linux OpenStack Platform 6!

On Tuesday February 17th, we announced the general availability of Red Hat Enterprise Linux OpenStack Platform 6, Red Hat’s fourth release of the commercial OpenStack offering to the market.

Based on the community OpenStack "Juno" release and co-engineered with Red Hat Enterprise Linux 7, the enterprise-hardened Version 6 is aimed at accelerating the adoption of OpenStack among enterprise businesses, telecommunications companies, Internet service providers (ISPs), and public cloud hosting providers.

Since the first version was released in July 2013, the "design principles" of the Red Hat Enterprise Linux OpenStack Platform product offering have been:

Continue reading “Accelerating OpenStack adoption: Red Hat Enterprise Linux OpenStack Platform 6!”

OpenStack Summit Paris: Agenda Confirms 22 Red Hat Sessions

As this Fall’s OpenStack Summit in Paris approaches, the Foundation has posted the session agenda, outlining the schedule of events. With an astonishing 1,100+ sessions submitted for review, I was happy to see that Red Hat and eNovance have a combined 22 sessions that are included in the weeks agenda, with two more as alternates.

As I've mentioned in the past, I really respect the way the Foundation goes about setting the agenda: essentially deferring to the attendees and participants themselves, via a vote. Through this voting process, the subjects that are top of mind and of most interest are brought to the surface, resulting in a very current and cutting-edge set of discussions. And with so many accepted sessions, it again confirms that Red Hat, and now eNovance, are involved in some of the most current projects and technologies that the community is most interested in.

Continue reading “OpenStack Summit Paris: Agenda Confirms 22 Red Hat Sessions”

Juno Updates – Security

Written by Nathan Kinder

 

There is a lot of development work going on in Juno in security related areas. I thought it would be useful to summarize what I consider to be some of the more notable efforts that are under way in the projects I follow.

Keystone

Nearly everyone I talk with who is using Keystone in anger is integrating it with an existing identity store such as an LDAP server. Using the SQL identity backend is really a poor identity management solution, as it supports only basic password authentication, lacks password policy support, and offers fairly limited user management capabilities. Configuring Keystone to use an existing identity store has its challenges, but some of the changes in Juno should make this easier. In Icehouse and earlier, Keystone can only use a single identity backend. This means that all regular users and service users must exist in the same identity backend. In many real-world scenarios, the LDAP server used for users and credentials is considered to be read-only by anything other than the normal user provisioning tools. A common problem is that the OpenStack service users are not wanted in the LDAP server. In Juno, it will be possible to configure Keystone to use multiple identity backends. This will allow a deployment to use an LDAP server for normal users and the SQL backend for service users. In addition, this should allow multiple LDAP servers to be used by a single Keystone instance when using Keystone Domains (which previously only worked with the SQL identity backend).
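As a rough sketch of how that gets wired up (the option names come from Keystone's domain-specific configuration support; the domain name and LDAP values here are illustrative):

# a sketch of /etc/keystone/keystone.conf enabling per-domain backends
[identity]
domain_specific_drivers_enabled = True
domain_config_dir = /etc/keystone/domains

# a sketch of /etc/keystone/domains/keystone.users.conf, giving the
# "users" domain its own LDAP backend (values illustrative)
[identity]
driver = keystone.identity.backends.ldap.Identity

[ldap]
url = ldap://ldap.example.com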

Continue reading “Juno Updates – Security”

Session Voting Now Open, for OpenStack Summit Paris!

The voting polls for speaking sessions at this fall's OpenStack Summit in Paris, France are now open to the public. This time around, it seems Red Hatters are looking to participate in more sessions than at any previous Summit, helping to share the innovation happening at Red Hat and in the greater community.

With an incredible number of sessions submitted this Summit, we've got quite a diverse selection for you to vote on, spanning from low-level core compute, networking, and storage sessions to plenty of customer success stories and lessons learned.


Each and every vote counts, so please have a look through the Red Hat submitted sessions below and vote for your favorites! If you're new to the voting process, you must sign up for a free OpenStack Foundation membership and cast your votes. Visit the Foundation site here to sign up for free!

Once you’ve signed up as a member, click on the titles below to cast your vote. Remember, voting closes on Wednesday August 6th.

Have a look at our sessions here and cast your vote! I’ve sorted by category:

Storage

  1. OpenStack Storage APIs and Ceph: Existing Architectures and Future Features
  2. Deployment Best Practices for OpenStack Software-Defined Storage with Ceph
  3. What’s New in Ceph?
  4. OpenStack and Ceph – Match Made in the Cloud
  5. Large Scale OpenStack Block Storage with Containerized Ceph
  6. Red Hat Training: Using Ceph and Red Hat Storage Server in Cinder
  7. Volume Retyping and Cinder Backend Configuring
  8. Using OpenStack Swift for Extreme Data Durability
  9. Ask the Experts: Challenges for OpenStack Storage
  10. Deploying Red Hat Block and Object Storage with Mellanox and Red Hat Enterprise Linux OpenStack Platform
  11. Vanquish Performance Bottlenecks and Deliver Resilient, Agile Infrastructure, with All Flash Storage and OpenStack
  12. GlusterFS: The Scalable Open Source Backend for Manila
  13. Delivering Elastic Big Data Analytics with OpenStack Sahara and Distributed Storage
  14. Deploying Swift on a Scale-Out File System

Continue reading “Session Voting Now Open, for OpenStack Summit Paris!”

OpenStack Summit Session Voting Closes Soon – Your Vote Counts!

With the voting polls open for the past week, the OpenStack Foundation is collecting votes for all sessions at this spring's OpenStack Summit in Atlanta. Red Hat is doing its part to contribute as many innovative and useful sessions to the agenda as possible. With a variety of sessions submitted, from low-level discussions on network routing and storage all the way through real-world success stories that share experiences and lessons learned from deploying an OpenStack cloud, we've got a great lineup to offer you.

Each and every vote counts, so if you haven’t already voted, have a look through all the Red Hat submitted sessions and vote for your favorites! Just click on the title to cast your vote. Remember, voting closes on Monday, March 3rd.

Continue reading “OpenStack Summit Session Voting Closes Soon – Your Vote Counts!”
