Red Hat Confirms More Than 40 Accepted Sessions at OpenStack Summit Barcelona

This Fall’s 2016 OpenStack Summit in Barcelona, Spain is shaping up to be a fantastic event. After some challenging issues with the voting system (which prevented direct URLs to each session), the Foundation has posted the final session agenda detailing the entire week’s schedule of events. Once again, I am thrilled to see the voting results of the greater community, with Red Hat sharing over 40 sessions of technology overviews and deep dives around OpenStack services for containers, storage, networking, compute, network functions virtualization (NFV), and much more.

As a Premier sponsor this Fall, Red Hat also has a full-day breakout room, where we plan to share additional product and strategy sessions. To learn more about Red Hat’s accepted sessions, have a look at the details below. We’ll add the agenda details of our breakout soon! Also, be sure to visit us at our Marketplace booth to meet the team and check out one of our live demonstrations. The Marketplace kicks off on Monday evening during the booth crawl, 5:00–7:00pm. Finally, we’ll have several Red Hat engineers, product managers, consultants, and executives in attendance, so be sure to talk to your Red Hat representative to schedule an in-person meeting while there.

And in case you haven’t registered yet, visit our landing page for a discounted registration code to help get you to the event. We look forward to seeing you all again in Spain this October!

For more details on each session, click on the title below:

Continue reading “Red Hat Confirms More Than 40 Accepted Sessions at OpenStack Summit Barcelona”

Learn what’s coming in OpenStack “Mitaka”

As the fastest growing open source project in history, OpenStack releases fairly rapidly, with new releases twice per year. Each time, around April and October, a plethora of new features and functions graduate from incubated development status to fully-baked features accepted into the “core” OpenStack release. Rapidly approaching is the new “Mitaka” release, the 13th release of OpenStack, filled with some great new features.

To best share all the updates, we’ve put together a webinar to explain everything in much greater detail. This webinar provides you the opportunity to hear from our senior product managers, as well as ask questions about anything that might pique your interest. To give you an idea of what exactly will be covered, here are some key highlights we’ll be talking about:

Compute

  • Support for Real-time KVM compute nodes and custom CPU thread policies for use by latency-sensitive NFV guest applications.
  • Improvements to the reliability of live migration to assist with application management and resiliency.
  • Progress update on Cells V2 implementation for improved scalability.

Storage

  • Support for rolling upgrades in Cinder, through backwards compatible RPC and versioned object pinning.
  • A new API for extending attached volumes was introduced in Cinder, as well as new download/upload support for Cinder volumes in the Glance repository.
  • New Disaster Recovery Share-Replication API support in Manila and improved Cinder Replication v2.1 API.

Networking

  • Continuing the work on distributed virtual routers (DVR)
  • Tenant resources cleanup
  • Improved Security Groups performance

In addition we’ll be sure to cover the state of key emerging projects including Barbican, Freezer, Manila, and Magnum, and provide some initial thoughts on what we might expect to see as we look forward to the “Newton” release cycle.

Don’t miss this “What’s New” update about the Mitaka release from two of our senior product managers, Steve Gordon and Sean Cohen. To learn more and save your seat for this webinar, please register here.

 

Red Hat confirms over 35 sessions at OpenStack Summit, Austin – Have a look!

As this Spring’s 2016 OpenStack Summit in Austin, TX nears, the Foundation has posted the final session agenda, outlining the week’s schedule of events. I am pleased to see that based on your voting, Red Hat continues to remain in sync with the current topics, projects, and technologies the OpenStack community and customers are most interested in. With the expectation of the largest attendee crowd yet, and some exciting advancements around containers, storage, networking, compute, and more, we look forward to sharing the 35+ accepted sessions, workshops, and BoFs that will be included in the week’s agenda.

Red Hat is a Headline sponsor in Austin this Spring, and along with the general sessions, workshops, and breakout track, Chris Wright, VP of Software Engineering, will give a keynote presentation and update on our OpenStack technologies on Monday, April 25th, during the main keynote segment between 9:00 and 10:45am. If you’re planning to attend Jonathan’s main keynote on Monday, you’ll be able to catch Chris’s keynote as well. To learn more about Red Hat’s accepted sessions, have a look at the details below. Be sure to visit us at the sessions below or at our booth in the Marketplace, which opens Monday evening during the booth crawl, 6:00–7:30pm, or come by for some beer and sausage at Tuesday’s evening party event on Rainey St. Either way, we look forward to seeing you in Austin, Texas this April!

For more details on each session, click on the title below:

Continue reading “Red Hat confirms over 35 sessions at OpenStack Summit, Austin – Have a look!”

OpenStack Summit Tokyo – Day 3

Hello again from Tokyo, Japan where the third and final day of OpenStack Summit has come to a close. As with the previous days of the event, there was plenty of news, interesting sessions, great discussions on the show floor, and more. All would likely agree that the 11th OpenStack Summit was a rousing overall success!

As on day 1 and day 2 of the event, Red Hat led or co-presented several sessions. Starting us off today, Erwan Gallen, Red Hat’s OpenStack Technical Architect, participated in a panel and helped provide an Ambassador community report. Among other things, the group of OpenStack ambassadors reviewed several improvements made over the past six months, since the last OpenStack community release (Kilo), and shared many of their overall feelings and experiences about the community.

Mark McLoughlin, Red Hat’s OpenStack Technical Director, then gave an interesting talk entitled The Life and Times of an OpenStack Virtual Machine. Delving deeper than a simple, abstract narrative of initiating a Launch Instance in OpenStack’s dashboard, Mark detailed the technologies involved behind the scenes to allow for this. By the end of the session he had fully explained how OpenStack provides a running VM that a user can access via SSH.

Continue reading “OpenStack Summit Tokyo – Day 3”

OpenStack Summit Tokyo – Day 2

Hello again from Tokyo, Japan where the second day of OpenStack Summit has come to a close with plenty of news, interesting sessions, great discussion on the showfloor, and more.

Day two started this morning with a keynote session from Mark Collier, Chief Operating Officer of the OpenStack Foundation. Mark shared several statistics and details from the newly launched Liberty release, including the fact that Neutron had overtaken Nova as the OpenStack project with the most activity. He was followed by Kyle Mestery, the project team lead (PTL) for Neutron, who provided further details and also gave an update on Project Kuryr, a service that brings container networking to Neutron. Toshio Nishiyama, SVP of NTT Resonant, then explained how NTT uses OpenStack to power their popular Goo search engine and web portal, the third largest in Japan. Additional keynotes were delivered by Scott Crenshaw and Adrian Otto of Rackspace, Kang-Wong Lee from SK Telecom, Kentaro Sasaki and Neal Sato from Rakuten, Makoto Hasegawa from CyberAgent, Inc., and Angel Diaz from IBM and Jesse Proudman from Blue Box, an IBM company, who teamed up to deliver the final keynote of the morning.

Continue reading “OpenStack Summit Tokyo – Day 2”

OpenStack Summit Tokyo – Day 1

Kon’nichiwa from Tokyo, Japan where the 11th semi-annual OpenStack Summit is officially underway! This event has come a long way from its first gathering, more than five years ago, where 75 people gathered in Austin, Texas to learn about OpenStack in its infancy. That’s a sharp contrast with the 5,000+ people in attendance here in what marks Asia’s second OpenStack Summit.

The event kicked off this morning with Jonathan Bryce, the Executive Director of the OpenStack Foundation welcoming the crowd to the largest OpenStack Summit ever outside of North America. He was then followed on stage by technologists from various organizations focusing on real-world use cases, including Egle Sigler from Rackspace, and Takuya Ito from Yahoo, who shared their experience and use case with OpenStack at Yahoo Japan.

Continue reading “OpenStack Summit Tokyo – Day 1”

OpenStack Summit Tokyo – Day 0 (Pre-event)

I’ve always enjoyed traveling to Tokyo, Japan, as the people are always so friendly and willing to help. Whether it’s finding my way through the Narita airport or just trying to find a place to eat, they’re always willing to help – even with the language barrier. And each time I visit, I see something new, learn another word (or two) in Japanese, and it all just seems new and exciting all over again. Add in the excitement and buzz of an OpenStack Summit and you’ve got a great week in Tokyo!

Since the official start of the OpenStack Summit is Tuesday, we’re mostly spending the day on Monday setting up and getting ready. The Red Hat team is working diligently to setup the booth, preparing our product demonstrations, and our session speakers are putting the final touches on their presentations.

Continue reading “OpenStack Summit Tokyo – Day 0 (Pre-event)”

Red Hat and Lenovo: More Choice, More Clouds

In August 2015, we released Red Hat Enterprise Linux OpenStack Platform 7, bringing some of the latest innovations of OpenStack to the enterprise in a hardened, production-ready, and simple-to-deploy solution. The latest version of Red Hat’s Infrastructure-as-a-Service (IaaS) offering, based on the security and reliability of the world’s leading enterprise Linux platform, Red Hat Enterprise Linux OpenStack Platform 7 delivers a host of enhancements, including:

  • A new orchestration and management tool, Red Hat Enterprise Linux OpenStack Platform director;
  • Improved network flexibility with Neutron;
  • Enhanced object and block storage functionality with integrated Red Hat Ceph Storage Server; and
  • A fully supported bare-metal deployment service (Ironic).

Continue reading “Red Hat and Lenovo: More Choice, More Clouds”

Red Hat Confirms Speaking Sessions at OpenStack Summit Tokyo

As this Fall’s OpenStack Summit in Tokyo approaches, the Foundation has posted the session agenda, outlining the final schedule of events. I am happy to report that Red Hat has nearly 20 sessions that will be included in the week’s agenda, along with a few more as waiting alternates. With the limited space and shortened event this time around, I am pleased to see that Red Hat continues to remain in sync with the current topics, projects, and technologies the OpenStack community and customers are most interested in.

Red Hat is a Premier sponsor in Tokyo this Fall and will have a dedicated sponsor presentation, along with our accepted sessions. To learn more about Red Hat’s accepted sessions, have a look at the details below. Be sure to visit us at the sessions below and at our booth (P7). We look forward to seeing you in Tokyo in October!

For more details on each session, click on the title below:

Continue reading “Red Hat Confirms Speaking Sessions at OpenStack Summit Tokyo”

Containerize OpenStack with Docker

Written by: Ryan Hallisey

Today in the cloud space, a lot of buzz in the market stems from Docker and providing support for launching containers on top of an existing platform. However, what is often overlooked is the use of Docker to improve deployment of the infrastructure platforms themselves; in other words, the ability to ship your cloud in containers.



 

Ian Main and I took hold of a project within the OpenStack community to address this unanswered question: Project Kolla. Being one of the founding members and core developers of the project, I figured we should start by using Kolla’s containers to get this work off the ground. We began by deploying containers one by one in an attempt to get a functioning stack. Unfortunately, not all of Kolla’s containers were in great shape, and they were designed to be deployed by Kubernetes. First, we decided to get the containers working, then deal with how they’re managed later. In the short term, we used a bash script to launch our containers, but it got messy: Kubernetes was opening up ports to the host and declaring environment variables for the containers, and we needed to do the same. Eventually, we upgraded the design to use an environment file populated by a script, which proved to be more effective. This design was adopted by Kolla and is still being used today[1].
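That environment-file pattern can be sketched roughly as follows; the variable names and values here are illustrative placeholders, not Kolla’s actual settings:

```shell
#!/bin/sh
# Sketch of the env-file design described above. The variable names and
# values are illustrative placeholders, not Kolla's actual settings.

# A setup script populates a single environment file once...
write_env_file() {
    cat > openstack.env <<EOF
MARIADB_SERVICE_HOST=192.0.2.10
RABBITMQ_SERVICE_HOST=192.0.2.10
KEYSTONE_ADMIN_TOKEN=changeme
EOF
}

write_env_file

# ...and every container launch reads it, instead of each bash script
# repeating a pile of -e VAR=value flags:
#   docker run -d --env-file=openstack.env kollaglue/centos-rdo-keystone
```

Centralizing the settings in one file keeps the launch commands short and lets you change an address or credential in a single place.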

With our setup script intact, we started a hierarchical descent through the OpenStack services, starting with MariaDB, RabbitMQ, and Keystone. Kolla’s containers were in great shape for these three services, and we were able to get them working relatively quickly. Glance was next, and it proved to be quite a challenge. We quickly learned that the Glance API container and Keystone were causing one another to fail.


 

The culprit was that the Glance API and Keystone containers were racing to see which could create the admin user first. Oddly enough, these containers worked under Kubernetes, but I then realized that Kubernetes restarts containers until they succeed, masking the race condition we were seeing. To get around this, we made Glance and the rest of the services wait for Keystone to be active before starting. Later, we pushed this design into Kolla, and learned that Docker has a restart flag that will force containers to restart if there is an error[2]. We added the restart flag to our design so that containers remain independent of one another.
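A minimal version of that “wait for Keystone” guard might look like this; the probe command and retry budget are assumptions for illustration:

```shell
#!/bin/sh
# Sketch of the "wait for the service to be active" guard described above.
# The probe command and retry budget are illustrative assumptions.

wait_for() {
    # Retry a probe command once per second until it succeeds,
    # or fail after $2 attempts (default 30).
    probe=$1
    retries=${2:-30}
    while ! $probe >/dev/null 2>&1; do
        retries=$((retries - 1))
        [ "$retries" -gt 0 ] || return 1
        sleep 1
    done
}

# e.g. block until Keystone answers, then launch Glance:
#   wait_for "curl -sf http://127.0.0.1:5000/v2.0" && \
#       docker run -d --env-file=openstack.env kollaglue/centos-rdo-glance-api
```

Docker’s restart policy, referenced above, reaches a similar end from the other direction: a container that fails because its dependency is not yet up simply gets restarted until it succeeds.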

The most challenging service to containerize was Nova. Nova presented a unique challenge not only because it was made up of the largest number of containers, but because it required the use of super-privileged containers. We started off using Kolla’s containers, but quickly learned there were many components missing. Most significantly, the Nova Compute and Libvirt containers were not mounting the correct host directories, exposing us to one of the biggest hurdles when containerizing Nova: persisting data and making sure instances still exist after you kill the container. In order for that to work, Nova Compute and Libvirt needed to mount /var/lib/nova and /var/lib/libvirt from the host into the container. That way, the data for the instances is stored on the host and not in the container[3].

 

echo Starting nova compute
docker run -d --privileged \
    --restart=always \
    -v /sys/fs/cgroup:/sys/fs/cgroup \
    -v /var/lib/nova:/var/lib/nova \
    -v /var/lib/libvirt:/var/lib/libvirt \
    -v /run:/run \
    -v /etc/libvirt/qemu:/etc/libvirt/qemu \
    --pid=host --net=host \
    --env-file=openstack.env kollaglue/centos-rdo-nova-compute-nova:latest

 

A second issue we encountered when trying to get the Nova Compute container working was that we were using an outdated version of Nova. The Nova Compute container was using Fedora 20 packages, while the other services were using Fedora 21. This was our first taste of having to do an upgrade using containers. To fix the problem, all we had to do was change where Docker pulled the packages from and rebuild the container, effectively a one line change in the Dockerfile:

FROM fedora:20
MAINTAINER Kolla Project (https://launchpad.net/kolla)

to:

FROM fedora:21
MAINTAINER Kolla Project (https://launchpad.net/kolla)

OpenStack services have independent lifecycles, making it difficult to perform rolling upgrades and downgrades. Containers can bridge this gap by providing an easy way to handle upgrading and downgrading your stack.
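As a rough sketch of that upgrade/downgrade flow (the image and container names are hypothetical, and DOCKER is parameterized so the commands can be dry-run):

```shell
#!/bin/sh
# Rough sketch of a per-service container upgrade, as described above.
# Image and container names are hypothetical; set DOCKER=echo to dry-run.
DOCKER=${DOCKER:-docker}

upgrade_service() {
    name=$1
    image=$2
    $DOCKER pull "$image"    # fetch the new (or old) image tag
    $DOCKER stop "$name"     # stop and remove the running container
    $DOCKER rm "$name"
    $DOCKER run -d --name "$name" --env-file=openstack.env "$image"
}

# Upgrading and downgrading are the same operation with a different tag:
#   upgrade_service keystone kollaglue/centos-rdo-keystone:latest
#   upgrade_service keystone kollaglue/centos-rdo-keystone:previous
```

Because each service lives in its own container, one service can be rolled forward or back without touching the rest of the stack.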

Once we completed our maintenance on the Kolla containers, we turned our focus to TripleO[4]. TripleO is a project in the OpenStack community that aims to install and manage OpenStack. The name TripleO means OpenStack On OpenStack: it deploys a so-called undercloud, then uses that OpenStack setup to deploy an overcloud, also known as the user cloud.

Our goal was to use the undercloud to deploy a containerized overcloud on bare metal. In our design, we chose to deploy our overcloud on top of Red Hat Enterprise Linux Atomic Host[5]. Atomic is a bare-bones Red Hat Enterprise Linux-based operating system that is designed to run containers. This was a perfect fit because it’s a bare, simple environment with a nice set of tools for launching containers.

 

[heat-admin@t1-oy64mfeu2t3-0-zsjhaciqzvxs-controller-twdtywfbcxgh ~]$ atomic --help

Atomic Management Tool

positional arguments:
  {host,info,install,stop,run,uninstall,update}
                        commands
    host                execute Atomic host commands
    info                display label information about an image
    install             execute container image install method
    stop                execute container image stop method
    run                 execute container image run method
    uninstall           execute container image uninstall method
    update              pull latest container image from repository

optional arguments:
  -h, --help            show this help message and exit

 

Next, we had help from Rabi Mishra in creating a Heat hook that would allow Heat to orchestrate container deployment. Since we were on Red Hat Enterprise Linux Atomic Host, the hook ran in a container and started the Heat agents, allowing Heat to communicate with Docker[6]. Now we had all the pieces we needed.

In order to integrate our container work with TripleO, it was best for us to copy Puppet’s overcloud deployment implementation and apply our work to it. For our environment, we used devtest, the TripleO developer environment, and started to build a new Heat template. One of the biggest differences between using containers and Puppet was that Puppet required a lot of setup and configuration to make sure dependencies were resolved and services were properly configured. We didn’t need any of that. With Puppet, the dependency list looked like[7]:

 

puppetlabs-apache
puppet-ceph

44 packages later…

puppet-openstack_extras
puppet-tuskar

 

With Docker, we were able to replace all of that with:

 

atomic install kollaglue/centos-rdo-<service>

 

We were able to use a majority of the existing environment, but now starting services was significantly simplified.

Unfortunately, we were unable to get results for some time because we struggled to deploy a bare metal Red Hat Enterprise Linux Atomic Host instance. After consulting Lucas Gomes on Red Hat’s Ironic (bare metal deployment service) team, we learned that there was an easier way to accomplish what we were trying to do. He pointed us toward a new feature in Ironic that added support for full-image deployment[8]. Although there was a bug in Ironic when using the new feature, we fixed it and got our Red Hat Enterprise Linux Atomic Host running. Now that we were past this, we could finally create images and add users, but Nova Compute and Libvirt still didn’t work. The problem was that Red Hat Enterprise Linux Atomic Host wasn’t loading the kernel modules for KVM. On top of that, Libvirt needed proper permission to access /dev/kvm and wasn’t getting it.

 

#!/bin/sh

chmod 660 /dev/kvm
chown root:kvm /dev/kvm

echo "Starting libvirtd."
exec /usr/sbin/libvirtd

 

Upon fixing these issues, we could finally spawn instances. Later, these changes were adopted by Kolla because they represented a unique case that could cause Libvirt to fail[9].

To summarize, we created a containerized OpenStack solution inside of the TripleO installer project, using the containers from the Kolla project. We mirrored the TripleO workflow by using the undercloud (management cloud) to deploy most of the core services in the overcloud (user cloud), but now those services are containerized. The services we used were Keystone, Glance, and Nova, with services like Neutron, Cinder, and Heat soon to follow. Our new solution uses Heat (the orchestration service) to deploy the containerized OpenStack services onto Red Hat Enterprise Linux Atomic Host, and can plug right into the TripleO-heat-templates. Normally, Puppet is used to deploy an overcloud, but now we’ve proven you can use containers. What’s really unique about this is that you can now shop for your configuration in the Docker Registry instead of having to go through Puppet to set up your services. This allows you to pull down a container where your services come with the configuration you need. Through our work, we have shown that containers are an alternative deployment method within TripleO that can simplify deployment and add choice in how your cloud is installed.

The benefits of using Docker for a regular application carry over to running your cloud in containers: reliability, portability, and easy lifecycle management. With containers, lifecycle management greatly improves on TripleO’s existing solution. Upgrading and downgrading an OpenStack service becomes far simpler, creating faster turnaround times so that your cloud is always running the latest and greatest. Ultimately, this solution provides an additional method within TripleO to manage the cloud’s upgrades and downgrades, supplementing the solution TripleO currently offers.

Overall, integrating with TripleO works really well because OpenStack provides powerful services to assist in container deployment and management. Specifically, TripleO is advantageous because of services like Ironic (the bare metal provisioning service) and Heat (the orchestration service), which provide a strong management backbone for your cloud. Containers are an integral piece of this system, as they provide a simple and granular way to perform lifecycle management for your cloud. From my work, it is clear that the cohesive relationship between containers and TripleO creates a new and improved avenue to deploy the cloud and get it working the way you see fit.

TripleO is a fantastic project, and with the integration of containers I’m hoping to energize and continue building the community around it. Using our integration as proof of the project’s capabilities, we have shown that TripleO provides an excellent management infrastructure underneath your cloud that allows projects to be properly managed and to grow.

 

[1] https://github.com/stackforge/kolla/commit/dcb607d3690f78209afdf5868dc3158f2a5f4722
[2] https://docs.docker.com/reference/commandline/cli/#restart-policies
[3] https://github.com/stackforge/kolla/blob/master/docker/nova-compute/nova-compute-data/Dockerfile#L4-L5
[4] https://www.rdoproject.org/Deploying_RDO_using_Instack
[5] http://www.projectatomic.io/
[6] https://github.com/rabi/heat-templates/blob/boot-config-atomic/hot/software-config/heat-docker-agents/Dockerfile
[7] http://git.openstack.org/cgit/openstack/TripleO-puppet-elements/tree/elements/puppet-modules/source-repository-puppet-modules
[8] https://blueprints.launchpad.net/ironic/+spec/whole-disk-image-support
[9] https://github.com/stackforge/kolla/commit/08bd99a50fcc48539e69ff65334f8e22c4d25f6f
