What’s new in OpenStack Liberty: webinar recap

by Steve Gordon, Sr. Technical Product Manager, Red Hat — October 7, 2015
and Sean Cohen, Principal Technical Product Manager, Red Hat

OpenStack “Liberty,” due for imminent release, represents the 12th release of the open source cloud computing platform for public and private clouds. Recent OpenStack releases have focused on improving stability and enhancing the operator experience. Liberty continues that trend, but it also brings new features worth considering.

On October 1st we provided a sneak peek into the highlights of OpenStack Liberty; if you missed out, you can now view the recording of the event on demand. As well as providing an overview of the highlights of the Liberty release, we also discussed the recent restructuring of OpenStack project governance, colloquially referred to as the “big tent”, and what it means for you as a consumer of OpenStack.

We also spent some time covering projects that are less widely deployed at this time and what the future might hold for them, including the Containers service (Magnum), the Shared File Systems service (Manila), and the Message service (Zaqar).


Features discussed in the “What’s new in OpenStack Liberty” webinar include:

    • Network quality of service (QoS): provides an extensible API and reference implementation for dynamically defining per-port and per-network QoS policies. This enables OpenStack tenant administrators to offer different service levels based on application needs and available bandwidth (see the examples following this list).
    • Role-based access control (RBAC) for networks: provides fine-grained permissions for sharing networks between tenants. Historically, OpenStack networks were either shared between all tenants (public) or not shared at all (private). Liberty now allows a specific set of tenants to attach instances to a given network, or even prevents tenants from creating networks, instead limiting access to pre-created networks corresponding to their assigned project(s).
    • Mark host down API enhancements: supports external high-availability solutions, including pacemaker, in the event of compute node failure. This new API call provides improved instance resiliency by giving external tools a faster path to notifying OpenStack Compute of a failure and initiating evacuation.
    • Dashboard support for database-as-a-service (Trove): subnet allocation, floating IP assignment, and volume migration will now be included and configurable through the graphical user interface (Horizon), providing easier day-to-day operational management for cloud users.
    • Generic volume migration: adds the ability to migrate workloads from iSCSI to non-iSCSI storage back ends, with more drivers, including Ceph RBD, able to perform migration.
    • Volume Replication API: Cinder now allows block level replication between storage back ends. This simplifies OpenStack disaster recovery by allowing administrators to enable volume replication and failover.
    • Nondisruptive backups: allows volumes to be backed up while they are still attached to instances by performing the backup from a temporarily attached snapshot. This eases backups for administrators and offers a less disruptive experience to end users.
    • New image signing and encryption: helps protect against image tampering by providing greater integrity through signing and signature validation of bootable images.
    • Convergence updates: Updates to OpenStack Orchestration (Heat) are aimed at making infrastructure updates easier to scale and more resilient to failures. As part of long-term work in this area, Liberty includes an (optional) mode for a persistent, per-resource state during stack updates. This provides improved fault tolerance, including the ability to recover from a failure in the orchestration engine during the update. In addition these changes provide the potential for work to be spread across multiple orchestration engine workers in a more granular way than was previously possible.
    • Experimental online schema changes: aimed at minimizing the downtime required when applying database schema changes during the upgrade process. Further planned work on this feature will apply the required database migrations while the services are still running (online).
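To make the first two features more concrete, here is a rough sketch of how the new QoS and network RBAC APIs are exercised from the neutron command line client. The policy name, bandwidth values, and IDs are illustrative placeholders, and the exact client syntax may vary with your client version:

   # Create a QoS policy with a bandwidth-limit rule, then apply it to a port
   $ neutron qos-policy-create bw-limiter
   $ neutron qos-bandwidth-limit-rule-create bw-limiter --max-kbps 3000 --max-burst-kbps 300
   $ neutron port-update <port-id> --qos-policy bw-limiter

   # Share a network with one specific tenant rather than with everyone
   $ neutron rbac-create --type network --action access_as_shared \
         --target-tenant <tenant-id> <network-id>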

For further information about the above features, watch the presentation online or review the slides.


Highly available virtual machines in RHEL OpenStack Platform 7

by Steve Gordon, Sr. Technical Product Manager, Red Hat — September 24, 2015

OpenStack provides scale and redundancy at the infrastructure layer to provide high availability for applications built for operation in a horizontally scaling cloud computing environment. It was designed for applications that are “designed for failure” and deliberately excluded features that would enable traditional enterprise applications, for fear of limiting its scalability and corrupting its initial goals. These traditional enterprise applications demand continuous operation and fast, automatic recovery in the event of an infrastructure-level failure. While an increasing number of enterprises look to OpenStack to provide the infrastructure platform for their forward-looking applications, they are also looking to simplify operations by consolidating their legacy application workloads on it as well.

As part of the On-Ramp to Enterprise OpenStack program, Red Hat, in collaboration with Intel, Cisco, and Dell, has been working on delivering a high availability solution for such enterprise workloads running on top of OpenStack. This work provides an initial implementation of the instance high availability proposal that we put forward in the past and is included in the recently released Red Hat Enterprise Linux OpenStack Platform 7.

In putting forward the original proposal, we posited that there are three key capabilities required of any solution endeavoring to provide workload high availability in a cloud or virtualization environment:

  • A monitoring capability to detect when a given compute node has failed and trigger handling of the failure.
  • A fencing capability to remove the relevant compute node from the environment.
  • A recovery capability to orchestrate the rescuing of instances from the failed compute node.

Rather than re-inventing the wheel inside the OpenStack projects themselves, it is possible to deploy and manage an OpenStack environment with these capabilities using traditional high availability tools such as Pacemaker, without compromising the scalability of the overall platform. This is the approach used to deliver instance-level high availability in RHEL OpenStack Platform 7. You can view a demonstration of the solution in action, as previously shown at Red Hat Summit in partnership with Dell and Intel, here.

In this implementation, monitoring is performed using the NovaCompute pacemaker resource agent, while fencing and recovery are handled by the fence_compute pacemaker fence agent and the NovaEvacuate resource agent. These three new components were co-engineered by the High Availability and OpenStack Compute teams at Red Hat and are provided in updated resource-agents and fence-agents packages for Red Hat Enterprise Linux 7.1.
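As a minimal sketch of how these agents might be wired together with pcs (the endpoint and credentials here are placeholders; see the Red Hat Knowledgebase article referenced at the end of this post for the supported procedure):

   # Record-only fencing resource that marks failed compute hosts in Nova
   $ pcs stonith create fence-nova fence_compute \
         auth-url=http://192.0.2.1:5000/v2.0 login=admin passwd=secret \
         tenant-name=admin record-only=1
   # Resource that triggers evacuation of instances away from fenced hosts
   $ pcs resource create nova-evacuate ocf:openstack:NovaEvacuate \
         auth_url=http://192.0.2.1:5000/v2.0 username=admin password=secret \
         tenant_name=admin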


In a traditional pacemaker deployment each node in a cluster runs the full stack of services for ensuring high availability, including pacemaker and corosync. The traditional HA setup, as delivered via the RHEL High Availability add-on, supports up to 16 nodes. In contrast, a typical OpenStack deployment has many hundreds, or even thousands, of compute nodes that need to be monitored. To close the scalability gap, the Red Hat HA team designed and developed, from the ground up, pacemaker_remote.

By using pacemaker_remote it is possible to continue adding compute nodes and connecting them to the Pacemaker cluster running on the OpenStack controller nodes without running into the 16-node limit, thus keeping all of the nodes in a single administrative domain. The compute nodes do not become full members of the cluster and do not need to run the full pacemaker or corosync stacks; instead they run just pacemaker_remote and integrate with the cluster as remote nodes.
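For illustration, registering a compute node with the cluster as a remote node might look like the following, where the node name and intervals are placeholder values:

   # Integrate a compute node via pacemaker_remote rather than full membership
   $ pcs resource create overcloud-compute-0 ocf:pacemaker:remote \
         reconnect_interval=60 op monitor interval=20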

This eases the process of scaling out the compute cluster while still providing high availability functions, including monitoring compute nodes for failures and automating recovery of the virtual machines running on them when failures occur. To do this, the Pacemaker cluster running on the controller nodes monitors pacemaker_remoted on each compute node to confirm it is “alive”. In turn, on the compute node itself, pacemaker_remoted monitors the state of a number of services including the Neutron and Ceilometer agents, Libvirt, and of course the nova-compute service itself. In the event of an issue being detected in one of these services, pacemaker_remote will endeavour to recover it independently. If this fails, however, or if pacemaker_remote stops responding entirely, fencing and recovery operations are triggered.


When a compute node fails, Pacemaker powers it off using fence_ipmilan (other fencing mechanisms will be supported in the future). While the node is powering down, the fence_compute fence agent loops, waiting for Nova to also recognize that the failed host is down. This is necessary because OpenStack Compute (Nova) will not let an evacuation be initiated until it recognizes that the node being evacuated is down. In the near future, it will be possible for the fence agent to use the force-down API call (formerly referred to as “mark host down”), introduced in OpenStack “Liberty”, to proactively tell Nova that the node is down and speed up this part of the process.


Once Nova has recognized that the node is down, in response to either the original failure or Pacemaker explicitly powering the node off, the fence agent initiates a call to Nova host-evacuate, which triggers Nova to restart all of the virtual machines that were running on the failed compute node on a new one. In the future it may be desirable to have an image property or flavor extra specification that can be used to explicitly “opt in” to this functionality only for traditional application workloads that need it.
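The same evacuation can also be triggered by hand with the nova client; a sketch with a placeholder hostname, assuming the shared storage arrangement described below:

   # Rebuild all instances from the failed host onto other compute nodes
   $ nova host-evacuate --on-shared-storage compute-0.example.com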

In this implementation, we assume that impacted virtual machines are either using shared ephemeral storage, for example Ceph, or were booted from volumes. These characteristics make it possible to recover the instances, including their on-disk state, even when the host on which they were originally running has gone down permanently. An out-of-the-box RHEL OpenStack Platform 7 deployment uses Ceph for this purpose.

If pacemaker_remote is also successful in powering the node back on, the node will be returned to the pool of available compute resources when the Nova heartbeat process discovers its return to operation.

The combination of these monitoring, fencing, and recovery capabilities provides a solution that makes it easier than ever to migrate traditional, business-critical applications that require high availability to OpenStack.

Want to try it out for yourself? Sign up for an evaluation of Red Hat Enterprise Linux OpenStack Platform today! Existing users can find instructions on manually enabling high availability for their compute nodes in the Red Hat Knowledgebase. We would love to get more feedback on this feature as we work on integrating these capabilities and more into the RHEL OpenStack Platform director (based on the “TripleO” project) to provide full automation.

Want to learn more about moving instances around an OpenStack environment? Don’t know the difference between cold migration, live migration, and evacuation? Catch my presentation – “Dude, this isn’t where I parked my instance!?” – at OpenStack Summit Tokyo!

Analyzing the performance of Red Hat Enterprise Linux OpenStack Platform using Rally

by Roger Lopez - Principal Software Engineer — September 18, 2015
and Joe Talerico - Senior Performance Engineer

In our recent blog post, we discussed the steps involved in determining the performance and scalability of a Red Hat Enterprise Linux OpenStack Platform environment. To recap, we recommended the following:

  1. Validate the underlying hardware performance using AHC
  2. Deploy Red Hat Enterprise Linux OpenStack Platform
  3. Validate the newly deployed infrastructure using Tempest
  4. Run Rally with specific scenarios that stress the control plane of the OpenStack environment
  5. Run CloudBench (cbtool) experiments that stress applications running in virtual machines within the OpenStack environment

In this post, we would like to focus on step 4: Running Rally with a specific scenario to stress the control plane of the OpenStack environment. The main objectives are:

  1. Provide a brief introduction to Rally
  2. Provide a specific scenario used within the Guidelines and Considerations for Performance and Scaling of Red Hat Enterprise Linux OpenStack Platform 6-based cloud reference architecture
  3. Demonstrate how captured results lead to the tweaking of the HAProxy OpenStack parameter timeout value

What is Rally?

Rally is a benchmarking tool created to answer the underlying question: “How does OpenStack work at scale?” Rally answers this question by automating the processes involved in OpenStack deployment, cloud verification, benchmarking, and profiling. While Rally offers an assortment of actions to test and validate the OpenStack cloud, this blog focuses specifically on using the benchmarking tool to test a specific scenario against an existing Red Hat Enterprise Linux OpenStack Platform-based cloud and generate an HTML report based upon the captured results.

Benchmarking with Rally

Rally runs different types of scenarios based on the information provided by a user-defined .json file. While Rally has many scenarios to choose from, we are showing one key scenario that focuses on testing end-user usability of the RHEL OpenStack Platform-based cloud. The scenario is called NovaServers.boot_server.

In order to create the user-defined .json file, an understanding of how to assign parameter values is required. The following example breaks down an existing .json file that runs the NovaServers.boot_server scenario.

A .json file consists of the following:

  • An opening curly bracket {, followed by the name of the Rally scenario, e.g. “NovaServers.boot_server”, followed by a colon : and an opening bracket [. The syntax is critical when creating a .json file; otherwise the Rally task fails. Each assigned value requires a trailing comma, unless it is the final argument in a section.
  • args – consists of parameters that are assigned user-defined values (a complete example follows this list). The most notable parameters include:
    • auto_assign_nic – The value can be set to true in which a random network is chosen. Otherwise, a network ID can be specified
    • flavor – The size of the guest instances to be created, e.g. “m1.small”
    • image – The name of the image file used for creating guest instances
    • quotas – Specification of quotas for the CPU cores, instances, and memory (ram). Setting a value of -1 for cores, instances, and ram allows for use of all the resources available within the RHEL OpenStack Platform 6 cloud
    • tenants – amount of total tenants to be created
    • users_per_tenant – amount of users to be created within each tenant
    • concurrency – amount of guest instances to run on each iteration
    • times – amount of iterations to perform
  • A closing bracket ] and a closing curly bracket } are required by the .json syntax to properly close the file.
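Putting this together, a minimal .json file for the NovaServers.boot_server scenario might look like the following sketch. The flavor, image, and counts are illustrative placeholders rather than values taken from the reference architecture:

   {
       "NovaServers.boot_server": [
           {
               "args": {
                   "auto_assign_nic": true,
                   "flavor": { "name": "m1.small" },
                   "image": { "name": "rhel-guest-image-7.1" }
               },
               "runner": {
                   "type": "constant",
                   "times": 10,
                   "concurrency": 5
               },
               "context": {
                   "users": { "tenants": 2, "users_per_tenant": 2 },
                   "quotas": {
                       "nova": { "cores": -1, "instances": -1, "ram": -1 }
                   }
               }
           }
       ]
   }

Note that in Rally’s task format the times and concurrency values live under the runner section, while tenants, users_per_tenant, and quotas live under context. The task would then be launched and reported on with commands along the lines of rally task start boot-server.json followed by rally task report --out report.html.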

When benchmarking with Rally, the initial objective is to use small values for the times and concurrency parameters in order to diagnose any errors as quickly as possible. When creating a .json file, concurrency and times have static values that dictate the maximum number of guests to launch for a specified scenario. To overcome this limitation, the rally-wrapper.sh script (found within this reference architecture) was created.

The script increments the values of concurrency and times by a specified amount as long as the success rate is met, thus increasing the maximum number of running guests.

Below is an example of how to use Rally in a practical situation.

Initial boot-storm Rally Results

A good first step for stressing the control plane of a RHEL OpenStack Platform-based environment using Rally is to run boot-storm tests that attempt to launch as many guests as the environment can simultaneously handle. The initial results gathered by rally-wrapper.sh showed 50 guests booting concurrently with a success rate of merely 66%, as shown in the screen capture below.

This low success rate necessitated further investigation of the boot-storm results, which yielded the following error:

We could see that the connection was aborted, but the BadStatusLine error does not provide any definitive reason as to why that happened. It does, however, suggest that we must investigate what is causing incoming client connection requests to be aborted. Taking a top-down approach, this led us to investigate the HAProxy module. HAProxy is a load balancer that spreads incoming connection requests across multiple servers. The default HAProxy client timeout value within the RHEL OpenStack Platform-based reference environment is 30 seconds. Because Rally reuses client connections instead of creating new ones, this low default timeout is not sufficient to handle incoming Rally client connection requests. Rally has a default client timeout of 180 seconds in the /etc/rally/rally.conf file, so the HAProxy timeout value was increased from 30 seconds to 180 seconds to align with Rally’s client connection timeout. As a result of this investigation, Red Hat Bugzilla 1199568 has been filed against the low HAProxy timeout value that produces a ConnectionError.

To address the above issue, several steps described below had to be taken. On the Provisioning node, the common.pp script located within the /etc/puppet/environments/production/modules/quickstack/manifests/load_balancer/ directory was modified to change the value of ‘client 30s’ to ‘client 180s’.
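This Puppet change ultimately lands in the haproxy.cfg rendered on each controller; after the change, the relevant defaults entry would look something like the following (a sketch of the resulting configuration, not the exact rendered file):

   defaults
       # client-side inactivity timeout, raised from 30s to match Rally's 180s
       timeout client 180s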

Once the above changes have been propagated, run the following puppet command on each Controller node for the changes to take effect:
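The exact command appeared in the original post as a screen capture that is not reproduced here; on a controller managed by a Puppet agent, a typical run to apply the updated catalog would be (an assumed invocation, not the verbatim command from the post):

   # pull and apply the updated catalog immediately (illustrative)
   # puppet agent --test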

After applying the HAProxy configuration changes and rerunning the Rally boot-storm tests, the number of guests booting concurrently increased from 50 (with a 66% success rate) to 170 (with a 100% success rate). This effectively increased the maximum number of guests by more than 3x with no reported failures.


Benchmarking tools such as Rally and its scenarios play a key role in achieving optimal performance in a given environment and can be quite handy for troubleshooting. It is worth familiarizing yourself with the different arguments within a scenario, especially the concurrency and times values, since they control the maximum number of guests to launch. As we just demonstrated, tweaking these values allowed us to identify the low HAProxy timeout value that precluded us from achieving an acceptable number of running guests. By modifying the HAProxy timeout, we achieved more than 3x the performance of an “out-of-the-box” RHEL OpenStack Platform environment with no failures when launching guest instances.

Driving in the Fast Lane: Huge Page support in OpenStack Compute

by Steve Gordon, Sr. Technical Product Manager, Red Hat — September 15, 2015

In a previous “Driving in the Fast Lane” blog post we focused on optimization of instance CPU resources. This time around let’s take a dive into the handling of system memory, and more specifically configurable page sizes. We will reuse the environment from the previous post, but add huge page support to our performance flavor.

What are Pages?

Physical memory is segmented into a series of contiguous regions called pages. For efficiency, instead of accessing individual bytes of memory one by one, the system retrieves memory by accessing entire pages. Each page contains a number of bytes, referred to as the page size. To do this, though, the system must first translate virtual addresses into physical addresses to determine which page contains the requested memory.

To perform the translation, the system first looks in the Translation Lookaside Buffers (TLB), which contain a limited number of the virtual-to-physical address mappings for the most recently or frequently accessed pages. When the mapping being sought is not in the TLB (sometimes referred to as a ‘TLB miss’), the processor must iterate through all of the page tables itself to determine the address mapping as if for the first time. This comes with a performance penalty, which means it is preferable to optimize the TLB in such a way that the target process can avoid TLB misses if at all possible.

What are Huge Pages?

The page size on x86 systems is typically 4 KB, which is considered an optimal page size for general purpose computing. While 4 KB is the typical page size, other, larger page sizes are also available. Larger page sizes mean that there are fewer pages overall, which increases the amount of system memory that can have its virtual-to-physical address translation stored in the TLB. This in turn lowers the potential for TLB misses, which increases performance.

Conversely, with larger page sizes there is also an increased potential for memory to be wasted, as processes must allocate memory in pages but may not actually require all of the memory on each page. As a result, choosing a page size is a trade-off between providing faster access times by using larger pages and ensuring maximum memory utilization by using smaller pages. There are other potential issues to consider as well. At a basic level, processes that use large amounts of memory and/or are otherwise memory intensive may benefit from larger page sizes, often referred to as large pages or huge pages.

In addition to the default 4 KB page size, Red Hat Enterprise Linux 7 provides two mechanisms for making use of larger page sizes: Transparent Huge Pages (THP) and HugeTLB. Transparent Huge Pages are enabled by default and will automatically provide 2 MB pages (or collapse existing 4 KB pages) for memory areas specified by processes. Transparent Huge Pages of sizes larger than 2 MB, e.g. 1 GB, are not currently supported, as the CPU overhead involved in coalescing memory into a 1 GB page at runtime is too high. Additionally, there is no guarantee that the kernel will succeed in allocating Transparent Huge Pages, in which case the allocation will be provided in 4 KB pages instead.

HugeTLB, by contrast, lets you reserve pages of a specified size upfront, before they are needed. It supports both 2 MB and 1 GB page sizes and is the way we will be allocating huge pages in the remainder of this post. Allocation of huge pages using HugeTLB is done either by passing parameters directly to the kernel during boot or by modifying values under the /sys filesystem at run time.

Tuning huge page availability at runtime can be problematic though, particularly with 1 GB pages, as when allocating new huge pages the kernel has to identify contiguous unused blocks of memory to make up each requested page. Once the system has started running processes, their memory usage will gradually cause system memory to become more and more fragmented, making huge page allocation more difficult.

How do I pre-allocate Huge Pages?

Here we will focus on allocating huge pages at boot time using the kernel arguments hugepagesz, which sets the size of the huge pages being requested, and hugepages, which sets the number of pages to allocate. We will use grubby to set these kernel boot parameters, requesting 2048 pages that are 2 MB in size:

   # grubby --update-kernel=ALL --args="hugepagesz=2M hugepages=2048"

As grubby only updates the grub configuration under /etc, we must then use grub2-install to write the updated configuration to the system boot record. In this case the boot record is on /dev/sda, but be sure to specify the correct location for your system:

   # grub2-install /dev/sda

Finally for the changes to take effect we must reboot the system:

   # shutdown -r now

Once the system has booted we can check /proc/meminfo to confirm that the pages were actually allocated as requested:

   # grep "Huge" /proc/meminfo
   AnonHugePages:      311296 kB
   HugePages_Total:    2048
   HugePages_Free:     2048
   HugePages_Rsvd:        0
   HugePages_Surp:        0
   Hugepagesize:       2048 kB

The output shows that we have 2048 huge pages in total (HugePages_Total) of size 2 MB (Hugepagesize) and that they are all free (HugePages_Free). Additionally, in this particular case we can see that there are 311296 kB of Transparent Huge Pages (AnonHugePages).

Sharp readers may recall that the compute host we are using from the previous article has two NUMA nodes with four CPU cores in each (two reserved for host processes, and two reserved for guests):

                    Node 0            Node 1
   Host Processes   Core 0, Core 1    Core 4, Core 5
   Guests           Core 2, Core 3    Core 6, Core 7

In addition, each of these nodes has 8 GB of RAM:

# numactl --hardware
available: 2 nodes (0-1)
node 0 cpus: 0 1 2 3
node 0 size: 8191 MB
node 0 free: 6435 MB
node 1 cpus: 4 5 6 7
node 1 size: 8192 MB
node 1 free: 6634 MB
node distances:
node   0   1
 0:  10  20
 1:  20  10

When we allocated our huge pages using the hugepages kernel command line parameter the pages requested are split evenly across all available NUMA nodes. We can verify this by looking at the /sys/devices/system/node/ information:

   # cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
   1024
   # cat /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages
   1024

While there is no way to change the per-node allocations via the kernel command line parameters, you can modify these values, and as a result the per-node allocation of huge pages, at runtime simply by writing updated values to these files under /sys. To check whether the kernel was able to successfully apply the changes by allocating or deallocating huge pages as required, read the values back from the /sys filesystem. A way to work around the inability to do per-node allocations at boot is to insert a script that modifies the /sys values fairly early in the initialization process.
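For example, to skew the allocation toward node 0 at runtime (the page counts here are arbitrary illustrative values):

   # shift 512 pages of the allocation from node 1 to node 0, then verify
   # echo 1536 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
   # echo 512 > /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages
   # cat /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages
   1536
   512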

This provides a mechanism for allocating huge pages at run time, but there is the risk that the kernel will not be able to find enough contiguous free memory to allocate the requested number of pages. This can also happen when allocating huge pages at boot using the kernel parameters, if not enough room is left for pages dirtied early in the boot process, but it is more likely once the system is fully booted.

How do I back guest memory with Huge Pages?

So now that we have some huge pages available on our compute host, what do we need to do so that they are used for the memory allocated to the QEMU guests launched from our OpenStack cloud? OpenStack Compute in Red Hat Enterprise Linux OpenStack Platform 6 and 7 allows us to specify, on a per-flavor basis, whether we want guests to use huge pages.

Specifically, the hw:mem_page_size flavor extra specification key for enabling guest huge pages takes a value in kilobytes to indicate the size of the huge pages that should be used. The scheduler performs some accounting to keep track of the number and size of huge pages available on each compute host, so that instances that require huge pages are not scheduled to a host where they are not available. Currently accepted values for hw:mem_page_size are large, small, any, 2048, and 1048576. The large and small values are short-hand for selecting the largest and smallest page sizes supported on the target system; on x86_64 systems this is 1048576 kB (1 GB) for large and 4 kB (normal) pages for small. Selecting any denotes that guests launched using the flavor will be backed by whichever sized huge pages happen to be available.

Building on the example from the previous post where we set up an instance with CPU pinning we will now extend the m1.small.performance flavor to also include 2M huge pages:

   $ nova flavor-key m1.small.performance set hw:mem_page_size=2048

The updated flavor extra specifications for our m1.small.performance flavor now include the required huge page size (hw:mem_page_size) in addition to the CPU pinning (hw:cpu_policy) and host aggregate (aggregate_instance_extra_specs:pinned) specifications covered in the previous article:

  • "aggregate_instance_extra_specs:pinned": "true"
  • "hw:cpu_policy": "dedicated"
  • "hw:mem_page_size": "2048"

Then to see the results of this change in action, we must boot an instance using the modified flavor:

   $ nova boot --image rhel-guest-image-7.1-20150224 \
               --flavor m1.small.performance numa-lp-test

The nova scheduler will endeavor to identify a host with enough free huge pages of the size specified in the flavor to back the memory of the instance. This is accomplished by the NUMATopologyFilter (you may recall we enabled this in the previous post on CPU pinning), which filters out hosts that don’t have enough huge pages, either in total or on the desired NUMA node(s). If the scheduler is unable to find a host and NUMA node with enough pages, the request will fail with a NoValidHost error. In this case we have prepared a host specifically for this purpose, with enough pages allocated and unused, so it will not be filtered out and the instance boot request will succeed.
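As a point of reference, the filter is enabled through the scheduler filter list in nova.conf on the controller. A sketch of a Kilo-era setting, where the surrounding filters shown are just a plausible default set:

   # /etc/nova/nova.conf on the controller (filter list shown is illustrative)
   scheduler_default_filters = RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,NUMATopologyFilter,AggregateInstanceExtraSpecsFilter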

Once the instance has launched we can review the state of /proc/meminfo again:

# grep "Huge" /proc/meminfo
AnonHugePages:    669696 kB
HugePages_Total:    2048
HugePages_Free:     1024
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB

Here we can see that while previously there were 2048 huge pages free, only 1024 are available now. This is because the m1.small.performance flavor our instance was created from is based on the m1.small flavor, which has 2048 MB of RAM. So to back the instance with huge pages, 1024 huge pages of 2048 kB each have been used.

We can also use the virsh command on the hypervisor to inspect the Libvirt XML for the new guest and confirm that it is in fact backed by huge pages as requested. Note that the NUMATopologyFilter scheduler filter will eliminate all compute nodes that do not have enough available huge pages to back the entirety of the guest RAM for the selected flavor. As a result if there are no compute nodes in the environment that have enough available huge pages scheduling will fail.

First we list the instances running on the hypervisor:

   # virsh list
    Id Name                        State
    1  instance-00000001           running

Then we display the Libvirt XML for the selected instance:

   $ virsh dumpxml instance-00000001
       <page size='2048' unit='KiB' nodeset='0'/>
     <vcpupin vcpu='0' cpuset='2'/>
     <vcpupin vcpu='1' cpuset='3'/>
     <emulatorpin cpuset='2-3'/>
     <memory mode='strict' nodeset='1'/>
     <memnode cellid='0' mode='strict' nodeset='1'/>

This output is truncated, but once we identify the <memoryBacking> element we can see that the guest is indeed defined to strictly use 2 MB huge pages from node 0. The guest has also been allocated CPU cores 2 (<vcpupin vcpu='0' cpuset='2'/>) and 3 (<vcpupin vcpu='1' cpuset='3'/>), which you might recall are also collocated on NUMA node 0.

As a result, we have been able to confirm that the guest’s virtual CPU cores and memory are not only backed by huge pages but are also collocated on the same NUMA node, providing fast access without the need to cross node boundaries.

Red Hat Confirms Speaking Sessions at OpenStack Summit Tokyo

by Jeff Jameson, Sr. Principal Product Marketing Manager, Red Hat — September 14, 2015

As this Fall’s OpenStack Summit in Tokyo approaches, the Foundation has posted the session agenda, outlining the final schedule of events. I am happy to report that Red Hat has nearly 20 sessions included in the week’s agenda, along with a few more as waiting alternates. With the limited space and shortened event this time around, I am pleased to see that Red Hat continues to remain in sync with the current topics, projects, and technologies the OpenStack community and customers are most interested in.

Red Hat is a Premier sponsor in Tokyo this Fall and will have a dedicated sponsor presentation, along with our accepted general sessions. To learn more about Red Hat’s accepted sessions, have a look at the details below. Be sure to visit us at the sessions below and at our booth (P7). We look forward to seeing you in Tokyo in October!

For more details on each session, see the full session agenda posted by the Foundation.


Big data in the open, private cloud

by Tim Gasper, Global Offering Manager, CSC — September 10, 2015

Organizations that take advantage of comprehensive insights from their data can gain a competitive edge. However, the ever-increasing amount of data coming in can make it hard to see trends. Adding to this challenge, many companies have data locked in silos, making it difficult—if not impossible—to gain critical insights. Big data technologies like Hadoop can help unify and organize data, but getting fast, meaningful insight still isn’t easy.

Organizations consistently face 4 main challenges when trying to implement big data initiatives:

  1. Setting up and operating a big data and analytics platform
  2. Attracting, managing, and applying big data and analytics skills
  3. Integrating insights into business processes
  4. Iterating quickly

Overcoming these challenges requires a new approach to big data: as a service. Applying cloud operating principles to big data can give you more flexibility, better resource utilization, higher scaling, and lower costs than traditional big data deployments. And, if you use a private cloud to host your big data platform, you can boost security and compliance and take advantage of internal network benefits.

Flexible, scalable, and cost-effective, OpenStack® provides an ideal cloud foundation for big data platforms. As a leading contributor to the OpenStack project, Red Hat makes OpenStack safe, secure, and consumable for business use. Red Hat® Enterprise Linux® OpenStack Platform is commercially hardened and incorporates the enterprise-grade features, security, reliability, and support you need for business operations.

With big data science and industry-specific business process expertise, CSC integrates big data analytics software, Red Hat Enterprise Linux OpenStack Platform, and Intel-based x86 servers into a comprehensive, hybrid cloud-based big data as a service offering. CSC Big Data Platform as a Service (BDPaaS) lets you get actionable insight from your data in as fast as 30 days—without needing deep expertise in big data platform development.

Supported and delivered as a service, CSC BDPaaS collects and centralizes data from any source and incorporates insights from that data into your business processes. Open platforms and commonly available, industry-standard components simplify deployment, improve flexibility, and reduce risk. The scalable, hybrid cloud architecture puts workloads to work where it makes the most sense for your organization—on-premise, in your private cloud, or even in a public cloud. Plus, security best practices are built in to help protect your data and insights.


CSC’s big data and industry experts customize each implementation to your unique environment and business needs. Additionally, many clients choose to engage CSC’s InsightLab. These professional services, including agile data science assistance, data engineering and integration, and data visualization, let you start taking advantage of your data faster and more effectively.

Many organizations across a variety of industries have successfully deployed CSC BDPaaS. Insurance, technology, logistics, media, advertising, healthcare, and manufacturing companies are benefiting from fast, integrated big data analysis. These customers are now able to enhance customer satisfaction, improve compliance, and develop better products and services faster, with the goal of ultimately increasing sales. As an example, a leading US-based mutual insurance company was able to deploy a new, usage-based auto insurance offering to new states in half the time, while cutting big data operation costs by 50% and realizing more than 10% savings per account included in the program.¹

As you can see, there’s a better way to meet your big data initiatives. By applying cloud concepts to big data, CSC gives you a comprehensive, Red Hat-powered platform that’s quick to deploy, customized to your business, and more cost-effective than building and maintaining a platform in house. And, CSC offers a full set of tools and services to help you along the way, from planning to operation to adapting over time. If you’re ready to start making your data work for you—fast—visit csc.com/bigdataindex or contact your Red Hat sales team for an assessment.




¹ Based on CSC-Red Hat client data, 2015


Managing OpenStack: Integration Matters!

by Matt Hicks, Vice President Software Engineering at Red Hat — September 8, 2015

With the many advantages an Infrastructure-as-a-Service (IaaS) cloud provides businesses, it’s great to see a transformation of IT happening across nearly all industries and markets. Nearly every enterprise is taking advantage of an “as-a-service” cloud in some form or another. With this new infrastructure, it’s now more important than ever to remember the critical role that management plays within this mix. Oddly enough, management is sometimes considered a second priority when customers begin investigating the benefits of an IaaS cloud, but it quickly becomes the first priority when running one.

At Red Hat, we believe that management plays a critical role in this next-generation datacenter. We believe cloud management should be open, agile, and integrated. Let us explain how we are integrating several critical management capabilities to let you take OpenStack to its fullest potential.

Recently, we announced the general availability of Red Hat Enterprise Linux OpenStack Platform 7. This new release, based on the community “Kilo” OpenStack release, is filled with hundreds of new features and functions designed to further the advancement and adoption of OpenStack-based clouds. In particular, version 7 brought to market a brand new deployment and management tool to help ease the burden of new resource deployments and day-to-day operations management: the Red Hat Enterprise Linux OpenStack Platform director.

Director is our new technology for deploying OpenStack on bare metal machines to establish a production-ready environment. Using the Ironic service, it takes a unique approach to discovering the hardware, planning the deployment (e.g. which components of OpenStack go where), executing the deployment, and ultimately providing long-term stability. It is a very different approach from other competing installer tools, as it establishes a long-term instance (a.k.a. the undercloud) that understands the deployment architecture and exposes APIs that can be used to modify the deployment. This allows functions like scaling up and down, and eventually update/upgrade, within that topology. This combination provides the basis for automating the management of OpenStack.

Red Hat Cloud Infrastructure customers, for example, may use Satellite to deploy director and further leverage Satellite to provide a trusted, on-premise content stream for the OpenStack deployment. Whether using Glance images, RPMs for your guest instances, or Docker containers for your applications, Satellite adds an on-premise content and configuration management capability spanning from your infrastructure to your guests.

After your environment is up and running, we can then leverage Red Hat CloudForms to provide a single management solution. In fact, while CloudForms today already manages the essential components of Red Hat Enterprise Linux OpenStack Platform, CloudForms 4 (coming soon) will add new functionality to manage both the director instance (i.e. the undercloud) and the running production OpenStack cloud (i.e. the overcloud), and to orchestrate sophisticated management needs. Specifically, it will provide functionality such as automating a scale-out command to the undercloud when it detects a capacity issue in the overcloud. The nice thing about CloudForms is that it can also extend management to other domains such as OpenShift (PaaS) container environments, VMware environments, Red Hat Enterprise Virtualization environments, and even public cloud environments like Amazon AWS. This provides cloud operators a single management view of their entire infrastructure, regardless of which vendor platform is being used. And, critically important for advanced customers, this can link the world of containers and OpenStack together from a single point of management.

As a final thought: after really listening to our customers and working to meet their unique needs, we quickly realized the importance of an integrated solution, rather than just trying to solve single problems with single products. Red Hat has responded to this need by working towards fully integrated solution suites, in the form of Red Hat Cloud Infrastructure and the recently announced Red Hat Cloud Suite for Applications. These integrated solutions deliver a common installation experience that orchestrates across the various platforms to bring a cohesive experience to customers who choose to utilize multiple Red Hat offerings.

How Red Hat’s OpenStack partner Networking Solutions Offer Choice and Performance

by Jonathan Gershater — August 31, 2015

Successfully implementing an OpenStack cloud is more than just choosing an OpenStack distribution. With its community approach and rich ecosystem of vendors, OpenStack represents a viable option for cloud administrators who want to offer public-cloud-like infrastructure services in their own datacenter. Red Hat Enterprise Linux OpenStack Platform offers pluggable storage and networking options. This open approach is contrary to closed solutions such as VMware Integrated OpenStack (VIO), which only supports VMware NSX for L4-L7 networking or VMware Distributed Switch for basic L2 networking.

Below are some of the networking partners who have certified their OpenStack Networking plugins with Red Hat Enterprise Linux OpenStack Platform and will be on display at VMworld 2015 in San Francisco at the Red Hat booth, 528 (Cisco is at booth 1721). See the exhibitor map.


Cisco ACI offers a consolidated overlay and underlay solution that can be fully automated via OpenStack and the Cisco APIC. This solution scales to over 180,000 virtual machines and thousands of hypervisor hosts without the introduction of centralized bottlenecks or gateways.  It offers deep telemetry and visibility, tying together the OpenStack environment with the physical infrastructure to vastly improve operations and troubleshooting.  The solution also offers an optional, intent-based interface called Group-Based Policy, which leverages ACI’s application-centric policy automation and service chaining capabilities.

Selected differentiators between Red Hat and Cisco vs VMware VIO and NSX:
Red Hat and Cisco:
  • Fully distributed networking solution with no centralized gateways.
  • Simplified automation through Group-Based Policy.

VMware VIO and NSX:
  • NSX required for L4-L7 networking.
  • NSX must be deployed into an Edge cluster.

Nuage Networks

Utilizing an open plug-in to the Neutron framework of Red Hat’s OpenStack offering, Nuage Networks VSP provides an automated, real-time response to requests relayed from Red Hat Enterprise Linux OpenStack Platform. With the Red Hat and Nuage Networks SDN-based cloud solution, flexible, automated network configuration delivers instantaneous network connectivity, so cloud applications can go live faster than with alternative approaches.

Selected differentiators between Red Hat and Nuage vs VMware VIO and NSX:
Red Hat and Nuage:
  • Fully distributed control plane (Nuage Networks VSD) for scale and reliability.
  • Federation across multiple clouds, including public clouds.
  • Network templates free application developers from having to deal with network settings.
  • Declarative policies are intelligently interpreted at each network and end point – across clouds, datacenters, hypervisors, and bare metal servers.

VMware VIO and NSX:
  • Constrained by VMware clusters.
  • Must add more VMs manually to add more clusters.
  • Clusters also control CPU/memory resources that VMs receive.
  • Provides networking within one datacenter.
  • Status quo – application developers must understand and configure network settings.
  • Declarative policies are applied within the VMware hypervisor only.


Juniper and Red Hat have collaborated to deliver a validated solution and collaborative support model based on Contrail Cloud Platform (based on Open Contrail) plus Red Hat Enterprise Linux OpenStack Platform for enterprise and provider cloud deployments.

Selected differentiators between Red Hat and Juniper vs VMware VIO and NSX:

Red Hat and Juniper:
  • Open source (OpenContrail), open standards (IP-VPN), and open interfaces (REST APIs) into the system ensure transparency, interoperability with multi-vendor physical networks, and investment protection.
  • Simple policy definition and group-based policy enforcement automates network service insertion and improves business agility.

VMware and NSX:
  • Lock-in to the VMware software stack.
  • Automation requires vRealize products.


Midokura Enterprise MidoNet provides fully distributed and advanced L2 to L4 network services. Leveraging solid open source technologies like Apache Zookeeper and Cassandra, MidoNet brings flow processing to the edge of the network and improves performance inside the virtual network. Like most overlays, traffic is encapsulated and sent over the physical network between hosts. In MidoNet, the flow processing can be done at line speed because the MidoNet agent has knowledge of the virtual topology without going off-box to a central controller.

Selected differentiators between Red Hat and Midokura vs VMware VIO and NSX:

Red Hat and Midokura:
  • Massive horizontal scale on distributed layer 3 gateways; scaling in MidoNet is simple – just add nodes; multi-datacenter support is provided through top-of-rack switches running the MidoNet agent.
  • Can trace live and past flows and provide visibility into the virtual network.

VMware and NSX:
  • Constrained by VMware technology to a single datacenter of modest size; no federation across clouds.
  • Limited to live flows.


PLUMgrid Open Networking Suite is a leading cloud networking solution for Red Hat Enterprise Linux OpenStack Platform. PLUMgrid helps overcome the limitations of many other OpenStack networking solutions, by providing a rich set of high performance virtual network functions, end-to-end encryption, high availability features plus automated installation, management, analytics and operational tools.

Selected differentiators between Red Hat and PLUMgrid vs VMware VIO and NSX:

Red Hat and PLUMgrid:
  • PLUMgrid ONS is built on the concept of Virtual Domains for micro-segmentation.
  • Fully distributed in-kernel portfolio of network and security functions.

VMware VIO and NSX:
  • Vertically integrated single-vendor solution.

The value of a Red Hat certified solution is that customers get performance and reliability when choosing vendors for their solution. Red Hat and its certified vendors work together to solve customer problems and provide best-of-breed solutions. Red Hat maintains a large ecosystem of certified hardware and software vendors across all products, and specifically for OpenStack there are more than 900 certified products.

Scaling NFV to 213 Million Packets per Second with Red Hat Enterprise Linux, OpenStack, and DPDK

by Jeff Jameson, Sr. Principal Product Marketing Manager, Red Hat — August 19, 2015

Written by: Andrew Theurer, Principal Software Engineer

There is a lot of talk about NFV and OpenStack, but frankly not much hard data showing how well OpenStack can perform with technologies like DPDK. We at Red Hat want to know, and I suspect many of you do as well. So we decided to see what RDO Kilo is capable of by testing multiple Virtual Network Functions (VNFs), deployed and managed completely by OpenStack.

Creating the ultimate NFV compute node

In order to scale NFV performance to incredible levels, we need to start with a strong foundation: the hardware which makes up the compute nodes. An NFV compute node needs incredible I/O capability and very fast memory. We selected a server with 2 Intel Haswell-EP processors, 24 cores, 64GB of memory @2133 MHz, and seven available PCI gen3 slots. We populated six of these PCI slots with Intel dual-port 40Gb adapters: that’s twelve 40Gb ports in one server!

Exploiting high performance hardware with Nova

The compute node we chose has the potential for amazing NFV performance, but only if it is configured properly. If you were not using OpenStack to deploy virtual machines, you would need to ensure your deployment process chooses resources correctly: from node-local CPU, memory, and I/O, to backing VM memory with 1GB pages. All of these are essential to getting top performance from your VMs. The good news is that OpenStack can do this for you; no longer are you required to get this “right” yourself. The user only needs to prepare for PCI passthrough and then specify the resources via Nova flavor keys:

nova flavor-key pci-pass-40Gb set "hw:mem_page_size=1048576"

nova flavor-key pci-pass-40Gb set "pci_passthrough:alias"="XL710-40Gb-PF:2"

When creating a new instance with this flavor, Nova will then ensure that the resources are node-local and the VM is backed with 1GB huge pages.
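For the alias used above to resolve, the compute node’s Nova configuration must also whitelist the devices and define the alias. A sketch of the corresponding Kilo-era nova.conf entries, where the vendor and product IDs are assumptions for an Intel XL710 adapter:

# /etc/nova/nova.conf (vendor/product IDs are illustrative assumptions)
pci_passthrough_whitelist = [{"vendor_id": "8086", "product_id": "1583"}]
pci_alias = {"name": "XL710-40Gb-PF", "vendor_id": "8086", "product_id": "1583"}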

The Network Function under test

We deployed six VMs, using RHEL 7.1 and DPDK 2.0, each of them performing a basic VNF role: forwarding of layer-2 packets. DPDK (the Data Plane Development Kit) is a set of libraries and drivers for incredibly fast packet processing. More information on DPDK is available here. Each VM consists of 2 x 40Gb interfaces, 3 vCPUs, and 6GB of memory. Forwarding of network packets was enabled for both ports (in one port, out the other), in both directions. You can think of this network function as a bridge, or the base function of a firewall, located somewhere between your computer and a destination:


In this scenario, the “processing” we chose is packet forwarding, handled by the application “testpmd”, which is included in the DPDK software. We chose this because we wanted to test I/O throughput at the highest possible levels to confirm whether OpenStack Nova made the correct decisions regarding resource allocation. Once these VMs are provisioned, we have a compute node with:


We use a second system to generate network traffic, which happens to have an identical hardware configuration to the compute node. This system acts as both the “computer/phone/device” and the “server” in our test scenario. For each VM, the packet generator sends traffic to both of the VM’s ports, and it also receives the traffic that the VM forwards back. For our test metric, we count how many packets per second are transmitted, forwarded by the VM, and finally returned to the packet generator system.


The test results

Note that we conduct this test with all six VMs processing packets at the same time. We used a packet size of 64 bytes in order to simulate the worst possible conditions for packet processing overhead. This allows us to drive to the highest levels of packets per second without prematurely hitting a bandwidth limit. In this scenario, we are able to achieve 213 million packets per second! OpenStack and DPDK are operating at nearly the theoretical maximum packet rate for these network adapters. In fact, when we tested these two systems without OpenStack or any virtualization, we observed 218 million packets per second: OpenStack with KVM is achieving 97.7% of bare metal!

One other important aspect to consider is how much CPU we are using for this test. Is there enough to spare for more advanced network functions? Could we scale to more network functions? Below is a graph of CPU usage as observed from the compute node:


Although processing 213 million packets per second is an incredible feat, this compute node still has half of the system’s CPU unused! Each of the VMs is using 2 of its 3 vCPUs to perform packet forwarding, leaving 1 vCPU for more advanced packet processing. These VMs could also be provisioned with 4 vCPUs without over-committing host CPUs, providing even more compute resources to them.

Real results, and more to come

We will continue reporting performance tests like this, showing actual performance of NFV and OpenStack that we achieve in our tests. We are also working with groups like OPNFV to help standardize benchmarks like this, so stay tuned. We have a lot more to share!

Performance and Scaling your Red Hat Enterprise Linux OpenStack Platform Cloud

by Joe Talerico - Senior Performance Engineer — August 17, 2015
and Roger Lopez - Principal Software Engineer

As OpenStack continues to grow into a mainstream Infrastructure-as-a-Service (IaaS) platform, the industry seeks to learn more about its performance and scalability for use in production environments. As recently captured in this blog, common questions that typically arise are: “Is my hardware vendor working with my software vendor?”, “How much hardware would I actually need?”, and “What are the best practices for scaling out my OpenStack environment?”

These common questions are often difficult to answer because they rely on environment specifics. With every environment being different, often composed of products from multiple vendors, how does one go about finding answers to these generic questions?

To aid in this process, Red Hat Engineering has developed a reference architecture capturing Guidelines and Considerations for Performance and Scaling of a Red Hat Enterprise Linux OpenStack Platform 6-based cloud. The reference architecture utilizes common benchmarks to generate load on a RHEL OpenStack Platform environment to answer these exact questions.

Where do I start?

With the vast number of features that OpenStack provides also comes a lot of complexity. The first place to start is not by trying to find performance and scaling results on an already running OpenStack environment, but to step back and take a look at the underlying hardware that is in place to run this OpenStack environment. This allows one to answer the questions “How much hardware do I need?” and “Is my hardware working as intended?” while avoiding the complexities that can affect performance, such as file systems, software configurations, and changes in the OS. A tool to answer these questions is the Automatic Health Check (AHC). AHC is a framework developed by eNovance to capture, measure, and report a system’s overall performance by stress testing its CPU, memory, storage, and network. AHC’s main objective is to provide an estimation of a server’s capabilities and ensure its basic subsystems are running as intended. AHC uses tools such as sysbench, fio, and netperf and provides a series of fully automated benchmark tests that deliver consistent results across multiple test runs. The test results are then captured and stored at a specified central location. AHC is useful when doing an initial evaluation of a potential OpenStack environment as well as post-deployment. If a specific server causes problems, the same non-destructive AHC benchmark tests can be run on that server and the outcome compared with the initial results captured prior to deploying OpenStack. AHC is a publicly available open source project on GitHub via https://github.com/enovance/edeploy.

My hardware is optimal and ready, what’s next?

Deploy OpenStack! Once it is determined that the underlying hardware meets the specified requirements to drive an OpenStack environment, the next step is to deploy OpenStack. While the installation of OpenStack itself can be complex, one of the keys to providing performance and scalability for the entire environment is to isolate network traffic to a specific NIC for maximum bandwidth. The more NICs available within a system, the better. If you have questions on how to deploy RHEL OpenStack Platform 6, please take a look at the Deploying Highly Available Red Hat Enterprise Linux OpenStack Platform 6 with Ceph Storage reference architecture.

Hardware optimal? Check. OpenStack installed? Check.

With hardware running optimally and OpenStack deployed, the focus turns towards validating the OpenStack environment using the open source tool Tempest.

Tempest is the tool of choice for this task: it validates the OpenStack cloud by explicitly testing a number of scenarios to determine whether the cloud is running as intended. The specifics on setting up Tempest can be found in the reference architecture.
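The reference architecture spells out the exact setup; as a rough sketch of the tooling of the era (invocation details vary by Tempest version, and the checkout path is a placeholder), a run might look like:

# from a Tempest checkout with a tempest.conf generated for your cloud
cd /opt/tempest
testr init                              # create the test repository (once)
testr run --parallel tempest.scenario   # run the scenario tests in parallel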

Upon validating the OpenStack environment, the focus shifts to answering the scalability and performance questions. The two benchmarking tools used to do that are Rally and Cloudbench (cbtool). Rally offers an assortment of actions to stress any OpenStack installation, and the aforementioned reference architecture details how to use both benchmarking tools to test specific scenarios.
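As an illustration, a small control-plane scenario can be expressed as a JSON task and handed to Rally; this is a sketch, and the flavor and image names below are placeholders for whatever exists in your cloud:

cat > boot-and-delete.json <<'EOF'
{
  "NovaServers.boot_and_delete_server": [
    {
      "args": {"flavor": {"name": "m1.small"}, "image": {"name": "cirros"}},
      "runner": {"type": "constant", "times": 10, "concurrency": 2}
    }
  ]
}
EOF
rally task start boot-and-delete.json

This particular scenario boots and deletes ten instances, two at a time, exercising Nova, Glance, and the message bus without touching tenant workloads.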

Cloudbench (cbtool) is a framework that automates IaaS cloud benchmarking by running a series of controlled experiments. An experiment is executed by deploying and running a set of Virtual Applications (VApps). Within our reference architecture, the workload VApp consists of two critical roles used for benchmarking: the orchestrator role and the workload role.

Rally and CloudBench complement each other by providing the ability to benchmark different aspects of the OpenStack cloud thus offering different views on what to expect once the OpenStack cloud goes into production.


To recap, when trying to determine the performance and scalability of a Red Hat Enterprise Linux OpenStack Platform installation make sure to follow these simple steps:

  1. Validate the underlying hardware performance using AHC
  2. Deploy Red Hat Enterprise Linux OpenStack Platform
  3. Validate the newly deployed infrastructure using Tempest
  4. Run Rally with specific scenarios that stress the control plane of the OpenStack environment
  5. Run CloudBench (cbtool) experiments that stress applications running in virtual machines within the OpenStack environment

In our next blog, we will take a look at a specific Rally scenario and discuss how tuning the OpenStack environment based on Rally results could allow us to achieve better performance. Stay tuned and check out our blog site often!


Upgrades are dying, don’t die with them

by Maxime Payant-Chartier, Technical Product Manager, Red Hat — August 12, 2015

We live in a world that has changed the way it consumes applications. The last few years have seen a rapid rise in the adoption of Software-as-a-Service (SaaS) and Platform-as-a-Service (PaaS). Much of this can be attributed to the broad success of Amazon Web Services (AWS), which is said to have grown revenue from $3.1B to $5B last year (Forbes). More and more people, enterprise customers included, are consuming applications and resources that require little to no maintenance. And any maintenance that does happen now goes unnoticed by users. This leaves traditional software vendors struggling to adapt their distribution models to make their software easier to consume. Lengthy, painful upgrades are no longer acceptable to users, forcing vendors to create a solution to this problem.

Let's face it: the impact of this on traditional software companies is starting to be felt. Their services and methods of doing business are now being compared to a newer, more efficient model, one that is not bogged down by the inefficiencies of the traditional approach. SaaS vendors have the advantage that the software runs in their own datacenters, where they have easy access to it and control the hardware, the architecture, the configurations, and so on.

Open source initiatives that target the enterprise market, like OpenStack, have to look at what others are doing in order to appeal to their intended audience. The grueling release cycle of the OpenStack community (major releases every six months) can put undue pressure on enterprise IT teams to update, deploy, and maintain environments, often leaving them unable to keep up from one release to the next. Inevitably, they start falling behind. And in some cases, their attempts to update are slower than the software release cycle, leaving them further behind with each release. This is a major hindrance to successful OpenStack adoption.

Solving only one side of the problem

Looking at today's best practices for upgrading, we can see that the technology hasn't quite matured yet. And although DevOps allows companies to deliver code to customers faster, it doesn't solve the problem of installing the new underlying infrastructure: faster is not enough. This situation is even more critical when considering your data security practices. The ability to patch quickly and efficiently is key for companies to deploy security updates when critical security issues are spotted.

Adding to this is the question of how businesses can shorten the feedback loop on development releases. Releasing an alpha or beta, then waiting for people to test it and send relevant feedback, is a long process that causes delays for both the customer and the provider. Yet another friction point.

Efforts are currently being made with the community projects Tempest and Rally to provide better visibility into a cloud's stability and functionality. These two projects are necessary steps in the right direction; however, they currently lack holistic integration and still only offer a view into a single cloud's performance. Additionally, they do not yet allow an OpenStack distribution provider to check whether its distribution's new versions work with specific configurations or hardware. Whatever the solution is, it has to compete with what is currently being offered in the "*aaS" space or it will be seen as outdated and risk losing users.

Automation: A way out

Continuous integration and continuous delivery (CI/CD) is all the rage these days and it might offer part of the solution. Automation has to play a key role if companies are to keep up. We need to look into ways of making the process repeatable, reliable, incrementally improving, and customizable. Developers can no longer claim it worked on their laptop, so companies cannot limit themselves to saying it worked (or didn’t work) on their infrastructure. Software providers have to get closer to their customers to share in the pain.

Every OpenStack deployment is a custom job these days. Not everyone is running the same hardware, the same configurations, and so on. This means we have to adapt to those customizations and provide a framework that allows people to test their specific use cases. Once unit testing, integration testing, and functional testing have happened inside the walls of the software provider, the software has to go out into the wild and survive real customer use cases. And just as important, feedback has to be received quickly in order for the next iterations to be smaller, which eases the burden of identifying problems and fixing them as needed.

One of the concepts Red Hat is investigating is chaining different CI environments and managing the logs and log analysis from a "central CI". We've been working with customers to validate this concept, testing it first on the equipment of customers and partners who have been able to set some aside for us. We want to deploy a new version and verify an update live on premises, and include this step in our gating process before merging code. We are not satisfied unless it can be deployed and proven to work in a real environment. This means that CI/CD isn't just about us anymore: it has to work on-site or a patch is not merged.

Currently in our testing, we receive status reports from different architectures, which allows us to identify whether an issue is specific to a certain configuration, hardware, or environment. It also allows us to identify more widespread issues that need to be fixed in the release. Ideally, we envision a point where, once a new version reaches a certain "acceptance threshold," it is marked as ready for release. It is then automatically pushed out to a customer's pre-production environment.

A workflow might look something like this:

[Diagram: continuous delivery workflow]

Source (modified): https://en.wikipedia.org/wiki/Continuous_delivery#/media/File:Continuous_Delivery_process_diagram.png

This type of workflow could integrate well into existing tools like Red Hat Satellite. Updates would still be provided as usual, but additional options to test upgrades leveraging the capabilities of the cloud would be made available. This would provide system administrators with an added level of certainty before deploying packages to existing servers, including logs to troubleshoot, should anything go wrong, before pushing to production environments.

Red Hat is committed to delivering a better and smoother upgrade experience for our customers and partners. While there are many questions that remain to be answered, notably around security or proprietary code, there is no doubt in my mind that this is the way forward for software. Automation has to take over the busy work of testing and upgrading to free up critical IT staff members to spend more time delivering features to their customers or users.

How to choose the best-fit hardware for your OpenStack deployment

by Jonathan Gershater — August 6, 2015

One of the benefits of OpenStack is the ability to deploy the software on standard x86 hardware, and thus not be locked-in to custom architectures and high prices from specialized vendors.

Before you select your x86 hardware, you might want to consider how you will resolve hardware/software related issues:

  • Is my distribution of OpenStack and the underlying Linux, certified to run on the hardware I use?
  • Will the vendor of my OpenStack distribution work with my hardware vendor to resolve issues?

There was a panel session (Cisco, Ooyala, Sprint, and Shutterfly) on OpenStack use cases at the OpenStack Summit in Vancouver in May 2015. At the end, an audience member asked: "How important is it that the OpenStack distribution is certified to run on the hardware you use?"

To listen to the panelists' answer (less than two minutes), click here:


From the video

Cisco’s Director of Engineering and Operations, Rafi Khardalian:

  1. "OpenStack is a sliver of a large stack of software you are running."
  2. "The Linux kernel is a key component and it is vitally critical to test it against the hardware you are running, so that the Linux kernel is reliable and can offer all the features you need, consumed up the stack into OpenStack…" For example:
    1. "How reliable is the VXLAN?"
    2. "We found better reliability with Intel cards vs. Broadcom cards… it is the maturity of the driver set."

And Ilan Rabinovich from Ooyala:

  • "Some of the more painful experiences we experienced… that piece of hardware and that driver are not making friends… it's where you spend the most time troubleshooting."

Next Steps to Consider

Red Hat maintains a large ecosystem of certified hardware and software vendors across all products; specifically for OpenStack, there are more than 900 certified products. Together with our certified hardware vendors and over 20 years of Linux experience, Red Hat is well equipped to resolve issues across the entire stack: hardware, Linux, the KVM hypervisor, and OpenStack.

Voting Open for OpenStack Summit Tokyo Submissions: Container deployment, management, security and operations – oh my!

by Steve Gordon, Sr. Technical Product Manager, Red Hat — July 29, 2015

This week we have been providing a preview of Red Hat submissions for the upcoming OpenStack Summit to be held October 27-30, in Tokyo, Japan. Today's grab bag of submissions focuses on containers: the relationship between them and OpenStack, as well as how to deploy, manage, secure, and operate workloads using them. This was already a hotbed of new ideas and discussion at the last summit in Vancouver, and we expect things will only continue to heat up in this area as a result of recent announcements in the lead-up to Tokyo!

The OpenStack Foundation manages the selection process by allowing its members to vote for the topics and presentations they would like to see. To vote for one of the listed sessions, click on the session title below and you will be directed to the voting page. If you are a member of the OpenStack Foundation, just log in. If you are not, you are welcome to join now – it is simple and free.

Please make sure to vote before the deadline on Thursday, July 30 2015, at 11:59pm PDT.

Application & infrastructure continuous delivery using OpenShift and OpenStack
  • Mike McGrath – Senior Principal Architect, Atomic @ Red Hat
Atomic Enterprise on OpenStack
  • Jonathon Jozwiak – Principal Software Engineer @ Red Hat
Containers versus Virtualization: The New Cold War?
  • Jeremy Eder – Principal Performance Engineer @ Red Hat
Container security: Do containers actually contain? Should you care?
  • Dan Walsh – Senior Principal Software Engineer @ Red Hat
Container Security at Scale
  • Scott McCarty – Product Manager, Container Strategy @ Red Hat
Containers, Kubernetes, and GlusterFS, a match made in Tengoku
  • Luis Pabón – Principal Software Engineer @ Red Hat
  • Stephen Watt – Chief Architect, Big Data @ Red Hat
  • Jeff Vance – Principal Software Engineer @ Red Hat
Converged Storage in hybrid VM and Container deployments using Docker, Kubernetes, Atomic and OpenShift
  • Stephen Watt – Chief Architect, Big Data @ Red Hat
Deploying and Managing OpenShift on OpenStack with Ansible and Heat
  • Diane Mueller – Director Community Development, OpenShift @ Red Hat
  • Greg DeKoenigsberg –  Vice President, Community @ Ansible
  • Veer Michandi – Senior Solution Architect @ Red Hat
  • Ken Thompson – Senior Cloud Solution Architect @ Red Hat
  • Tomas Sedovic – Senior Software Engineer @ Red Hat
Deploying containerized applications across the Open Hybrid Cloud using Docker and the Nulecule spec
  • Tushar Katarki – Integration Architect @ Red Hat
  • Aaron Weitekamp – Senior Software Engineer @ Red Hat
Deploying Docker and Kubernetes with Heat and Atomic
  • Steve Gordon – Senior Technical Product Manager, OpenStack @ Red Hat
Develop, Deploy, and Manage Applications at Scale on an OpenStack based private cloud
  • James Labocki – Product Owner, CloudForms @ Red Hat
  • Brett Thurber – Principal Software Engineer @ Red Hat
  • Scott Collier – Senior Principal Software Engineer @ Red Hat
How to Train Your Admin
  • Aleksandr Brezhnev – Senior Principal Solution Architect @ Red Hat
  • Patrick Rutledge – Principal Solution Architect @ Red Hat
Minimizing or eliminating service outages via robust application life-cycle management with container technologies
  • Tushar Katarki – Integration Architect @ Red Hat
  • Aaron Weitekamp – Senior Software Engineer @ Red Hat
OpenStack and Containers Advanced Management
  • Federico Simoncelli – Principal Software Engineer @ Red Hat
OpenStack & The Future of the Containerized OS
  • Daniel Riek – Senior Director, Systems Design & Engineering @ Red Hat
Operating Enterprise Applications in Docker Containers with Kubernetes and Atomic Enterprise
  • Mike McGrath – Senior Principal Architect, Atomic @ Red Hat
Present & Future-proofing your datacenter with SDS & OpenStack Manila
  • Luis Pabón – Principal Software Engineer @ Red Hat
  • Sean Murphy – Product Manager, Red Hat Storage @ Red Hat
  • Sean Cohen – Principal Product Manager, OpenStack @ Red Hat
Scale or Fail – Scaling applications with Docker, Kubernetes, OpenShift, and OpenStack
  • Grant Shipley – Senior Manager @ Red Hat
  • Diane Mueller – Director Community Development, OpenShift @ Red Hat

Thanks for taking the time to help shape the next OpenStack summit!

Voting Open for OpenStack Summit Tokyo Submissions: Deployment, management and metering/monitoring

by Keith Basil, Principal Product Manager, Red Hat — July 28, 2015

Another cycle, another OpenStack Summit, this time on October 27-30 in Tokyo. The Summit is the best opportunity for the community to gather and share knowledge, stories, and strategies to move OpenStack forward. With more than 200 breakout sessions, hands-on workshops, collaborative design sessions, tons of opportunities for networking, and perhaps even some sightseeing, the Summit is the event everyone working, or planning to work, with OpenStack should attend.

Critical subjects, awesome sessions

To fill those 200+ session slots, the community proposes talks that are selected by your vote, and we would like to showcase our proposed sessions on some of the most critical subjects of an OpenStack cloud: deployment, management, and metering/monitoring.

There are multiple ways to deploy, manage, and monitor clouds, but we would like to present our contributions to the topic, sharing both code and vision to tackle this subject now and in the future. With sessions about TripleO, Heat, Ironic, Puppet, Ceilometer, Gnocchi, and troubleshooting, we'll cover the whole lifecycle of OpenStack, from planning a deployment, to actually executing it, to monitoring and maintaining it over the long term. Click on the links below to read the abstracts and vote for the topics you want to see in Tokyo.

Deployment and Management

OpenStack on OpenStack (TripleO): First They Ignore You..
  • Dan Sneddon – Principal OpenStack Engineer @ Red Hat
  • Keith Basil – Principal Product Manager, OpenStack Platform @ Red Hat
  • Dan Prince – Principal Software Engineer @ Red Hat
Installers are dead, deploying our bits is a continuous process
  • Nick Barcet – Director of OpenStack Product Management @ Red Hat
  • Keith Basil – Principal Product Manager, OpenStack Platform @ Red Hat
TripleO: Beyond the Basic Openstack Deployment
  • Steven Hillman – Software Engineer @ Cisco Systems
  • Shiva Prasad Rao – Software Engineer @ Cisco Systems
  • Sourabh Patwardhan – Technical Leader @ Cisco Systems
  • Saksham Varma – Software Engineer @ Cisco Systems
  • Jason Dobies – Principal Software Engineer @ Red Hat
  • Mike Burns – Senior Software Engineer @ Red Hat
  • Mike Orazi – Manager, Software Engineering @ Red Hat
  • John Trowbridge – Software Engineer, Red Hat @ Red Hat
Troubleshoot Your Next Open Source Deployment
  • Lysander David – IT Infrastructure Architect @ Symantec
Advantages and Challenges of Deploying OpenStack with Puppet
  • Colleen Murphy – Cloud Software Engineer @ HP
  • Emilien Macchi – Senior Software Engineer @ Red Hat
Cloud Automation: Deploying and Managing OpenStack with Heat
  • Snehangshu Karmakar – Cloud Curriculum Manager @ Red Hat
Hands-on lab: Deploying Red Hat Enterprise Linux OpenStack Platform
  • Adolfo Vazquez – Curriculum Manager @ Red Hat
TripleO and Heat for Operators: Bringing the values of Openstack to Openstack Management
  • Graeme Gillies – Principal Systems Administrator @ Red Hat
The omniscient cloud: How to know all the things with bare-metal inspection for Ironic
  • Dmitry Tantsur – Software Engineer @ Red Hat
  • John Trowbridge – Software Engineer @ Red Hat
Troubleshooting A Highly Available Openstack Deployment.
  • Sadique Puthen – Principal Technical Support Engineer @ Red Hat
Tuning HA OpenStack Deployments to Maximize Hardware Capabilities
  • Vinny Valdez – Sr. Principal Cloud Architect @ Red Hat
  • Ryan O’Hara – Principal Software Engineer @ Red Hat
  • Dan Radez – Sr. Software Engineer @ Red Hat
OpenStack for Architects
  • Michael Solberg – Chief Field Architect @ Red Hat
  • Brent Holden – Chief Field Architect @ Red Hat
A Day in the Life of an Openstack & Cloud Architect
  • Vijay Chebolu – Practice Lead @ Red Hat
  • Vinny Valdez – Sr. Principal Cloud Architect @ Red Hat
Cinder Always On! Reliability and scalability – Liberty and beyond
  • Michał Dulko – Software Engineer @ Intel
  • Szymon Wróblewski – Software Engineer @ Intel
  • Gorka Eguileor – Senior Software Engineer @ Red Hat

Metering and Monitoring

Storing metrics at scale with Gnocchi, triggering with Aodh
  • Julien Danjou – Principal Software Engineer @ Red Hat

Voting Open for OpenStack Summit Tokyo Submissions: Storage Spotlight

by Sean Cohen, Principal Technical Product Manager, Red Hat —

The OpenStack Summit, taking place October 27-30 in Tokyo, will be a five-day conference for OpenStack contributors, enterprise users, service providers, application developers, and ecosystem members. Attendees can expect visionary keynote speakers, 200+ breakout sessions, hands-on workshops, collaborative design sessions, and lots of networking. In keeping with the open source spirit, you are in the front seat to cast your vote for the sessions that are important to you!

Today we will take a peek at some recommended storage-related session proposals for the Tokyo summit; be sure to vote for your favorites! To vote, click on the session title below and you will be directed to the voting page. If you are a member of the OpenStack Foundation, just log in. If you are not, you are welcome to join now – it is simple and free.

Please make sure to vote before the deadline on Thursday, July 30 2015, at 11:59pm PDT.

Block Storage

OpenStack Storage State of the Union
  • Sean Cohen, Principal Product Manager @ Red Hat
  • Flavio Percoco, Senior Software Engineer @ Red Hat
  • Jon Bernard, Senior Software Engineer @ Red Hat
Ceph and OpenStack: current integration and roadmap
  • Josh Durgin, Senior Software Engineer @ Red Hat
  • Sébastien Han, Senior Cloud Architect @ Red Hat
State of Multi-Site Storage in OpenStack
  • Sean Cohen, Principal Product Manager @ Red Hat
  • Neil Levine, Director of Product Management @ Red Hat
  • Sébastien Han, Senior Cloud Architect @ Red Hat
Block Storage Replication with Cinder
  • John Griffith, Principal Software Engineer @ SolidFire
  • Ed Balduf, Cloud Architect @ SolidFire
Sleep Easy with Automated Cinder Volume Backup
  • Lin Yang, Senior Software Engineer @ Intel
  • Lisa Li, Software Engineer @ Intel
  • Yuting Wu, Engineer @ Awcloud
Flash Storage and Faster Networking Accelerate Ceph Performance
  • John Kim, Director of Storage Marketing @ Mellanox Technologies
  • Ross Turk, Director of Product Marketing @ Red Hat Storage

File Storage

Manila – An Update from Liberty
  • Sean Cohen, Principal Product Manager @ Red Hat
  • Akshai Parthasarathy, Technical Marketing Engineer @ NetApp
  • Thomas Bechtold, OpenStack Cloud Engineer @ SUSE
Manila and Sahara: Crossing the Desert to the Big Data Oasis
  • Ethan Gafford, Senior Software Engineer @ Red Hat
  • Jeff Applewhite, Technical Marketing Engineer @ NetApp
  • Weiting Chen, Software Engineer @ Intel
GlusterFS making things awesome for Swift, Sahara, and Manila.
  • Luis Pabón, Principal Software Engineer @ Red Hat
  • Thiago da Silva, Senior Software Engineer @ Red Hat
  • Trevor McKay, Senior Software Engineer @ Red Hat

Object Storage

Benchmarking OpenStack Swift
  • Thiago da Silva, Senior Software Engineer @ Red Hat
  • Christian Schwede, Principal Software Engineer @ Red Hat
Truly durable backups with OpenStack Swift
  • Christian Schwede, Principal Software Engineer @ Red Hat
Encrypting Data at Rest: Let’s Explore the Missing Piece of the Puzzle
  • Dave McCowan, Technical Leader, OpenStack @ Cisco
  • Arvind Tiwari, Technical Leader @ Cisco


DevOps, Continuous Integration, and Continuous Delivery

by Maxime Payant-Chartier, Technical Product Manager, Red Hat —

As we all turn our eyes towards Tokyo for the next OpenStack Summit, the time has come to make your voice heard as to which talks you would like to attend while you are there. Remember, even if you are not attending the live event, many sessions are recorded and can be viewed later, so make your voice heard and influence the content!

Let me suggest a couple talks under the theme of DevOps, Continuous Integration, and Continuous Delivery – remember to vote for your favorites by midnight Pacific Standard Time on July 30th and we will see you in Tokyo!

Continuous integration is an important topic; we can see this through the amount of effort expended by the OpenStack CI team. OpenStack deployments all over the globe cover a wide range of use cases (NFV, hosting, extra services, advanced data storage, etc.). Most of them come with their own technical specificities, including hardware, uncommon configurations, network devices, etc.

This makes these OpenStack installations unique and hard to test. If we want them to fit properly into the CI process, we need new methodologies and tooling.

Rapid innovation, changing business landscapes, and new IT demands force businesses to make changes quickly. The DevOps approach is a way to increase business agility through collaboration, communication, and integration across different teams in the IT organization.

In this talk we'll give you an overview of a platform called Software Factory that we develop and use at Red Hat. It is an open source platform inspired by the OpenStack development workflow that embeds, among other tools, Gerrit, Zuul, and Jenkins. The platform can be easily installed on an OpenStack cloud thanks to Heat, and can rely on OpenStack to perform CI/CD of your applications.

One of the best success stories to come out of OpenStack is the Infrastructure project. It encompasses all of the systems used in the day-to-day operation of the OpenStack project as a whole. More and more other projects and companies are seeing the value of the OpenStack git workflow model and are now running their own versions of OpenStack continuous integration (CI) infrastructure. In this session, you’ll learn the benefits of running your own CI project, how to accomplish it, and best practices for staying abreast of upstream changes.

The need to provide better quality while keeping up with the growing number of projects and features led Red Hat to adapt its processes. Moving from a three-team process (Product Management, Engineering, and QA) to a feature-team approach, with each team embedding all the actors of the delivery process, was one of the approaches we took, and one we are progressively spreading.

We deliver a very large number of components that need to be engineered together to deliver their full value, and which require delicate assembly as they work together as a distributed system. How can we do this in a time box without giving up on quality?

Learn how to get a Vagrant environment running as quickly as possible, so that you can start iterating on your project right away.

I’ll show you an upstream project called Oh-My-Vagrant that does the work and adds all the tweaks to glue different Vagrant providers together perfectly.

This talk will include live demos of building Docker containers, orchestrating them with Kubernetes, adding in some Puppet, and gluing it all together with Vagrant and Oh-My-Vagrant. Getting familiar with these technologies will help when you're automating OpenStack clusters.

In the age of service, core builds become a product in the software supply chain. Core builds shift from a highly customized stack that meets ISV software requirements to an image that provides a set of features. IT organizations shift to become product-driven organizations.

This talk will dive into the necessary organizational changes and tool changes to provide a core build in the age of service and service contracts.



We will start with a really brief introduction to the OpenStack services we will use to build our app. We'll cover all of the different ways you can control an OpenStack cloud: a web user interface, the command line interface, a software development kit (SDK), and the application programming interface (API).

After this brief introduction to the tools we are going to use in our hands-on lab, we'll get our hands dirty and build an application that makes use of an OpenStack cloud.

This application will utilize a number of OpenStack services via an SDK to get its work done. The app will demonstrate how OpenStack services can be used as a base to create a working application.

Voting Open for OpenStack Summit Tokyo Submissions: Networking, Telco, and NFV

by Nir Yechiel

The next OpenStack Summit is just around the corner, October 27-30, in Tokyo, Japan, and we would like your help shaping the agenda. The OpenStack Foundation manages voting by allowing its members to choose the topics and presentations they would like to see.

Virtual networking and software-defined networking (SDN) have become increasingly exciting topics in recent years, and a great focus for us at Red Hat. They also lay the foundation for network functions virtualization (NFV) and the recent innovation in the telecommunications service provider space.

Here you can find networking- and NFV-related session proposals from Red Hat and our partners. To vote, click on the session title below and you will be directed to the voting page. If you are a member of the OpenStack Foundation, just log in. If you are not, you are welcome to join now – it is simple and free.

Please make sure to vote before the deadline on Thursday, July 30 2015, at 11:59pm PDT.

OpenStack Networking (Neutron)

OpenStack Networking (Neutron) 101
  • Nir Yechiel – Senior Technical Product Manager @ Red Hat
Almost everything you need to know about provider networks
  • Sadique Puthen – Principal Technical Support Engineer @ Red Hat
Why does the lion’s share of time and effort goes to troubleshooting Neutron?
  • Sadique Puthen – Principal Technical Support Engineer @ Red Hat
Neutron Deep Dive – Hands On Lab
  • Rhys Oxenham – Principal Product Manager @ Red Hat
  • Vinny Valdez – Senior Principal Cloud Architect @ Red Hat
L3 HA, DVR, L2 Population… Oh My!
  • Assaf Muller – Senior Software Engineer @ Red Hat
  • Nir Yechiel – Senior Technical Product Manager @ Red Hat
QoS – a Neutron n00bie
  • Livnat Peer – Senior Engineering Manager @ Red Hat
  • Moshe Levi – Senior Software Engineer @ Mellanox
  • Irena Berezovsky – Senior Architect @ Midokura
Clusters, Routers, Agents and Networks: High Availability in Neutron
  • Florian Haas – Principal Consultant @ hastexo!
  • Livnat Peer – Senior Engineering Manager @ Red Hat
  • Adam Spiers – Senior Software Engineer @ SUSE

Deploying networking (TripleO)

TripleO Network Architecture Deep-Dive and What’s New
  • Dan Sneddon – Principal OpenStack Engineer @ Red Hat

Telco and NFV

Telco OpenStack Cloud Deployment with Red Hat and Big Switch
  • Paul Lancaster – Strategic Partner Development Manager @ Red Hat
  • Prashant Gandhi – VP Products & Strategy @ Big Switch
OpenStack NFV Cloud Edge Computing for One Cloud
  • Hyde Sugiyama – Senior Principal Technologist @ Red Hat
  • Timo Jokiaho – Senior Principal Technologist @ Red Hat
  • Zhang Xiao Guang – Cloud Project Manager @ China Mobile
Rethinking High Availability for Telcos in the new world of Network Functions Virtualization (NFV)
  • Jonathan Gershater – Senior Principal Product Marketing Manager @ Red Hat

Performance and accelerated data-plane

Adding low latency features in Openstack to address Cloud RAN Challenges
  • Sandro Mazziotta – Director NFV Product Management @ Red Hat
Driving in the fast lane: Enhancing OpenStack Instance Performance
  • Stephen Gordon – Senior Technical Product Manager @ Red Hat
  • Adrian Hoban – Principal Engineer, SDN/NFV Orchestration @ Intel
OpenStack at High Speed! Performance Analysis and Benchmarking
  • Roger Lopez – Principal Software Engineer @ Red Hat
  • Joe Talerico – Senior Performance Engineer @ Red Hat
Accelerate your cloud network with Open vSwitch (OVS) and the Data Plane Development Kit (DPDK)
  • Adrian Hoban – Principal Engineer, SDN/NFV Orchestration @ Intel
  • Seán Mooney  – Network Software Engineer @ Intel
  • Terry Wilson – Senior Software Engineer @ Red Hat

Voting Open for OpenStack Summit Tokyo Submissions: OpenStack for the Enterprise

by Steve Gordon, Sr. Technical Product Manager, Red Hat —

In the lead up to OpenStack Summit Hong Kong, the last OpenStack Summit held in the Asia-Pacific region, Radhesh Balakrishnan – General Manager for OpenStack at Red Hat – defined this site as the place to follow us on our journey taking community projects to enterprise products and solutions.

We are excited to now be preparing to head back to the Asia-Pacific region for OpenStack Summit Tokyo, October 27-30, to share just how far we have come on that journey, with a host of session proposals focusing on enterprise requirements and the success of OpenStack in this space. The OpenStack Foundation manages voting by allowing its members to choose the topics and presentations they would like to see.

To vote, click on the session title below and you will be directed to the voting page. If you are a member of the OpenStack Foundation, just log in. If you are not, you are welcome to join now – it is simple and free.

Vote for your favorites by midnight Pacific Standard Time on July 30th and we will see you in Tokyo!

Is OpenStack ready for the enterprise? Is the enterprise ready for OpenStack?

Can I use OpenStack to build an enterprise cloud?
  • Alessandro Perilli – General Manager, Cloud Management Strategies @ Red Hat
Elephant in the Room: What’s the TCO for an OpenStack cloud?
  • Massimo Ferrari – Director, Cloud Management Strategy @ Red Hat
  • Erich Morisse – Director, Cloud Management Strategy @ Red Hat
The Journey to Enterprise Primetime
  • Arkady Kanevsky – Director of Development @ Dell
  • Das Kamhout – Principal Engineer @ Intel
  • Fabio Di Nitto – Manager, Software Engineering @ Red Hat
  • Nick Barcet – Director of OpenStack Product Management @ Red Hat
Organizing IT to Deliver OpenStack
  • Brent Holden – Chief Cloud Architect @ Red Hat
  • Michael Solberg – Chief Field Architect @ Red Hat
How Customers use OpenStack to deliver Business Applications
  • Matthias Pfützner – Cloud Solution Architect @ Red Hat
Stop thinking traditional infrastructure – Think Cloud! A recipe to build a successful cloud environment
  • Laurent Domb – Cloud Solution Architect @ Red Hat
  • Narendra Narang – Cloud Storage Solution Architect @ Red Hat
Breaking the OpenStack Dream – OpenStack deployments with business goals in mind
  • Laurent Domb – Cloud Solution Architect @ Red Hat
  • Narendra Narang – Cloud Storage Solution Architect @ Red Hat

Enterprise Success Stories

OpenStack for robust and reliable enterprise private cloud: An analysis of current capabilities, gaps, and how they can be addressed.
  • Tushar Katarki – Integration Architect @ Red Hat
  • Rama Nishtala – Architect @ Cisco
  • Nick Gerasimatos – Senior Director of Cloud Services – Engineering @ FICO
  • Das Kamhout – Principal Engineer @ Intel
Verizon’s NFV Learnings
  • Bowen Ross – Global Account Manager @ Red Hat
  • David Harris – Manager, Network Element Evolution Planning @ Verizon
Cloud automation with Red Hat CloudForms: Migrating 1000+ servers from VMWare to OpenStack
  • Lan Chen – Senior Consultant @ Red Hat
  • Bill Helgeson – Principal Domain Architect @ Red Hat
  • Shawn Lower – Enterprise Architect @ Red Hat

Solutions for the Enterprise

RHCI: A comprehensive Solution for Private IaaS Clouds
  • Todd Sanders – Director of Engineering @ Red Hat
  • Jason Rist – Senior Software Engineer @ Red Hat
  • John Matthews – Senior Software Engineer @ Red Hat
  • Tzu-Mainn Chen – Senior Software Engineer @ Red Hat
Cisco UCS Integrated Infrastructure for Red Hat OpenStack
  • Guil Barros – Principal Product Manager, OpenStack @ Red Hat
  • Vish Jakka – Product Manager, UCS Solutions @ Cisco
Cisco UCS & Red Hat OpenStack: Upstream Partnership to Streamline OpenStack
  • Guil Barros – Principal Product Manager, OpenStack @ Red Hat
  • Vish Jakka – Product Manager, UCS Solutions @ Cisco
  • Arek Chylinski – Technologist @ Intel
Deploying and Integrating OpenShift on Dell’s OpenStack Cloud Reference Architecture
  • Judd Maltin – Systems Principal Engineer @ Dell
  • Diane Mueller – Director Community Development, OpenShift @ Red Hat
Scalable and Successful OpenStack Deployments on FlexPod
  • Muhammad Afzal – Architect, Engineering @ Cisco
  • Dave Cain – Reference Architect and Technical Marketing Engineer @ NetApp
Simplifying Openstack in the Enterprise with Cisco and Red Hat
  • Karthik Prabhakar – Global Cloud Technologist @ Red Hat
  • Duane DeCapite – Director of Product Management, OpenStack @ Cisco
It’s a team sport: building a hardened enterprise ecosystem
  • Hugo Rivero – Senior Manager, Ecosystem Technology Certification @ Red Hat
Dude, this isn’t where I parked my instance!?
  • Steve Gordon – Senior Technical Product Manager, OpenStack @ Red Hat
Libguestfs: the ultimate disk-image multi-tool
  • Luigi Toscano – Senior Quality Engineer @ Red Hat
  • Pino Toscano – Software Engineer @ Red Hat
Which Third party OpenStack Solutions should I use in my Cloud?
  • Rohan Kande – Senior Software Engineer @ Red Hat
  • Anshul Behl – Associate Quality Engineer @ Red Hat

Securing OpenStack for the Enterprise

Everything You Need to Know to Secure an OpenStack Cloud (but Were Afraid to Ask)
  • Jonathan Gershater – Senior Principal Product Marketing Manager @ Red Hat
  • Ted Brunell – Senior Solution Architect @ Red Hat
Towards a more Secure OpenStack Cloud
  • Paul Lancaster – Strategic Partner Development Manager @ Red Hat
  • Malini Bhandaru – Architect & Engineering Manager @ Intel
  • Dan Yocum – Senior Operations Manager @ Red Hat
Hands-on lab: configuring Keystone to trust your favorite OpenID Connect Provider.
  • Pedro Navarro Perez – Openstack Specialized Solution Architect @ Red Hat
  • Francesco Vollero – Openstack Specialized Solution Architect @ Red Hat
  • Pablo Sanchez – Openstack Specialized Solution Architect @ Red Hat
Securing OpenStack with Identity Management in Red Hat Enterprise Linux
  • Nathan Kinder – Software Engineering Manager @ Red Hat
Securing your Application Stacks on OpenStack
  • Jonathan Gershater – Senior Principal Product Marketing Manager @ Red Hat
  • Diane Mueller – Director, Community Development for OpenShift @ Red Hat

Celebrating Kubernetes 1.0 and the future of container management on OpenStack

by Steve Gordon, Sr. Technical Product Manager, Red Hat — July 24, 2015

This week, together with Google and others, we celebrated the launch of Kubernetes 1.0 at OSCON in Portland, as well as the launch of the Cloud Native Computing Foundation, or CNCF (https://cncf.io/), of which Red Hat, Google, and others are founding members. Kubernetes is an open source system for managing containerized applications, providing basic mechanisms for the deployment, maintenance, and scaling of applications. The project was originally created by Google and is now developed by a vibrant community of contributors, including Red Hat.

As a leading contributor to both Kubernetes and OpenStack it was also recently our great pleasure to welcome Google to the OpenStack Foundation. We look forward to continuing to work with Google and others on combining the container orchestration and management capabilities of Kubernetes with the infrastructure management capabilities of OpenStack.

Red Hat has invested heavily in Kubernetes since joining the project shortly after it was launched in June 2014, and is now the largest corporate contributor of code to the project other than Google itself. The recently announced release of Red Hat's platform-as-a-service offering, OpenShift v3, is built around Kubernetes as the framework for container orchestration and management.

As a founding member of the OpenStack Foundation, we have also been working on simplifying the task of deploying and managing container hosts – using Project Atomic – and configuring a Kubernetes cluster on top of OpenStack infrastructure using the Heat orchestration engine.

To that end, Red Hat engineering created the heat-kubernetes orchestration templates to help accelerate research and development into deeper integration between Kubernetes and the underlying OpenStack infrastructure. The templates continue to evolve to cover other aspects of container workload management, such as auto-scaling, and were recently demonstrated at Red Hat Summit.
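As a rough sketch of how such templates are consumed (parameter names are illustrative and differ between template versions), launching a cluster comes down to a single stack creation:

heat stack-create my-kube-cluster \
    -f kubecluster.yaml \
    -P ssh_key_name=my-key \
    -P external_network=public \
    -P number_of_minions=3

heat stack-show my-kube-cluster    # wait for CREATE_COMPLETE

Heat then takes care of the Atomic host instances, the networking, and the wiring that joins the minions to the Kubernetes master.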

The heat-kubernetes templates were also ultimately leveraged in bootstrapping the OpenStack Magnum project, which provides an OpenStack API for provisioning container clusters using underlying orchestration technologies including Kubernetes. The aim is to make containers first-class citizens within OpenStack, just like virtual machines and bare metal before them, with the ability to share tenant infrastructure resources (e.g. networking and storage) with other OpenStack-managed virtual machines, bare-metal hosts, and the containers running on them.

Providing this level of integration requires providing or expanding OpenStack implementations of existing Kubernetes plug-in points, as well as defining new plug-in APIs where necessary, while maintaining the technical independence of the solution. All this must be done while allowing application workloads to remain independent of the underlying infrastructure, allowing for true open hybrid cloud operation. Similarly, on the OpenStack side, additional work is required so that the infrastructure services are able to support the use cases presented by container-based workloads and remove redundancies between the application workloads and the underlying hardware, optimizing performance while still providing for secure operation.

[Diagram: Containers on OpenStack architecture]

Magnum and the OpenStack Containers Team provide a focal point to coordinate these research and development efforts across multiple upstream projects, as well as other projects within the OpenStack ecosystem itself, to achieve the goal of providing a rich container-based experience on OpenStack infrastructure.
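To make that concrete, the early Magnum CLI modeled clusters as "bays" built from a "baymodel". A hedged sketch of the workflow (flags, image, and key names are illustrative, not a reference):

magnum baymodel-create --name k8smodel \
    --image-id fedora-atomic \
    --keypair-id my-key \
    --external-network-id public \
    --coe kubernetes

magnum bay-create --name k8sbay --baymodel k8smodel --node-count 3

The --coe flag selects the container orchestration engine, which is exactly where Kubernetes plugs in underneath the Magnum API.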

As a leading contributor to both OpenStack and Kubernetes, we at Red Hat look forward to continuing to work on increased integration with both communities and our technology partners at Google as these exciting technologies for managing the "data centers of the future" converge.

Containerize OpenStack with Docker

by Jeff Jameson, Sr. Principal Product Marketing Manager, Red Hat — July 16, 2015

Written by: Ryan Hallisey

Today in the cloud space, a lot of the buzz in the market stems from Docker and support for launching containers on top of an existing platform. What is often overlooked, however, is the use of Docker to improve the deployment of the infrastructure platforms themselves; in other words, the ability to ship your cloud in containers.



Ian Main and I took hold of a project within the OpenStack community to address this unanswered question: Project Kolla. Being one of the founding members and core developers of the project, I figured we should start by using Kolla's containers to get this work off the ground. We began by deploying containers one by one in an attempt to get a functioning stack. Unfortunately, not all of Kolla's containers were in great shape, and they were being deployed by Kubernetes. First, we decided to get the containers working, and deal with how they are managed later. In the short term, we used a bash script to launch our containers, but it got messy: Kubernetes had been opening up ports to the host and declaring environment variables for the containers, and we needed to do the same. Eventually, we upgraded the design to use an environment file populated by a script, which proved to be more effective. This design was adopted by Kolla and is still being used today [1].

With our setup script intact, we started a hierarchical descent through the OpenStack services, starting with MariaDB, RabbitMQ, and Keystone. Kolla's containers were in great shape for these three services, and we were able to get them working relatively quickly. Glance was next, and it proved to be quite a challenge. We quickly learned that the Glance API and Keystone containers were causing one another to fail.



The culprit was that the Glance API and Keystone containers were racing to see which could create the admin user first. Oddly enough, these containers worked with Kubernetes; I then realized Kubernetes restarts containers until they succeed, avoiding the race condition we were seeing. To get around this, we made Glance and the rest of the services wait for Keystone to be active before they start. Later, we pushed this design into Kolla and learned that Docker has a restart flag that will force containers to restart if there is an error [2]. We added the restart flag to our design so that containers are independent of one another.
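For illustration, the restart policy amounts to a single flag on the run command (the container image name here is a placeholder):

docker run -d --restart=on-failure:10 \
    --env-file=openstack.env kollaglue/centos-rdo-glance-api

With a policy like this, Docker itself retries a service that exits with an error, which is what let us drop the Kubernetes-style external restart loop.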

The most challenging service to containerize was Nova. Nova presented a unique challenge not only because it was made up of the largest number of containers, but because it required the use of super-privileged containers. We started off using Kolla's containers, but quickly learned that many components were missing. Most significantly, the Nova Compute and Libvirt containers were not mounting the correct host directories, exposing us to one of the biggest hurdles when containerizing Nova: persisting data and making sure instances still exist after you kill the container. In order for that to work, Nova Compute and Libvirt needed to mount /var/lib/nova and /var/lib/libvirt from the host into the container. That way, the data for the instances is stored on the host and not in the container [3]:


echo Starting nova compute

docker run -d --privileged \
    --restart=always \
    -v /sys/fs/cgroup:/sys/fs/cgroup \
    -v /var/lib/nova:/var/lib/nova \
    -v /var/lib/libvirt:/var/lib/libvirt \
    -v /run:/run \
    -v /etc/libvirt/qemu:/etc/libvirt/qemu \
    --pid=host --net=host \
    --env-file=openstack.env kollaglue/centos-rdo-nova-compute-nova:latest


A second issue we encountered when trying to get the Nova Compute container working was that we were using an outdated version of Nova. The Nova Compute container was using Fedora 20 packages, while the other services were using Fedora 21. This was our first taste of having to do an upgrade using containers. To fix the problem, all we had to do was change where Docker pulled the packages from and rebuild the container, effectively a one-line change in the Dockerfile:

FROM fedora:20
MAINTAINER Kolla Project (https://launchpad.net/kolla)

becomes:

FROM fedora:21
MAINTAINER Kolla Project (https://launchpad.net/kolla)

OpenStack services have independent lifecycles, making it difficult to perform rolling upgrades and downgrades. Containers can bridge this gap by providing an easy way to handle upgrading and downgrading your stack.
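A hedged sketch of what that looks like in practice (service and tag names are illustrative): an upgrade is a pull of the new image plus a container swap, and a downgrade is the same dance with the previous tag:

docker pull kollaglue/centos-rdo-keystone:new
docker stop keystone && docker rm keystone
docker run -d --name keystone --env-file=openstack.env \
    kollaglue/centos-rdo-keystone:new

Because the instance data and configuration live outside the container, in host mounts and the environment file, the swap leaves the service's state intact.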

Once we completed our maintenance on the Kolla containers, we turned our focus to TripleO [4]. TripleO is a project in the OpenStack community that aims to install and manage OpenStack. The name TripleO means OpenStack On OpenStack: it deploys a so-called undercloud, and uses that OpenStack setup to deploy an overcloud, also known as the user cloud.

Our goal was to use the undercloud to deploy a containerized overcloud on bare metal. In our design, we chose to deploy our overcloud on top of Red Hat Enterprise Linux Atomic Host [5]. Atomic is a bare-bones Red Hat Enterprise Linux-based operating system that is designed to run containers. This was a perfect fit because it is a bare and simple environment with a nice set of tools for launching containers:


[heat-admin@t1-oy64mfeu2t3-0-zsjhaciqzvxs-controller-twdtywfbcxgh ~]$ atomic --help
Atomic Management Tool

positional arguments:
  host        execute Atomic host commands
  info        display label information about an image
  install     execute container image install method
  stop        execute container image stop method
  run         execute container image run method
  uninstall   execute container image uninstall method
  update      pull latest container image from repository

optional arguments:
  -h, --help  show this help message and exit


Next, we had help from Rabi Mishra in creating a Heat hook that would allow Heat to orchestrate container deployment. Since we were on Red Hat Enterprise Linux Atomic Host, the hook ran in a container and started the Heat agents, thus allowing Heat to communicate with Docker [6]. Now we had all the pieces we needed.

In order to integrate our container work with TripleO, it was best for us to copy Puppet's overcloud deployment implementation and apply our work to it. For our environment, we used devtest, the TripleO developer environment, and started to build a new Heat template. One of the biggest differences between using containers and Puppet was that Puppet required a lot of setup and configuration to make sure dependencies were resolved and services were properly configured. We didn't need any of that. With Puppet, the dependency list looked like this [7]:



44 packages later…



With Docker, we were able to replace all of that with:


atomic install kollaglue/centos-rdo-<service>


We were able to use a majority of the existing environment, but now starting services was significantly simplified.

Unfortunately, we were unable to get results for some time because we struggled to deploy a bare-metal Red Hat Enterprise Linux Atomic Host instance. After consulting Lucas Gomes on Red Hat's Ironic (bare-metal deployment service) team, we learned that there was an easier way to accomplish what we were trying to do. He pointed us in the direction of a new feature in Ironic that added support for full-image deployment [8]. Although there was a bug in Ironic when using the new feature, we fixed it and started to see our Red Hat Enterprise Linux Atomic Host running. Now that we were past this, we could finally create images and add users, but Nova Compute and Libvirt didn't work. The problem was that Red Hat Enterprise Linux Atomic Host wasn't loading the kernel modules for KVM. On top of that, Libvirt needed proper permission to access /dev/kvm and wasn't getting it.
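The module half of the fix is ordinary modprobe work on the host; a minimal sketch (the module name depends on the CPU vendor), with the permission half shown in the snippet below:

modprobe kvm
modprobe kvm_intel    # or kvm_amd on AMD hosts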




chmod 660 /dev/kvm
chown root:kvm /dev/kvm

echo "Starting libvirtd."
exec /usr/sbin/libvirtd


Upon fixing these issues, we could finally spawn instances. Later, these changes were adopted by Kolla because they represented a unique case that could cause Libvirt to fail [9].

To summarize, we created a containerized OpenStack solution inside of the TripleO installer project, using the containers from the Kolla project. We mirrored the TripleO workflow by using the undercloud (management cloud) to deploy most of the core services in the overcloud (user cloud), but now those services are containerized. The services we used were Keystone, Glance, and Nova, with services like Neutron, Cinder, and Heat soon to follow. Our new solution uses Heat (the orchestration service) to deploy the containerized OpenStack services onto Red Hat Enterprise Linux Atomic Host, and has the ability to plug right into the tripleo-heat-templates.

Normally, Puppet is used to deploy an overcloud, but now we've proven you can use containers. What's really unique about this is that you can now shop for your configuration in the Docker registry instead of having to go through Puppet to set up your services. This allows you to pull down a container where your services come with the configuration you need. Through our work, we have shown that containers are an alternative deployment method within TripleO that can simplify deployment and add choice about how your cloud is installed.

The benefits of using Docker for a regular application – reliability, portability, and easy lifecycle management – are the same when your cloud runs in containers. With containers, lifecycle management greatly improves on TripleO's existing solution. The process of upgrading and downgrading an OpenStack service becomes far simpler, creating faster turnaround times so that your cloud is always running the latest and greatest. Ultimately, this solution provides an additional method within TripleO to manage the cloud's upgrades and downgrades, supplementing the solution TripleO currently offers.

Overall, integrating with TripleO works really well because OpenStack provides powerful services to assist in container deployment and management. Specifically, TripleO is advantageous because of services like Ironic (the bare-metal provisioning service) and Heat (the orchestration service), which provide a strong management backbone for your cloud. Containers are also an integral piece of this system, as they provide a simple and granular way to perform lifecycle management for your cloud. From my work, it is clear that the cohesive relationship between containers and TripleO creates a new and improved avenue to deploy the cloud and get it working the way you see fit.

TripleO is a fantastic project, and with the integration of containers I'm hoping to energize and continue building the community around it. Using our integration as proof of the project's capabilities, we have shown that TripleO provides an excellent management infrastructure underneath your cloud that allows projects to be properly managed and to grow.


[1] https://github.com/stackforge/kolla/commit/dcb607d3690f78209afdf5868dc3158f2a5f4722
[2] https://docs.docker.com/reference/commandline/cli/#restart-policies
[3] https://github.com/stackforge/kolla/blob/master/docker/nova-compute/nova-compute-data/Dockerfile#L4-L5
[4] https://www.rdoproject.org/Deploying_RDO_using_Instack
[5] http://www.projectatomic.io/
[6] https://github.com/rabi/heat-templates/blob/boot-config-atomic/hot/software-config/heat-docker-agents/Dockerfile
[7] http://git.openstack.org/cgit/openstack/TripleO-puppet-elements/tree/elements/puppet-modules/source-repository-puppet-modules
[8] https://blueprints.launchpad.net/ironic/+spec/whole-disk-image-support
[9] https://github.com/stackforge/kolla/commit/08bd99a50fcc48539e69ff65334f8e22c4d25f6f

Survey: OpenStack users value portability, support, and complementary open source tools

by ghaff — June 8, 2015

75 percent of the respondents in a recent survey [1] conducted for Red Hat said that being able to move OpenStack workloads to different providers or platforms was important (ranked 4 or 5 out of 5), and a mere 5 percent said that this question was of least importance. This was just one of the answers that highlighted a general desire to avoid proprietary solutions and lock-in.

For example, a minority (47 percent) said that differentiated vendor-specific management and other tooling was important, while a full 75 percent said that support for complementary open source cloud management, operating system, and development tools was. With respect to management specifically, only 22 percent plan to use vendor-specific tools to manage their OpenStack environments. By contrast, a majority (51 percent) plan to use the tools built into OpenStack, in many cases complemented by open source configuration management (31 percent) and cloud management platforms (21 percent). It's worth noting, though, that 42 percent of those asked about OpenStack management tools said that they were unsure or undecided, indicating that there's still a lot of learning to go on with respect to cloud implementations in general.

This last point was reinforced by the fact that 68 percent said that the availability of training and services from the vendor to on-ramp their OpenStack project was important. (Red Hat offers a Certified System Administrator in Red Hat OpenStack certification, as well as a variety of solutions to build clouds through eNovance by Red Hat.) 45 percent also cited a lack of internal IT skills as a barrier to adopting OpenStack. Other aspects of commercial support were valued as well: for example, 60 percent said that hardware and software certifications are important, and a full 82 percent said that production-level technical support was.

Read the full post »

OPNFV Arno hits the streets

by Dave Neary, NFV/SDN Community Strategist, Red Hat — June 5, 2015

The first release of the OPNFV project, Arno, is now available. The release, named after the Italian river which flows through the city of Florence on its way to the Mediterranean Sea, is the result of significant industry collaboration, starting from the creation of the project in October 2014.

This first release establishes a strong foundation for us to work together to create a great platform for NFV. We have multiple hardware labs, running multiple deployments of OpenStack and OpenDaylight, all deployed with one-step, automated deployment tools. A set of automated tests validate that deployments are functional, and provide a framework for the addition of other tests in the future. Finally, we have a good shared understanding of the problem space, and have begun to engage with upstream projects like OpenDaylight and OpenStack to communicate requirements and propose feature additions to satisfy them.

A core value of OPNFV is “upstream first” – the idea that changes required to open source projects for NFV should happen within the communities of those projects. This is a core value for Red Hat too, and we have been happy to take a leadership role in coordinating the engagement of OPNFV members in projects like OpenDaylight and OpenStack. Red Hat engineers Tim Rozet and Dan Radez led the work of putting together one of the two deployment options for OPNFV Arno, the Foreman/Quickstack installer, based on CentOS, RDO, and OpenDaylight packages created by another Red Hat engineer, Daniel Farrell. We have been proud to play a significant part, with other members of the OPNFV community, in contributing to this important mission.

Read the full post »

Public vs Private, Amazon compared to OpenStack

by Jonathan Gershater — May 13, 2015

Public vs Private, Amazon Web Services EC2 compared to OpenStack®

How to choose a cloud platform and when to use both

The public vs private cloud debate is a path well trodden. While technologies and offerings abound, there is still confusion among organizations as to which platform best suits their needs. One of the key benefits of a cloud platform is the ability to spin up compute, networking, and storage quickly when users request these resources, and to decommission them just as quickly when no longer required. Among public cloud providers, Amazon holds a market share ahead of Google, Microsoft, and others. Among private cloud platforms, OpenStack® presents a viable alternative to Microsoft or VMware.

This article compares Amazon Web Services EC2 and OpenStack® as follows:

  • What technical features do the two platforms provide?
  • How do the business characteristics of the two platforms compare?
  • How do the costs compare?
  • How to decide which platform to use and how to use both

OpenStack® and Amazon Web Services (AWS) EC2 defined

From OpenStack.org: “OpenStack software controls large pools of compute, storage, and networking resources throughout a datacenter, managed through a dashboard or via the OpenStack API. OpenStack works with popular enterprise and open source technologies making it ideal for heterogeneous infrastructure.”

From AWS: “Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides resizable compute capacity in the cloud. It is designed to make web-scale cloud computing easier for developers.”

Technical comparison of OpenStack® and AWS EC2

The tables below name and briefly describe the features in OpenStack® and AWS.

Read the full post »

The Age of Cloud File Services

by Sean Cohen, Principal Technical Product Manager, Red Hat — May 11, 2015

The OpenStack Kilo upstream release, which became available on April 30, 2015, marks a significant milestone for Manila, the shared file system service project for OpenStack, with an increase in development capacity and extensive vendor adoption. The project was kicked off three years ago, entered incubation during 2014, and now moves to the front of the stage at this month’s OpenStack Vancouver Conference, with customer stories of Manila deployments in enterprise and telco environments.

The project was originally sponsored and accelerated by NetApp and Red Hat, and has established a very rich community that includes code contributions from companies such as EMC, Deutsche Telekom, HP, Hitachi, Huawei, IBM, Intel, Mirantis, and SUSE.

The momentum of cloud shared file services is not limited to the OpenStack open source world. In fact, last month at the AWS Summit in San Francisco, Amazon announced its new shared file storage for Amazon EC2, the Amazon Elastic File System, also known as EFS. This new storage service joins the existing AWS storage portfolio: Amazon Simple Storage Service (S3) for object storage, Amazon Elastic Block Store (EBS) for block storage, and Amazon Glacier for archival, cold storage.

Amazon EFS provides standard file system semantics and is based on NFSv4, which allows multiple EC2 instances to access a file system at the same time, providing a common data source for a wide variety of workloads and applications shared across thousands of instances. It is designed for a broad range of use cases, such as home directories, content repositories, development environments, and big data applications. Data uploaded to EFS is automatically replicated across availability zones, and because EFS file systems are SSD-based, there should be few latency- or throughput-related problems with the service. As a file-system-as-a-service offering, EFS allows users to create and configure file systems quickly, with no minimum fee or setup cost; customers pay only for the storage they use, with elastic capacity that automatically grows and shrinks as files are added and removed.
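
Because EFS exposes a standard NFSv4 endpoint, attaching it to an instance is an ordinary mount operation. A minimal sketch follows; the file system ID, region, and mount point are placeholders, not values from the announcement:

    # Install an NFS client (a RHEL-family guest is assumed here)
    sudo yum install -y nfs-utils
    # Create a mount point and mount the EFS endpoint over NFSv4
    # (file system ID and region below are placeholders)
    sudo mkdir -p /mnt/efs
    sudo mount -t nfs4 fs-12345678.efs.us-west-2.amazonaws.com:/ /mnt/efs

Any other instance in the same setup can mount the same endpoint, which is what gives EFS its shared, common-data-source character.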

Read the full post »

What’s Coming in OpenStack Networking for the Kilo Release

by Nir Yechiel

OpenStack Kilo, the 11th release of the open source project, was officially released in April, and now is a good time to review some of the changes we saw in the OpenStack Networking (Neutron) community during this cycle, as well as some of the key new networking features introduced in the project.

Scaling the Neutron development community

The Kilo cycle brought two major efforts meant to expand and scale the Neutron development community: core plugin decomposition and the advanced services split. These changes should not directly impact OpenStack users, but are expected to reduce the code footprint, improve feature velocity, and ultimately speed innovation. Let’s take a look at each individually:

Neutron core plugin decomposition

Neutron, by design, has a pluggable architecture that allows custom backend implementations of the Networking API. The plugin is a core piece of the deployment and acts as the “glue” between the logical API and the actual implementation. As the project evolved, more and more plugins were introduced, coming from open source projects and communities (such as Open vSwitch and OpenDaylight) as well as from various vendors in the networking industry (like Cisco, Nuage, Midokura, and others). At the beginning of the Kilo cycle, Neutron had dozens of plugins and drivers spanning core plugins, ML2 mechanism drivers, L3 service plugins, and L4-L7 service plugins for FWaaS, LBaaS, and VPNaaS, the majority of them included directly within the Neutron project repository. The amount of code to review across those drivers and plugins grew to the point where it no longer scaled. The expectation that core Neutron reviewers would review code they had no knowledge of, or could not test due to lack of a proper hardware or software setup, was not realistic. This also caused some frustration among the vendors themselves, who sometimes failed to get their plugin code merged on time.

Read the full post »

Driving in the Fast Lane – CPU Pinning and NUMA Topology Awareness in OpenStack Compute

by Steve Gordon, Sr. Technical Product Manager, Red Hat — May 5, 2015

The OpenStack Kilo release, extending upon efforts that commenced during the Juno cycle, includes a number of key enhancements aimed at improving guest performance. These enhancements allow OpenStack Compute (Nova) to have greater knowledge of compute host layout and as a result make smarter scheduling and placement decisions when launching instances. Administrators wishing to take advantage of these features can now create customized performance flavors to target specialized workloads including Network Function Virtualization (NFV) and High Performance Computing (HPC).

What is NUMA topology?

Historically, all memory on x86 systems was equally accessible to all CPUs in the system. This resulted in memory access times that were the same regardless of which CPU in the system was performing the operation and was referred to as Uniform Memory Access (UMA).

In modern multi-socket x86 systems, system memory is divided into zones (called cells or nodes) that are associated with particular CPUs. This type of division has been key to the increasing performance of modern systems as the focus has shifted from increasing clock speeds to adding more CPU sockets, cores, and, where available, threads. An interconnect bus provides connections between nodes, so that all CPUs can still access all memory. While the memory bandwidth of the interconnect is typically higher than that of an individual node, it can still be overwhelmed by concurrent cross-node traffic from many nodes. The end result is that while NUMA facilitates fast memory access for CPUs local to the memory being accessed, memory access for remote CPUs is slower.
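
As an illustration of how administrators can target these features, CPU pinning and NUMA placement in Kilo are requested through flavor extra specs. A minimal sketch follows; the flavor name and sizing are purely illustrative:

    # Create a flavor for pinned, NUMA-aware guests (name and sizes are illustrative)
    nova flavor-create m1.large.pinned auto 8192 80 4
    # Pin each guest vCPU to a dedicated host core
    nova flavor-key m1.large.pinned set hw:cpu_policy=dedicated
    # Confine the guest's CPUs and memory to a single host NUMA node
    nova flavor-key m1.large.pinned set hw:numa_nodes=1

Instances booted with such a flavor are then scheduled only onto hosts that can satisfy the requested topology.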

Read the full post »

Red Hat Enterprise Linux OpenStack Platform 6: SR-IOV Networking – Part II: Walking Through the Implementation

by Itzik Brown, QE Engineer focusing on OpenStack Neutron, Red Hat — April 29, 2015
and Nir Yechiel

In the previous blog post in this series we looked at what single root I/O virtualization (SR-IOV) networking is all about and we discussed why it is an important addition to Red Hat Enterprise Linux OpenStack Platform. In this second post we would like to provide a more detailed overview of the implementation, some thoughts on the current limitations, as well as what enhancements are being worked on in the OpenStack community.

Note: this post is not intended to provide a full end-to-end configuration guide. Customers with an active subscription are welcome to visit the official article covering SR-IOV networking in Red Hat Enterprise Linux OpenStack Platform 6 for a complete procedure.


Setting up the Environment

In our small test environment we used two physical nodes: one serves as a Compute node for hosting virtual machine (VM) instances, and the other serves as both the OpenStack Controller and Network node. Both nodes are running Red Hat Enterprise Linux 7.
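
To give a sense of what the full procedure involves, enabling SR-IOV touches both the Neutron and Nova configuration. The sketch below is illustrative only, not the complete procedure from the official article; the physical network name and NIC PCI address are placeholders:

    # /etc/neutron/plugins/ml2/ml2_conf.ini (Controller): add the SR-IOV mechanism driver
    [ml2]
    mechanism_drivers = openvswitch,sriovnicswitch

    # /etc/nova/nova.conf (Compute): whitelist the SR-IOV capable NIC
    # (PCI address and physical network name below are placeholders)
    pci_passthrough_whitelist = { "address": "0000:05:00.0", "physical_network": "physnet1" }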

Read the full post »

OpenStack Summit Vancouver: Agenda Confirms 40+ Red Hat Sessions

by Jeff Jameson, Sr. Principal Product Marketing Manager, Red Hat — April 2, 2015

As this spring’s OpenStack Summit in Vancouver approaches, the Foundation has posted the session agenda, outlining the final schedule of events. I am very pleased to report that Red Hat and eNovance have more than 40 approved sessions included in the week’s agenda, with a few more approved as joint partner sessions, and even a few more waiting as alternates.

This vote of confidence confirms that Red Hat and eNovance remain in sync with the topics, projects, and technologies the OpenStack community and customers are most interested in and concerned with.

Red Hat is also a headline sponsor in Vancouver this spring, along with Intel, SolidFire, and HP, and will have a dedicated keynote presentation in addition to the 40+ accepted sessions. To learn more about Red Hat’s accepted sessions, have a look at the details below. Be sure to visit us at the sessions below and at our booth (#H4). We look forward to seeing you in Vancouver in May!

For more details on each session, click on the title below:

Read the full post »

An ecosystem of integrated cloud products

by Jonathan Gershater — March 27, 2015

In my prior post, I described how OpenStack from Red Hat frees you to pursue your business with the peace of mind that your cloud is secure and stable. Red Hat has several products that enhance OpenStack to provide cloud management, virtualization, a developer platform, and scalable cloud storage.

Cloud Management with Red Hat CloudForms            

CloudForms contains three main components:

  • Insight – Inventory, Reporting, Metrics
  • Control – Eventing, Compliance, and State Management
  • Automate – Provisioning, Reconfiguration, Retirement, and Optimization

Read the full post »

An OpenStack Cloud that frees you to pursue your business

by Jonathan Gershater — March 26, 2015

As your IT evolves toward an open, cloud-enabled data center, you can take advantage of OpenStack’s benefits: broad industry support, vendor neutrality, and fast-paced innovation.

As you move into implementation, your requirements for an OpenStack solution share a familiar theme: enterprise-ready, fully supported, and seamlessly integrated products.

Can’t we just install and manage OpenStack ourselves?

OpenStack is an open source project and freely downloadable. To install and maintain OpenStack, you need to recruit and retain engineers trained in Python and other technologies. If you decide to go it alone, consider:

  1. How do you know OpenStack works with your hardware?
  2. Does OpenStack work with your guest instances?
  3. How do you manage and upgrade OpenStack?
  4. When you encounter problems, how will you solve them? Some examples:

Read the full post »

Co-Engineered Together: OpenStack Platform and Red Hat Enterprise Linux

by Arthur Berezin — March 23, 2015

OpenStack is not a software application that simply runs on top of any random Linux. OpenStack is tightly coupled to the operating system it runs on, and choosing the right Linux operating system, as well as the right OpenStack platform, is critical to providing a trusted, stable, and fully supported OpenStack environment.

OpenStack is an Infrastructure-as-a-Service cloud management platform: a set of software tools, written mostly in Python, to manage hosts at large scale and deliver an agile, cloud-like infrastructure environment in which multiple virtual machine instances, block volumes, and other infrastructure resources can be created and destroyed rapidly on demand.

Read the full post »

Red Hat Enterprise Linux OpenStack Platform 6: SR-IOV Networking – Part I: Understanding the Basics

by Nir Yechiel — March 5, 2015

Red Hat Enterprise Linux OpenStack Platform 6 introduces support for single root I/O virtualization (SR-IOV) networking. This is done through a new SR-IOV mechanism driver for the OpenStack Networking (Neutron) Modular Layer 2 (ML2) plugin, as well as necessary enhancements for PCI support in the Compute service (Nova).

In this blog post I would like to provide an overview of SR-IOV, and highlight why SR-IOV networking is an important addition to RHEL OpenStack Platform 6. We will also follow up with a second blog post going into the configuration details, describing the current implementation, and discussing some of the current known limitations and expected enhancements going forward.

Read the full post »

A Closer Look at RHEL OpenStack Platform 6

by Steve Gordon, Sr. Technical Product Manager, Red Hat — February 24, 2015

Last week we announced the release of Red Hat Enterprise Linux OpenStack Platform 6, the latest version of our cloud solution and a foundation for production-ready clouds. Built on Red Hat Enterprise Linux 7, this release is intended to provide a foundation for building OpenStack-powered clouds for advanced cloud users. Let’s take a deeper dive into some of the new features on offer!

IPv6 Networking Support

IPv6 is a critical part of the promise of the cloud. If you want to connect everything to the network, you had better plan for massive scale and have enough addresses to use. IPv6 is also increasingly important in the network functions virtualization (NFV) and telecommunications service provider space.

This release introduces support for IPv6 address assignment for tenant instances, including those connected to provider networks; while IPv4 is more straightforward when it comes to IP address assignment, IPv6 offers more flexibility and options to choose from. Both stateful and stateless DHCPv6 are supported, as well as the ability to use Stateless Address Autoconfiguration (SLAAC).
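
For example, a tenant subnet using SLAAC for both router advertisements and address assignment could be created along these lines; the network name and prefix below are placeholders:

    # Create an IPv6 subnet with SLAAC addressing on an existing tenant network
    # (network name and prefix are placeholders)
    neutron subnet-create --ip-version 6 \
        --ipv6-ra-mode slaac --ipv6-address-mode slaac \
        private-net 2001:db8:1234::/64

Swapping the two modes to dhcpv6-stateful or dhcpv6-stateless selects the corresponding DHCPv6 behavior instead.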

Read the full post »

Accelerating OpenStack adoption: Red Hat Enterprise Linux OpenStack Platform 6!

by Jeff Jameson, Sr. Principal Product Marketing Manager, Red Hat — February 19, 2015

On Tuesday, February 17th, we announced the general availability of Red Hat Enterprise Linux OpenStack Platform 6, the fourth release of Red Hat’s commercial OpenStack offering.

Based on the community OpenStack “Juno” release and co-engineered with Red Hat Enterprise Linux 7, the enterprise-hardened Version 6 is aimed at accelerating the adoption of OpenStack among enterprise businesses, telecommunications companies, Internet service providers (ISPs), and public cloud hosting providers.

Since the first version, released in July 2013, the “design principles” of the Red Hat Enterprise Linux OpenStack Platform product offering have been:

Read the full post »

Red Hat Enterprise Virtualization 3.5 transforms modern data centers that are built on open standards

by Raissa Tona, Principal Product Marketing Manager, Red Hat — February 13, 2015

This week we announced the general availability of Red Hat Enterprise Virtualization 3.5. Red Hat Enterprise Virtualization 3.5 allows organizations to deploy an IT infrastructure that services traditional virtualization workloads while building a solid base for modern IT technologies.

Because of its open standards roots, Red Hat Enterprise Virtualization 3.5 enables IT organizations to more rapidly deliver and deploy transformative and flexible technology services in three ways:

  • Deep integration with Red Hat Enterprise Linux
  • Delivery of standardized services for mission critical workloads
  • Foundation for forward-looking, innovative, and highly flexible cloud-enabled workloads built on OpenStack

Deep integration with Red Hat Enterprise Linux

Red Hat Enterprise Virtualization 3.5 is co-engineered with Red Hat Enterprise Linux including the latest version, Red Hat Enterprise Linux 7, which is built to meet modern data center and next-generation IT requirements. Due to this tight integration, Red Hat Enterprise Virtualization 3.5 inherits the innovation capabilities of the world’s leading enterprise Linux platform.

Read the full post »

