Public vs Private, Amazon compared to OpenStack

by Jonathan Gershater — May 13, 2015

Public vs Private, Amazon Web Services EC2 compared to OpenStack®

How to choose a cloud platform and when to use both

The public vs private cloud debate is a path well trodden. While technologies and offerings abound, there is still confusion among organizations as to which platform is suited for their agile needs. One of the key benefits to a cloud platform is the ability to spin up compute, networking and storage quickly when users request these resources and similarly decommission when no longer required. Among public cloud providers, Amazon has a market share ahead of Google, Microsoft and others. Among private cloud providers, OpenStack® presents a viable alternative to Microsoft or VMware.

This article compares Amazon Web Services EC2 and OpenStack® as follows:

  • What technical features do the two platforms provide?
  • How do the business characteristics of the two platforms compare?
  • How do the costs compare?
  • How to decide which platform to use and how to use both

OpenStack® and Amazon Web Services (AWS) EC2 defined

From the OpenStack website: “OpenStack software controls large pools of compute, storage, and networking resources throughout a datacenter, managed through a dashboard or via the OpenStack API. OpenStack works with popular enterprise and open source technologies, making it ideal for heterogeneous infrastructure.”

From AWS: “Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides resizable compute capacity in the cloud. It is designed to make web-scale cloud computing easier for developers.”

Technical comparison of OpenStack® and AWS EC2

The sections below name and briefly describe each feature in OpenStack® and in AWS EC2.

Compute: virtual machines/servers

Why you need it? To run an application you need a server with CPU, memory and storage, with or without pre-installed operating systems and applications.

Instance sizing: how much memory, CPU and temporary (ephemeral) storage is assigned to the instance/VM.

  • OpenStack: flavors, in a variety of sizes: micro, small, medium, large, etc.
  • AWS: instance types, in a variety of sizes: micro, small, medium, large, etc.
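As a rough illustration of how flavors/instance types work, here is a sketch of picking the smallest size that fits a workload. The flavor names and specifications below are invented placeholders, not actual OpenStack or AWS values:

```python
# Hypothetical flavor catalog: name -> (vCPUs, RAM in MiB, ephemeral disk in GiB).
# Real flavors come from the cloud itself (e.g. via the compute API).
FLAVORS = {
    "m1.micro":  (1, 512,   1),
    "m1.small":  (1, 2048, 20),
    "m1.medium": (2, 4096, 40),
    "m1.large":  (4, 8192, 80),
}

def pick_flavor(vcpus, ram_mib, disk_gib):
    """Return the smallest flavor that satisfies the request."""
    candidates = [
        (spec, name) for name, spec in FLAVORS.items()
        if spec[0] >= vcpus and spec[1] >= ram_mib and spec[2] >= disk_gib
    ]
    if not candidates:
        raise ValueError("no flavor large enough")
    return min(candidates)[1]  # the smallest qualifying spec tuple wins

print(pick_flavor(2, 3000, 10))  # -> m1.medium
```

The point of the fixed catalog is that users request a named size rather than arbitrary CPU/memory combinations, which keeps capacity planning tractable for the cloud operator.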

Operating systems offered

Why you need it? What operating systems does the cloud offer to end users?

  • OpenStack: whatever operating systems the cloud administrators host on the OpenStack cloud. (Red Hat certifies Microsoft Windows, RHEL and SUSE.)
  • AWS: AMIs provided by the AWS Marketplace.

Images: a base configuration of a virtual machine, from which other virtual machines can be created. Catalogs of virtual machine images can be created, from which users select a virtual machine.

  • OpenStack: administrators upload images and create catalogs for users. Users can also upload their own images.
  • AWS (Amazon Machine Image, AMI): AWS provides an online marketplace of pre-defined images. Users can also upload their own images.


Networking

Why you need it? To network virtual servers to each other. You also need to control who can access a server, and to protect/firewall it, especially if it is exposed to the Internet.

Networking provides connectivity for users to virtual machines, and connects virtual machines to one another and to external networks (the Internet).

Private IP address: internal only and not routable to the Internet.

  • OpenStack: every virtual instance is automatically assigned a private IP address, typically using DHCP.
  • AWS: allocates a private IP address for the instance using DHCP.

Public IP address

  • OpenStack: a floating IP is a public IP address that you can dynamically add to a running virtual instance.
  • AWS: a public IP address is mapped to the primary private IP address of the instance.

Networking service

  • OpenStack: you can create networks and networking functions, e.g. L3 forwarding, NAT, edge firewalls, and IPsec VPN.
  • AWS: virtual routers or switches can be added if you use AWS VPC (Virtual Private Cloud).

Load balance VM traffic

  • OpenStack: LBaaS (Load Balancing as a Service) balances traffic from one network to application services.
  • AWS: ELB (Elastic Load Balancing) automatically distributes incoming application traffic across Amazon EC2 instances.

DNS: manage the DNS entries for your virtual servers and web applications.

  • OpenStack: the DNS project (Designate) is in “incubation” and is not part of core OpenStack (as of the April 2015 Kilo release).
  • AWS: Route 53, AWS’s DNS service.

SR-IOV: a method of device virtualization that provides higher I/O performance and lower CPU utilization compared to traditional implementations.

  • OpenStack: each SR-IOV port is associated with a virtual function (VF). SR-IOV ports may be provided by hardware-based virtual Ethernet bridging, or they may be extended to an upstream physical switch (IEEE 802.1br).
  • AWS: supports enhanced networking using SR-IOV, providing higher packet-per-second (PPS) performance, lower inter-instance latencies, and very low network jitter.


Monitoring

Why you need it? You get insight into usage patterns and utilization of the physical and virtual resources. You may want to account for individual usage and optionally bill users for their usage.

Monitoring provides system-wide metering and usage data for the cloud, with the option to bill users for their usage.

  • OpenStack (Ceilometer): collects measurements of the utilization of the physical and virtual resources comprising deployed clouds, persists the data for subsequent retrieval and analysis, and triggers actions when defined criteria are met.
  • AWS (CloudWatch): a monitoring service for AWS cloud resources and the applications running on AWS; collects and tracks metrics, collects and monitors log files, and sets alarms.
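A toy sketch of the alarm pattern both telemetry services use, triggering only when a metric breaches a threshold for several consecutive samples (all names and numbers below are invented):

```python
def evaluate_alarm(samples, threshold, periods):
    """Fire when the last `periods` samples all exceed `threshold`,
    the consecutive-breach rule typical of metric alarms."""
    recent = samples[-periods:]
    return len(recent) == periods and all(s > threshold for s in recent)

cpu = [42.0, 55.3, 81.2, 90.5, 87.1]   # CPU utilization %, oldest first
print(evaluate_alarm(cpu, 80.0, 3))    # three breaches in a row -> True
```

Requiring consecutive breaches is what keeps a single noisy sample from paging anyone.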


Security: control access to your virtual machines

Why you need it? You need the option of public-key cryptography for SSH login and password decryption. You want to firewall virtual machines to allow only certain traffic in (ingress) or out (egress).

Both platforms control access to virtual machines with key pairs and security groups.

Key pairs: to log in to your VM or instance, you must create a key pair (on Linux, used for SSH; on Windows, used to decrypt the Administrator password).

  • OpenStack: when you launch a virtual machine, you can inject a key pair, which provides SSH access to your instance.
  • AWS: specify the name of the key pair when you launch the instance, and provide the private key when you connect to the instance.

Security groups: assign and control access to VM instances. A security group is a named collection of network access rules that limit the traffic that can reach an instance. When you launch an instance, you can assign one or more security groups to it.




Identity

Why you need it? You want to govern who can access your cloud and manage permissions to cloud resources. You may want to offer multi-factor authentication for stronger security.

Authentication and authorization methods control access to virtual servers, storage and other resources in the cloud.

  • OpenStack (Keystone): integrates with an external provider, for example LDAP or Active Directory.
  • AWS: IAM (Identity and Access Management).


Storage

Why you need it?

Block storage:

  • Assign virtual drives/volumes to virtual servers to grow their storage capacity beyond the boot volume.
  • Take snapshots and backups of virtual servers.

Object storage:

  • Store objects such as files, media and images.

Object storage: store files such as media, documents and images.

  • OpenStack: the object storage service (Swift).
  • AWS: S3 (Simple Storage Service).

Block storage: create virtual disk drives (volumes).

  • OpenStack: the block storage service (Cinder).
  • AWS: EBS (Elastic Block Store).


Database services

Why you need it? Your cloud users can use a database service without installing and configuring their own database.

Relational database

  • OpenStack (Trove): MySQL, PostgreSQL.
  • AWS (RDS): users get an instance of MySQL or Oracle 11g.

Non-relational database

  • OpenStack: Cassandra, Couchbase, MongoDB.
  • AWS: Amazon SimpleDB, where users store name/value pairs in a simple database suitable for read-heavy applications.


Orchestration

Why you need it? It allows repeatable copies of an application to be made.

Developers store the requirements of a cloud application in a file or template that defines the resources (virtual machines, networks, storage, security, templates, images, etc.) necessary for the application to run.

  • OpenStack: the orchestration service (Heat).
  • AWS: CloudFormation.
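Both template formats are declarative documents of named resources. A schematic sketch of the idea in Python follows; the resource types and structure here are invented for illustration and are not a valid Heat or CloudFormation schema:

```python
# A toy "infrastructure as a template" document: resources declared by name,
# each with a type and properties, so a stack can be recreated repeatedly.
template = {
    "description": "One web server behind a security group",
    "resources": {
        "web_secgroup": {
            "type": "security_group",
            "properties": {"ingress": [{"protocol": "tcp", "port": 443}]},
        },
        "web_server": {
            "type": "server",
            "properties": {"flavor": "m1.small", "image": "rhel-7",
                           "security_groups": ["web_secgroup"]},
        },
    },
}

def referenced_groups_exist(tmpl):
    """Cheap sanity check: every security group a server references is declared."""
    declared = {n for n, r in tmpl["resources"].items()
                if r["type"] == "security_group"}
    return all(
        g in declared
        for r in tmpl["resources"].values() if r["type"] == "server"
        for g in r["properties"].get("security_groups", [])
    )

print(referenced_groups_exist(template))  # True
```

Because the whole application is described by data rather than by manual steps, the orchestration engine can validate references like this before it provisions anything.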

Big data / parallel processing

Why you need it? The cloud can provide the infrastructure for you to perform large-scale data processing.

  • OpenStack (Sahara): allows you to perform large-scale parallel processing of data, for example with Hadoop.
  • AWS: EMR (Elastic MapReduce).





Message queuing

Why you need it? The cloud can buffer and move data between applications and VMs/instances on a hosted queue.

  • OpenStack: not released yet.
  • AWS: SQS (Simple Queue Service).

Graphical user interface (GUI) dashboard

Why you need it? You can administer your cloud, or users can self-serve their needs, from any compliant browser.

Both platforms provide a browser-based dashboard to manage, or self-serve, compute, networking and storage needs.



Command line interface (CLI)

Why you need it? You can automate and script the administration and use/consumption of your cloud from the command line.

Both platforms provide a command line interface with commands to provision and de-provision cloud resources (virtual machines, storage, networking).



Business-level components

Multi-tenancy

Why you need it? To segregate users by business unit, department or organization, to meet legal requirements, or to set quotas on resources.

A tenant is a group of users who share common access to infrastructure (the cloud platform) with other users; tenants are segregated from one another.

  • OpenStack: project/tenant. A quota of compute resources can be defined for each project/tenant.
  • AWS: segregation is achieved using a VPC (Virtual Private Cloud).

SLA (Service Level Agreement)

Why you need it? To run mission-critical applications with minimal downtime, you need an SLA from your cloud provider. An SLA is a guarantee of the availability of the cloud.

  • OpenStack: an SLA is negotiated between the provider of the OpenStack private cloud (internal IT department or managed service provider) and the business units who consume the private cloud.


Ownership and control of data

Why you need to know? Users should know who can access data stored in the cloud. Legal regulations for industries such as healthcare, financial services and government stipulate who should have access to applications and data. Some users and countries fear that government security and spying agencies can gain access to public cloud data.

When you store applications and data in the cloud, who owns the data and who has access to it?

  • OpenStack: the users of the OpenStack cloud.
  • AWS: the user owns the data; see the AWS agreement (section 8).


Ecosystem

Why you need to know? You may need help from consultants and community peers to use a private or public cloud. If you deploy a private OpenStack cloud, the community of software and hardware vendors certified with your OpenStack vendor gives you assurance that problems can be resolved (see my prior post for a supported OpenStack deployment).

An ecosystem includes hardware vendors, software vendors, a community of peers (developers, users, administrators) and consultants who enable a cloud to run.

  • OpenStack: an ecosystem of hardware, software and service providers and end users. The code that runs the cloud is open source, and users can contribute to it.
  • AWS: an ecosystem of consultants and ISVs who help users use AWS. The code that runs the cloud is closed source.

High availability

Why you need to know? If a cloud offers high availability, applications hosted on the cloud can fail over, and users will experience less interruption of service.

  • OpenStack: regions and availability zones. Data and instances can be stored in different geographical regions for redundancy, latency or legal requirements.
  • AWS: Amazon EC2 is hosted in multiple locations world-wide, composed of regions (separate geographic areas). Each region has multiple isolated locations known as Availability Zones.


Cost

Why you need to know? The cost of running servers and applications in a cloud can be operational (OPEX) or capital (CAPEX).

OpenStack offers several cost models:

  • Use a managed service offering.
  • Buy hardware to run an OpenStack cloud, freely download the OpenStack software, and employ engineers to install, maintain, enhance and upgrade it. This cost model can be difficult to estimate because of the staff required to run the cloud: how many engineers do you need, how do you know when to hire more, and how do you reduce the size of your workforce if demand for your cloud decreases?
  • License a distribution from a vendor. This involves an upfront license cost, annual support costs and a subsequent license renewal.
  • Purchase a predictable subscription from Red Hat and receive support, maintenance, consulting, upgrades and so on.

AWS pricing:

  • Billing by the minute/hour: potentially unpredictable costs, as usage is billed as it is used.
  • Pre-purchase blocks of usage at other rates: reserved instances or spot pricing.
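A back-of-the-envelope way to compare the two AWS pricing models: reserved capacity wins once utilization is high enough to amortize the upfront fee. All prices below are invented placeholders, not actual AWS rates:

```python
# Hypothetical prices (NOT real AWS rates): on-demand per hour, versus a
# reserved instance with an upfront fee plus a discounted hourly rate.
ON_DEMAND_PER_HOUR = 0.10
RESERVED_UPFRONT = 300.00
RESERVED_PER_HOUR = 0.04

def yearly_cost(hours_used):
    on_demand = ON_DEMAND_PER_HOUR * hours_used
    reserved = RESERVED_UPFRONT + RESERVED_PER_HOUR * hours_used
    return on_demand, reserved

# Break-even: 300 / (0.10 - 0.04) = 5000 hours/year (about 57% utilization).
for hours in (2000, 5000, 8000):
    od, rsv = yearly_cost(hours)
    print(f"{hours} h: on-demand ${od:.0f} vs reserved ${rsv:.0f}")
```

Below the break-even point, on-demand is cheaper; above it, reserved pricing is, which is why steady long-running workloads and bursty temporary ones favor different models.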

So which do you use?

Since both cloud platforms provide some similar services, you should consider your needs. For instant and temporary needs, AWS and its on-demand pricing model could suffice; for longer-term projects, both AWS and OpenStack publish example deployments.

I believe it boils down to use cases. AWS lists its use cases, and Gartner recommends using OpenStack for:

  • “DevOps-style software development. Developers can access the OpenStack API and work with infrastructure as code.”
  • “Development/testing support … scenario of a more traditional IaaS with a self-service portal for the developers and testing groups.”
  • “High-performance computing/grid computing is a potential use case for OpenStack because many of these environments are implemented with open-source components, and OpenStack is well-suited to support the flexible infrastructure provisioning required in these environments.”
  • “Scale-out commodity infrastructure to support big data technologies such as Hadoop, Apache Spark and Apache Cassandra.”
  • “Line-of-business application hosting … Focusing on the emerging cloud-native applications, rather than trying to chase legacy compatibility, is the scenario used by most IaaS private cloud implementers.”

How to use AWS and OpenStack?

A hybrid cloud is a combination of an on-premise private cloud and a public cloud. A cloud management platform provides tools to administer both cloud environments. Red Hat offers an Open Hybrid Cloud, “A single-subscription offering that lets you build and manage an open, private Infrastructure-as-a-Service (IaaS) cloud and ease your way into a highly scalable, public-cloud-like infrastructure based on OpenStack®.”

The Age of Cloud File Services

by Sean Cohen, Principal Technical Product Manager, Red Hat — May 11, 2015

The new OpenStack Kilo upstream release, which became available on April 30, 2015, marks a significant milestone for Manila, the shared file system service project for OpenStack, with an increase in development capacity and extensive vendor adoption. The project was kicked off three years ago, became incubated during 2014, and now moves to the front of the stage at the upcoming OpenStack Vancouver conference this month, with customer stories of Manila deployments in enterprise and telco environments.

The project was originally sponsored and accelerated by NetApp and Red Hat and has established a very rich community that includes code contributions from companies such as EMC, Deutsche Telekom, HP, Hitachi, Huawei, IBM, Intel, Mirantis and SUSE.

The momentum of cloud shared file services is not limited to the OpenStack open source world. In fact, last month at the AWS Summit in San Francisco, Amazon announced its new shared file storage for Amazon EC2: the Amazon Elastic File System, also known as EFS. This new storage service is an addition to the existing AWS storage portfolio: Amazon Simple Storage Service (S3) for object storage, Amazon Elastic Block Store (EBS) for block storage, and Amazon Glacier for archival (cold) storage.

Amazon EFS provides standard file system semantics and is based on NFSv4, which allows EC2 instances to access a file system at the same time, providing a common data source for a wide variety of workloads and applications shared across thousands of instances. It is designed for a broad range of use cases, such as home directories, content repositories, development environments and big data applications. Data uploaded to EFS is automatically replicated across different availability zones, and because EFS file systems are SSD-based, there should be few latency or throughput problems with the service. EFS, as a file system as a service, allows users to create and configure file systems quickly with no minimum fee or setup cost; customers pay only for the storage used by the file system, based on elastic storage capacity that automatically grows and shrinks as files are added and removed on demand.

The recent Amazon EFS preview announcement joins another commercial cloud file service preview: the Microsoft Azure File service, announced last year at TechEd 2014. The Azure File service exposes file shares using the standard SMB 2.1 protocol, and is currently limited to CIFS only. Applications and workloads can share files between VMs using standard file system APIs, alongside a REST interface, which opens up a variety of hybrid scenarios. Azure Files is built on the same technology as the Blob, Table, and Queue services, which means it is able to leverage the availability, durability, scalability, and geo-redundancy built into that platform. From a use-case point of view, Azure Files makes it easier to “lift and shift” applications to the cloud that use on-premise file shares to share data between parts of the application.

Meet OpenStack Manila

Manila is a community-driven “Shared Filesystems as a service” project for OpenStack that aims to provide a set of services for management of shared filesystems across OpenStack Compute instances in a multi-tenant cloud environment, alongside the existing OpenStack Cinder block storage and Swift object storage services.


Manila has a pluggable infrastructure framework that provides a vendor-neutral management API for provisioning and attaching different shared file systems. It has backend driver implementations for multiple file systems, such as GlusterFS, GPFS, HDFS and ZFS. It provides provisioning and management of a “distributed file system for the cloud” for VMs and physical hardware. Unlike AWS EFS, Manila is not the actual shared file system but rather the control plane, which can, for example, provide access to an existing CIFS share or create a new NFS export and map it to specific VM instances; the service itself is not in the data path.

Manila Use cases

Here are some key use cases that the Manila project can help to address:

  • Replace home-grown NAS provisioning tools
  • Support traditional enterprise applications
  • On-demand development and build environments
  • Integration with existing automation frameworks through REST API or CLI
  • Support “cloud-native” workloads, such as DBaaS
  • Big data, via Manila’s native HDFS driver plugin
  • Provide secure cross-tenant file sharing
  • Hybrid cloud shares (external consumption of shares / migration of workloads to the cloud from on-premise file shares)

Behind The Scenes

The Manila service architecture consists of the following key components:

  • manila-api – service that provides a stable RESTful API. The service authenticates and routes requests throughout the Shared Filesystem service.
  • python-manilaclient – command line interface to interact with Manila via manila-api, and also a Python module to interact programmatically with Manila.
  • manila-scheduler – responsible for scheduling/routing requests to the appropriate manila-share service. It does this by filtering the available back-ends and picking one.
  • manila-share – responsible for managing Shared File Service devices, specifically the back-end devices.
  • Auth Manager – component responsible for users, projects and roles.
  • SQL database – Manila uses an SQL-based central database that is shared among the Manila services in the system.

Manila has a “share network” notion: a share_network is a tenant-defined object that tells Manila about the security and network configuration for a group of shares. Manila also has a notion of a “security service”: a set of options that defines a security domain for a particular shared file system protocol, such as an Active Directory domain, an LDAP domain or a Kerberos domain. The security_service contains the information Manila needs to create a server that joins the given domain. By default, Manila requires the user to create a share network; once the share network exists, the user can proceed to create shares. Users can configure multiple back-ends in Manila, just as in Cinder, and Manila assigns a share server to every tenant. Similar to Cinder, Manila uses intelligent scheduling of shares through a filter scheduler with multi-backend support. Support for shares in the filter scheduler allows the cloud administrator to manage large-scale shared storage by filtering back-ends based on predefined parameters.

Manila offers full lifecycle share management: tenants can create, delete, list, get details of, snapshot and modify access for shares, and coordinate mounting and unmounting of file system shares via the Horizon dashboard or the REST interface. Share access rules (ACLs) define which clients can access the shares within a single tenant space. Manila also supports full multi-tenancy, so that drivers for storage controllers with support for secure multi-tenancy are able to automatically create virtual instances of storage servers in a multi-tenant configuration. (Manila supports the following access control types: IP address, user name, or SSL certificate.)

Manila admins can create “share types”: a share_type is an administrator-defined “type of service”, comprised of a tenant-visible description and a list of non-tenant-visible key/value pairs (extra_specs). The Manila scheduler uses the extra_specs to make provisioning decisions for each share request, based on the capacity and capabilities of the storage made available to Manila (similar to the Cinder scheduler); it is up to the vendor driver to publish the back-end capabilities to the scheduler so they can be matched against the extra_specs names. In fact, just as with Cinder, this is where different vendors can expose more unique capabilities, such as deduplication or compression.
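The extra_specs matching described above can be sketched as a simple capability filter. All of the back-end names and capability keys below are invented for illustration, not real driver capabilities:

```python
# Hypothetical back-end capability reports, as a Manila-style scheduler
# might receive them from vendor drivers.
backends = {
    "nfs-cluster-1": {"dedupe": True,  "free_capacity_gb": 500},
    "cifs-filer-2":  {"dedupe": False, "free_capacity_gb": 2000},
}

def filter_backends(extra_specs, size_gb):
    """Keep back-ends whose capabilities satisfy every extra_spec
    and that have room for the requested share."""
    return [
        name for name, caps in backends.items()
        if caps["free_capacity_gb"] >= size_gb
        and all(caps.get(k) == v for k, v in extra_specs.items())
    ]

# A share type requesting deduplication, for a 100 GB share:
print(filter_backends({"dedupe": True}, 100))  # ['nfs-cluster-1']
```

The real scheduler applies a chain of such filters and then weighs the survivors; the key point is that provisioning decisions are driven by capabilities the drivers publish, not by hard-coded knowledge of each vendor.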

Manila’s Momentum

The new OpenStack Kilo upstream release, which became available on April 30th, 2015, marks significant momentum for the Manila project: its increased development capacity delivered more than 40 new blueprints in this cycle. The Kilo release also brings a large influx of new and diverse ecosystem vendor drivers, such as EMC Isilon, Hitachi Scale-out Platform, HDFS, HP 3PAR, Huawei V3 Storage, Oracle ZFS Storage Appliance and Quobyte.

This new Manila driver velocity also raises the need for Manila third-party continuous integration (CI) testing: vendors who wish to submit drivers will need to set up a CI system in their labs to make sure their driver passes Tempest tests for every new Manila commit, in order to maintain stability. It also raises the need for more driver interoperability and certification testing.

Another notable addition to this release, led by Red Hat, is the new gateway-mediated networking model with NFS-Ganesha, which opens the door for vendor drivers to support multiple protocols, such as NFSv2, v3, v4, v4.1, v4.2 and pNFS, to enrich the Manila service catalog.

Using GlusterFS via NFS-Ganesha, Manila file shares are abstracted from the underlying hardware and can be grown, shrunk, or migrated across physical systems as necessary. Storage servers can be added to or removed from the system dynamically, with data rebalanced across the trusted pool of servers while remaining online; this addresses the file share elasticity required to provide scale-out/scale-down NAS on demand. It also delivers file share availability, as GlusterFS lets you replicate the whole storage system between different data centers and across geographic locations.

A noteworthy improvement in Kilo was made to Manila's networking model. The complexity involved in Manila networking has been vastly simplified, enabling more deployment options (until the Kilo release, the main choices were pretty much Neutron, with a share server for every tenant and L2 networking). In addition to core framework and driver improvements, several features such as a pool-aware scheduler, access levels for shares, and private shares have been implemented.

The Road Ahead

The Manila Liberty roadmap is already in progress and will be the focus of the Manila tracks at the upcoming OpenStack Liberty Design Summit in Vancouver. Features such as support for share migration, data replication, quality of service, consistency groups, thin provisioning and IPv6 enablement should take the OpenStack file share service to the next level.

The Manila file share service has been available in the community-supported RDO OpenStack distribution (via Packstack) since the Juno release, and is slated to be introduced as a Tech Preview in the upcoming Red Hat Enterprise Linux OpenStack Platform 7 this summer.

Join us in the upcoming Manila sessions at the OpenStack Vancouver conference.




What’s Coming in OpenStack Networking for the Kilo Release

by Nir Yechiel

OpenStack Kilo, the 11th release of the open source project, was officially released in April, and now is a good time to review some of the changes we saw in the OpenStack Networking (Neutron) community during this cycle, as well as some of the key new networking features introduced in the project.

Scaling the Neutron development community

The Kilo cycle brings two major efforts meant to expand and scale the Neutron development community: core plugin decomposition and the advanced services split. These changes should not directly impact OpenStack users, but they are expected to reduce code footprint, improve feature velocity, and ultimately bring faster innovation. Let's take a look at each individually:

Neutron core plugin decomposition

Neutron, by design, has a pluggable architecture which allows a custom backend implementation of the Networking API. The plugin is a core piece of the deployment and acts as the “glue” between the logical API and the actual implementation. As the project evolved, more and more plugins were introduced, coming from open source projects and communities (such as Open vSwitch and OpenDaylight) as well as from various vendors in the networking industry (like Cisco, Nuage, Midokura and others). At the beginning of the Kilo cycle, Neutron had dozens of plugins and drivers, spanning core plugins, ML2 mechanism drivers, L3 service plugins, and L4-L7 service plugins for FWaaS, LBaaS and VPNaaS, the majority of them included directly in the Neutron project repository. The amount of code requiring review across these drivers and plugins grew to the point where it no longer scaled. The expectation that core Neutron reviewers would review code they had no knowledge of, or could not test due to the lack of a proper hardware or software setup, was not realistic. This also caused some frustration among the vendors themselves, who sometimes failed to get their plugin code merged on time.

The first effort to improve the situation was to decompose the core Neutron plugins and ML2 drivers out of the Neutron repository. The idea is that plugins and drivers leave only a small “shim” (or proxy) in the Neutron tree and move all their backend logic out to a different repository, with StackForge being a natural home for it. The benefit is clear: Neutron reviewers can now focus on reviewing core Neutron code, while vendors and plugin maintainers can iterate at their own pace. While the specification encouraged vendors to start decomposing their plugins immediately, it did not require that all plugins complete decomposition in the Kilo timeframe, mainly to allow the vendors enough time to complete the process.

More information on the process is documented here, with this section dedicated for tracking the progress of the various plugins.

Advanced services split

While the first effort focused solely on core Neutron plugins and ML2 drivers, a parallel effort was put in place to address similar concerns with the L4-L7 advanced services (FWaaS, LBaaS, and VPNaaS). Like the core plugins, the advanced services previously stored their code in the main Neutron repository, resulting in a lack of focus and reviews from Neutron core reviewers. Starting with Kilo, these services are split into their own repositories; Neutron now comprises four different repositories: one for basic L2/L3 networking, and one each for FWaaS, LBaaS, and VPNaaS. As the number of service plugins is still relatively low, vendor and plugin code will remain in each of the service repositories for now.

It is important to note that this change should not affect OpenStack users. Even with the services now split, there is no change to the API or CLI interfaces, and they all still use the same Neutron client as before. That said, we do see this split laying the foundation for deeper changes in the future, with each of the services having the potential to become independent from Neutron and offer its own REST endpoint, configuration file, and CLI/API client. This will enable teams focused exclusively on one or more advanced services to make a bigger impact.

ML2/Open vSwitch port-security

Security-groups are one of the most popular Neutron features, allowing tenants to specify the type of traffic and direction (ingress/egress) that is allowed to pass through a Neutron port, effectively creating a firewall close to the virtual machines (VMs).

As a security measure, Neutron's security group implementation always applies additional default rules automatically to block IP address spoofing attacks, preventing a VM from sending or receiving traffic with a MAC or IP address that does not belong to its Neutron port. While most users find security groups and the default anti-spoofing rules helpful and necessary to protect their VMs, some users asked for the option to turn them off for specific ports. This is mainly required in cases where network functions are running inside the VMs, a common use case for network functions virtualization (NFV).

Think for example of a router application deployed within an OpenStack VM; it receives packets that are not necessarily addressed to it and transmits (routes) packets that are not necessarily generated from one of its ports. With security-groups applied, it will not be able to perform these tasks.

Let’s examine the following topology as an example:


Host 1 wants to reach Host 2, which sits on a different subnet. The two hosts are connected via two VMs (R1 and R2) running a router application, configured to route between the networks and to act as default gateways for the hosts. The MAC addresses of the relevant ports are shown as well. Now, let's examine the traffic flow when Host 1 tries to send traffic to Host 2:

  1. Host 1 generates an IPv4 packet addressed to Host 2. As the two hosts are placed on different subnets, R1 responds to Host 1's ARP request with its local MAC address, and 3B-2D-B9-9B-34-40 is used as the destination MAC on the L2 frame.
  2. R1 receives the packet. Note that the destination IP address of the packet is Host 2's, which is not assigned to R1. With security groups enabled on R1's port and the default anti-spoofing rules applied, the packet is dropped at this point, and R1 cannot route the traffic further.

Prior to Kilo, security-groups could only be disabled or enabled for the entire cloud. Starting with the Kilo release, it is now possible to enable or disable the security-group feature per port using a new port-security-enabled attribute, so that the tenant admin can decide exactly if and where a firewall is needed in the topology. This new attribute is supported with the Open vSwitch agent (ovs-agent) in conjunction with the IptablesFirewallDriver.
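As a sketch of how this might look with the Kilo CLI (the network, image, and port names here are hypothetical), a port for a router VM can be created with port security disabled:

```shell
# Create a port with port security disabled: no security groups or
# anti-spoofing rules will be applied to it.
$ neutron port-create private-net --name r1-port --port-security-enabled=False

# Boot the router appliance VM using that port.
$ nova boot --image router-appliance --flavor m1.small \
    --nic port-id=<r1-port-uuid> router-vm-1
```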

Going back to the previous topology, it is now possible to turn off security-groups on the VMs being used as routers, while keeping them active on the Host VM ports, so that routing can take place properly:


Some additional information on this feature, along with a configuration example, can be found in this blog post by Red Hat’s Terry Wilson.

IPv6 enhancements

IPv6 has been a key area of focus in Neutron lately, with major features introduced during the Juno release to allow address assignment for tenant networks using Stateless Address Autoconfiguration (SLAAC) and DHCPv6, as well as support for provider networks with Router Advertisements (RAs) messages generated by an external router. While the IPv6 code base continues to mature, the Kilo release brings several other enhancements, including:

  • The ability to assign multiple IPv6 prefixes for a network

    • With IPv6, it is possible to assign several IP prefixes to a single interface. This is in fact a common configuration, with all interfaces assigned a link-local address (LLA) by default to handle traffic in the local link, and one or more global unicast addresses (GUA) for end-to-end connectivity. Starting with the Kilo release, users can now attach several IPv6 subnets to a network. When the subnet type is either SLAAC or DHCPv6 stateless, one IPv6 address from each subnet will be assigned to the Neutron port.
  • Better IPv6 router support

    • As of Kilo, there is no network address translation (NAT) or floating IP model for IPv6 in OpenStack. The assumption is that VMs are assigned globally routed addresses and can communicate directly using pure L3 routing. The neutron-l3-agent is the component responsible for routing within Neutron, through the creation and maintenance of virtual routers. When it comes to IPv6 support in the virtual routers, two main functions are required:
      1. Inter-subnet routing: this refers to the ability to route packets between different IPv6 prefixes of the same tenant. Since the traffic is routed within the cloud and does not leave for any external system, this is usually referred to as “east-west” routing. This has been supported since Juno, with no major enhancements introduced in Kilo.
      2. External routing: this refers to the ability to route packets between an IPv6 tenant subnet and an IPv6 external subnet. Since the traffic needs to leave the cloud to reach the external network, this is usually referred to as “north-south” traffic. As there is no IPv6 NAT support, the virtual router simply needs to route the traffic between the internal subnet and the external one. While this capability has been supported since the Juno release, Kilo introduces major improvements to the way the operator provisions and creates this external network to begin with. It is no longer required to create any Neutron subnet for the external network: the virtual router can automatically learn its default gateway information via SLAAC (if RAs are enabled on the upstream router), or the default route can be manually set by the operator using a new option introduced in the l3-agent configuration file (‘ipv6-gateway’).
  • Extra DHCP options

    • With Neutron, users can specify extra DHCP options for a subnet. This is mainly used to assign additional information, such as Domain Name System (DNS) servers or maximum transmission unit (MTU) size, to a given port. Originally, an extra DHCP option could not be scoped to a specific IP version, which caused issues in dual-stack designs, where a VM is assigned both an IPv4 and an IPv6 address on the same port.
    • Starting with Kilo, it is now possible to specify extra DHCP options for both DHCPv4 and DHCPv6. A new attribute (‘ip_version’) is used in Neutron port create/update API to specify the IP version (4 or 6) of a given DHCP option.
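As a sketch of how these IPv6 capabilities might be exercised from the Kilo CLI (the network and subnet names are hypothetical, and the ip_version key follows the Kilo API addition described above):

```shell
# Attach two SLAAC subnets to the same network; a port on ipv6-net
# will be assigned one address from each prefix.
$ neutron subnet-create --ip-version 6 --ipv6-ra-mode slaac \
    --ipv6-address-mode slaac --name v6-subnet-1 ipv6-net 2001:db8:1::/64
$ neutron subnet-create --ip-version 6 --ipv6-ra-mode slaac \
    --ipv6-address-mode slaac --name v6-subnet-2 ipv6-net 2001:db8:2::/64

# Scope an extra DHCP option to IPv4 only on a dual-stack port.
$ neutron port-update <port-id> \
    --extra-dhcp-opt opt_name=mtu,opt_value=1454,ip_version=4
```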

LBaaS v2 API

Load-Balancing-as-a-Service (LBaaS) is one of the advanced services of Neutron. It allows tenants to create load-balancers on demand, backed by open-source or proprietary service plugins that offer different load balancing technologies. The open source solution available with Red Hat Enterprise Linux OpenStack Platform is based on the HAProxy service plugin.

Version 1.0 of the LBaaS API included basic load balancing capabilities and established a simple, straightforward flow for setting up a load-balancing service:

  1. Create a pool
  2. Create one or more members in the pool
  3. Create health monitors
  4. Create a virtual IP (VIP) that is associated with the pool
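With the HAProxy-based v1 implementation, that flow maps to CLI calls along these lines (a sketch; the pool name, IDs, and member address are hypothetical):

```shell
# 1. Create a pool.
$ neutron lb-pool-create --name web-pool --lb-method ROUND_ROBIN \
    --protocol HTTP --subnet-id <subnet-id>

# 2. Add one or more members to the pool.
$ neutron lb-member-create --address 10.0.0.11 --protocol-port 80 web-pool

# 3. Create a health monitor and associate it with the pool.
$ neutron lb-healthmonitor-create --type HTTP --delay 5 --timeout 3 --max-retries 2
$ neutron lb-healthmonitor-associate <monitor-id> web-pool

# 4. Create a VIP associated with the pool.
$ neutron lb-vip-create --name web-vip --protocol HTTP --protocol-port 80 \
    --subnet-id <subnet-id> web-pool
```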

This was useful for getting initial implementations and deployments of LBaaS, but it wasn’t intended to be an enterprise-class alternative to a full-blown load-balancer. LBaaS version 2.0 adds capabilities to offer a more robust load-balancing solution, including support for SSL/TLS termination. Accomplishing this required a redesign of the LBaaS architecture, along with the HAProxy reference plugin.

Distributed Virtual Routing (DVR) VLAN support

DVR, first introduced in Juno release, allows the deployment of Neutron routers across the Nova Compute nodes, so that each Compute node handles the routing for its locally hosted VMs. This is expected to result in better performance and scalability of the virtual routers, and is seen as an important milestone towards a more efficient L3 traffic flow in OpenStack.

As a reminder, the default OpenStack architecture with Neutron involves a dedicated cluster of Network nodes that handles most of the network services in the cloud, including DHCP, L3 routing, and NAT. That means that traffic from the Compute nodes must reach the Network nodes to get routed properly. With DVR, the Compute node can itself handle inter-subnet (east-west) routing as well as NAT for floating IPs. DVR still relies on dedicated Network nodes for the default SNAT service, which provides basic outgoing connectivity for VMs.

Prior to Kilo, distributed routers only supported overlay tunnel networks (i.e. GRE, VXLAN) for tenant separation. This hindered the adoption of the feature as many clouds opted to use 802.1Q VLAN tenant networks. With Kilo, this configuration is now possible and distributed routers may service tunnel networks as well as VLAN networks.
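The relevant configuration for a DVR deployment over VLAN tenant networks might look like the following (a sketch only; file locations and section names vary by distribution, so check your release’s documentation):

```ini
# /etc/neutron/l3_agent.ini on Compute nodes:
agent_mode = dvr

# /etc/neutron/l3_agent.ini on Network nodes (centralized SNAT):
agent_mode = dvr_snat

# /etc/neutron/plugins/ml2/ml2_conf.ini - VLAN tenant networks,
# now usable with distributed routers as of Kilo:
tenant_network_types = vlan
mechanism_drivers = openvswitch,l2population

# Open vSwitch agent configuration ([agent] section):
enable_distributed_routing = True
l2_population = True
```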

To get more information about DVR, I strongly recommend reading this great three part post from Red Hat’s Assaf Muller, covering: overview and east/west routing, SNAT, and Floating IPs.

View the state of Highly Available routers

One of the major features introduced in the Juno release was the L3 High Availability (L3 HA) solution, which allowed an active/active setup of the neutron-l3-agent across different Network nodes. The solution, based on keepalived, utilizes the Virtual Router Redundancy Protocol (VRRP) protocol internally for forming groups of highly available virtual routers. By design, for each group there is one active router (which forwards traffic), and one or more standby routers (which are waiting to take control in case of a failure of the active one). The scheduling of master/backup routers is done randomly across the different Network nodes, so that the load (i.e. forwarding router instances) is spread among all nodes.

One of the limitations of the Juno-based solution was that Neutron had no way to report the HA router state, which made troubleshooting and maintenance harder. With Kilo, operators may now run the neutron l3-agent-list-hosting-router <router_id> command and see where the active instance is currently hosted.
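The output now includes an HA state column, similar to the following (the agent IDs and hostnames here are hypothetical):

```shell
$ neutron l3-agent-list-hosting-router <router_id>
+--------------------------------------+-----------+----------------+-------+----------+
| id                                   | host      | admin_state_up | alive | ha_state |
+--------------------------------------+-----------+----------------+-------+----------+
| <agent-id-1>                         | netnode-1 | True           | :-)   | active   |
| <agent-id-2>                         | netnode-2 | True           | :-)   | standby  |
+--------------------------------------+-----------+----------------+-------+----------+
```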

Ability to choose a specific floating IP

Floating IPs are public IPv4 addresses that can be dynamically added to a VM instance on the fly, so that the VM can be reachable from external systems, usually the Internet. Originally, when assigning a floating IP for a specific VM, the IP would be randomly picked from a pool and there was no guarantee that a VM would consistently receive the same IP address. Starting with Kilo, the user can now choose a specific floating IP address to be assigned to a given VM by utilizing a new ‘floating_ip_address’ API attribute.
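For example, a specific address could be requested from a hypothetical external network named public and then associated with a VM (a sketch using a documentation IP range):

```shell
# Request a specific floating IP from the external network's pool.
$ neutron floatingip-create --floating-ip-address 203.0.113.25 public

# Associate it with an instance.
$ nova floating-ip-associate test-instance 203.0.113.25
```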

MTU advertisement functionality

This new feature allows the desired MTU to be specified for a network, and advertised to guest operating systems when it is set. This capability helps avoid MTU mismatches, which lead to undesirable results such as connectivity issues, packet drops and degraded network performance.
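In Kilo this is driven by new configuration options; a minimal sketch (option names per the Kilo configuration reference, so verify against your release’s documentation) might be:

```ini
# /etc/neutron/neutron.conf - advertise the network MTU to guests
# (via DHCP option 26 for IPv4 and Router Advertisements for IPv6):
advertise_mtu = True

# /etc/neutron/plugins/ml2/ml2_conf.ini ([ml2] section) - the MTU of the
# underlying physical network, used to derive per-network MTUs:
path_mtu = 1500
```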

Improved performance and stability

The OpenStack Networking community is actively working to make Neutron a more stable and mature codebase. Among the different performance and stability enhancements introduced in Kilo, I wanted to highlight two: the switch to use OVSDB directly with the ML2/Open vSwitch plugin instead of using Open vSwitch ovs-vsctl CLI commands, and comprehensive refactoring of the l3-agent code base.

While these two changes are not introducing any new feature functionality to users per se, they do represent the continuous journey of improving Neutron’s code base, especially the core L2 and L3 components which are critical to all workloads.

Looking ahead to Liberty

Liberty, the next release of OpenStack, is planned for October 15th, 2015. We are already busy planning and finalizing the sessions for the Design Summit in Vancouver, where new feature and enhancement proposals are scheduled to be discussed. You can view the approved Neutron specifications for Liberty to track which proposals have been accepted and are expected to land in the release.

Get Started with OpenStack Neutron

If you want to try out OpenStack, or to check out some of the above enhancements for yourself, you are welcome to visit our RDO site. We have documentation to help get you started, forums where you can connect with other users, and community-supported packages of the most up-to-date OpenStack releases available for download.

If you are looking for enterprise-level support and our partner certification program, Red Hat also offers Red Hat Enterprise Linux OpenStack Platform.



The Kilo logo is a trademark/service mark of the OpenStack Foundation. 
Red Hat is not affiliated with, endorsed or sponsored by the OpenStack 
Foundation, or the OpenStack community.

Driving in the Fast Lane – CPU Pinning and NUMA Topology Awareness in OpenStack Compute

by Steve Gordon, Product Manager, Red Hat — May 5, 2015

The OpenStack Kilo release, extending upon efforts that commenced during the Juno cycle, includes a number of key enhancements aimed at improving guest performance. These enhancements allow OpenStack Compute (Nova) to have greater knowledge of compute host layout and as a result make smarter scheduling and placement decisions when launching instances. Administrators wishing to take advantage of these features can now create customized performance flavors to target specialized workloads including Network Function Virtualization (NFV) and High Performance Computing (HPC).

What is NUMA topology?

Historically, all memory on x86 systems was equally accessible to all CPUs in the system. This resulted in memory access times that were the same regardless of which CPU in the system was performing the operation and was referred to as Uniform Memory Access (UMA).

In modern multi-socket x86 systems system memory is divided into zones (called cells or nodes) and associated with particular CPUs. This type of division has been key to the increasing performance of modern systems as focus has shifted from increasing clock speeds to adding more CPU sockets, cores, and – where available – threads. An interconnect bus provides connections between nodes, so that all CPUs can still access all memory. While the memory bandwidth of the interconnect is typically faster than that of an individual node it can still be overwhelmed by concurrent cross node traffic from many nodes. The end result is that while NUMA facilitates faster memory access for CPUs local to the memory being accessed, memory access for remote CPUs is slower.

Newer motherboard chipsets expand on this concept by also providing NUMA style division of PCIe I/O lanes between CPUs. On such systems workloads receive a performance boost not only when their memory is local to the CPU on which they are running but when the I/O devices they use are too, and (relative) degradation where this is not the case. We’ll be coming back to this topic in a later post in this series.

By way of example, by running numactl --hardware on a Red Hat Enterprise Linux 7 system I can examine the NUMA layout of its hardware:

# numactl --hardware
available: 2 nodes (0-1)
node 0 cpus: 0 1 2 3
node 0 size: 8191 MB
node 0 free: 6435 MB
node 1 cpus: 4 5 6 7
node 1 size: 8192 MB
node 1 free: 6634 MB
node distances:
node   0   1
  0:  10  20
  1:  20  10

The output tells me that this system has two NUMA nodes, node 0 and node 1. Each node has 4 CPU cores and 8 GB of RAM associated with it. The output also shows the relative “distances” between nodes; this becomes important with more complex NUMA topologies, where different interconnect layouts connect the nodes together.

Modern operating systems endeavour to take the NUMA topology of the system into account by providing additional services, such as numad, that monitor system resource usage and dynamically adjust process and memory placement to keep them optimally located for best performance.

How does this apply to virtualization?

When running a guest operating system in a virtual machine there are actually two NUMA topologies involved, that of the physical hardware of the host and that of the virtual hardware exposed to the guest operating system. The host operating system and associated utilities are aware of the host’s NUMA topology and will optimize accordingly, but by exposing a NUMA topology to the guest that aligns with that of the physical hardware it is running on we can also assist the guest operating system to do the same.

Libvirt provides extensive options for tuning guests to take advantage of the host’s NUMA topology by, among other things, pinning virtual CPUs to physical CPUs, pinning emulator threads associated with the guest to physical CPUs, and tuning guest memory allocation policies, both for normal memory (4k pages) and huge pages (2 MB or 1 GB pages). Running the virsh capabilities command, which displays the capabilities of the host, on the same host used in the earlier example yields a wide range of information, but in particular we are interested in the <topology> section:

# virsh capabilities
          <cells num='2'>
            <cell id='0'>
              <memory unit='KiB'>4193872</memory>
              <pages unit='KiB' size='4'>1048468</pages>
              <pages unit='KiB' size='2048'>0</pages>
              <distances>
                <sibling id='0' value='10'/>
                <sibling id='1' value='20'/>
              </distances>
              <cpus num='4'>
                <cpu id='0' socket_id='0' core_id='0' siblings='0'/>
                <cpu id='1' socket_id='0' core_id='1' siblings='1'/>
                <cpu id='2' socket_id='0' core_id='2' siblings='2'/>
                <cpu id='3' socket_id='0' core_id='3' siblings='3'/>
              </cpus>
            </cell>
            <cell id='1'>
              <memory unit='KiB'>4194304</memory>
              <pages unit='KiB' size='4'>1048576</pages>
              <pages unit='KiB' size='2048'>0</pages>
              <distances>
                <sibling id='0' value='20'/>
                <sibling id='1' value='10'/>
              </distances>
              <cpus num='4'>
                <cpu id='4' socket_id='1' core_id='0' siblings='4'/>
                <cpu id='5' socket_id='1' core_id='1' siblings='5'/>
                <cpu id='6' socket_id='1' core_id='2' siblings='6'/>
                <cpu id='7' socket_id='1' core_id='3' siblings='7'/>
              </cpus>
            </cell>
          </cells>

The NUMA nodes are each represented by a <cell> entry which lists the CPUs available within the node, the memory available within the node – including the page sizes, and the distance between the node and its siblings. This is all crucial information for OpenStack Compute to have access to when scheduling and building guest virtual machine instances for optimal placement.

CPU Pinning in OpenStack

Today we will be configuring an OpenStack Compute environment to support the pinning of virtual machine instances to dedicated physical CPU cores. To facilitate this we will walk through the process of:

  • Reserving dedicated cores on the compute host(s) for host processes, preventing host processes and guest virtual machine instances from fighting over the same CPU cores;
  • Reserving dedicated cores on the compute host(s) for the virtual machine instances themselves;
  • Enabling the required scheduler filters;
  • Creating a host aggregate to add all hosts configured for CPU pinning to;
  • Creating a performance focused flavor to target this host aggregate; and
  • Launching an instance with CPU pinning!

Finally we will take a look at the Libvirt XML of the resulting guest to examine how the changes made impact the way the guest is created on the host.

For my demonstration platform I will be using Red Hat Enterprise Linux OpenStack Platform 6 which while itself based on the OpenStack “Juno” code base includes backports to add the features referred to in this post. You can obtain an evaluation copy, or try out the Kilo-based packages currently being released by the RDO community project.

Compute Node Configuration

For the purposes of this deployment I am using a small environment with a single controller node and two compute nodes, set up using PackStack. The controller node hosts the OpenStack API services, databases, message queues, and the scheduler. The compute nodes run the Compute agent, Libvirt, and other components required to actually launch KVM virtual machines.

The hosts being used for my demonstration have eight CPU cores, numbered 0-7, spread across two NUMA nodes. NUMA node 0 contains CPU cores 0-3 while NUMA node 1 contains CPU cores 4-7. For the purposes of demonstration I am going to reserve two cores for host processes on each NUMA node – cores 0, 1, 4, and 5.

In a real deployment the number of processor cores to reserve for host processes will vary depending on the observed performance of the host under the typical workloads present in the environment, and will need to be adjusted accordingly.

The remaining four CPU cores (cores 2, 3, 6, and 7) will be removed from the pool used by the kernel’s general process balancing and scheduling algorithms, and isolated specifically for use when placing guest virtual machine instances. This is done using the isolcpus kernel argument.

In this example I will be using all of these isolated cores for guests; in some deployments it may be desirable to instead dedicate one or more of these cores to an intensive host process, for example a virtual switch, by manually pinning it to an isolated CPU core as well.

                   Node 0            Node 1
Host Processes     Core 0, Core 1    Core 4, Core 5
Guest Instances    Core 2, Core 3    Core 6, Core 7

On each Compute node where pinning of virtual machines will be permitted, open the /etc/nova/nova.conf file and make the following modifications:

  • Set the vcpu_pin_set value to a list or range of physical CPU cores to reserve for virtual machine processes. OpenStack Compute will ensure guest virtual machine instances are pinned to these CPU cores. Using my example host I will reserve two cores in each NUMA node – note that you can also specify ranges, e.g. 2-3,6-7:
    • vcpu_pin_set=2,3,6,7
  • Set the reserved_host_memory_mb to reserve RAM for host processes. For the purposes of testing I am going to use the default of 512 MB:
    • reserved_host_memory_mb=512
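Putting the two settings together, the relevant lines in /etc/nova/nova.conf on each pinning-enabled Compute node will look like this:

```ini
vcpu_pin_set=2,3,6,7
reserved_host_memory_mb=512
```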

Once these changes to the Compute configuration have been made, restart the Compute agent on each host:

# systemctl restart openstack-nova-compute.service

At this point if we created a guest we would already see some changes in the XML, pinning the guest vCPU(s) to the cores listed in vcpu_pin_set:

<vcpu placement='static' cpuset='2-3,6-7'>1</vcpu>

Now that we have set up the guest virtual machine instances so that they will only be allowed to run on cores 2, 3, 6, and 7, we must also set up the host processes so that they will not run on these cores, restricting themselves instead to cores 0, 1, 4, and 5. To do this we must set the isolcpus kernel argument; adding it requires editing the system’s boot configuration.

On the Red Hat Enterprise Linux 7 systems used in this example this is done using grubby to edit the configuration:

# grubby --update-kernel=ALL --args="isolcpus=2,3,6,7"

We must then run grub2-install <device> to update the boot record. Be sure to specify the correct boot device for your system! In my case the correct device is /dev/sda:

# grub2-install /dev/sda

The resulting kernel command line used for future boots of the system to isolate cores 2, 3, 6, and 7 will look similar to this:

linux16 /vmlinuz-3.10.0-229.1.2.el7.x86_64 root=/dev/mapper/rhel-root ro crashkernel=auto vconsole.font=latarcyrheb-sun16 vconsole.keymap=us rhgb quiet LANG=en_US.UTF-8 isolcpus=2,3,6,7

Remember, these are cores we want the guest virtual machine instances to be pinned to. After running grub2-install reboot the system to pick up the configuration changes.

Scheduler Configuration

On each node where the OpenStack Compute Scheduler (openstack-nova-scheduler) runs, edit /etc/nova/nova.conf. Add the AggregateInstanceExtraSpecsFilter and NUMATopologyFilter values to the list of scheduler_default_filters. These filters are used to segregate the compute nodes that can be used for CPU pinning from those that can not, and to apply NUMA aware scheduling rules when launching instances:
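The resulting line in /etc/nova/nova.conf will look similar to the following (the filters preceding the two additions are the scheduler defaults and may differ in your environment):

```ini
scheduler_default_filters=RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter,AggregateInstanceExtraSpecsFilter,NUMATopologyFilter
```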


Once the change has been applied, restart the openstack-nova-scheduler service:

# systemctl restart openstack-nova-scheduler.service

This ensures that the configuration changes are applied and that the newly added scheduler filters take effect.

Final Preparation

We are now very close to being able to launch virtual machine instances marked for dedicated compute resources and pinned to physical resources accordingly. Perform the following steps on a system with the OpenStack Compute command-line interface installed and with your OpenStack credentials loaded.

Create the performance host aggregate for hosts that will receive pinning requests:

$ nova aggregate-create performance
| Id | Name        | Availability Zone | Hosts | Metadata |
| 1  | performance | -                 |       |          |

Set metadata on the performance aggregate; this will be used to match the flavor we create shortly. Here we are using the arbitrary key pinned and setting it to true:

$ nova aggregate-set-metadata 1 pinned=true
Metadata has been successfully updated for aggregate 1.
| Id | Name        | Availability Zone | Hosts | Metadata      |
| 1  | performance | -                 |       | 'pinned=true' |

Create the normal aggregate for all other hosts:

$ nova aggregate-create normal
| Id | Name   | Availability Zone | Hosts | Metadata |
| 2  | normal | -                 |       |          |

Set metadata on the normal aggregate; this will be used to match all existing ‘normal’ flavors. Here we are using the same key as before and setting it to false:

$ nova aggregate-set-metadata 2 pinned=false
Metadata has been successfully updated for aggregate 2.
| Id | Name   | Availability Zone | Hosts | Metadata       |
| 2  | normal | -                 |       | 'pinned=false' |

Before creating the new flavor for performance intensive instances update all existing flavors so that their extra specifications match them to the compute hosts in the normal aggregate:

$ for FLAVOR in `nova flavor-list | cut -f 2 -d ' ' | grep -o [0-9]*`; \
     do nova flavor-key ${FLAVOR} set \
             "aggregate_instance_extra_specs:pinned"="false"; \
     done

Create a new flavor for performance intensive instances. Here we are creating the m1.small.performance flavor, based on the values used in the existing m1.small flavor. The differences in behaviour between the two will be the result of the metadata we add to the new flavor shortly.

$ nova flavor-create m1.small.performance 6 2048 20 2
| ID | Name                 | Memory_MB | Disk | Ephemeral | Swap | VCPUs |
| 6  | m1.small.performance | 2048      | 20   | 0         |      | 2     |

Set the hw:cpu_policy flavor extra specification to dedicated. This denotes that all instances created using this flavor will require dedicated compute resources and be pinned accordingly.

$ nova flavor-key 6 set hw:cpu_policy=dedicated

Set the aggregate_instance_extra_specs:pinned flavor extra specification to true. This denotes that all instances created using this flavor will be sent to hosts in host aggregates with pinned=true in their aggregate metadata:

$ nova flavor-key 6 set aggregate_instance_extra_specs:pinned=true

Finally, we must add some hosts to our performance host aggregate. Hosts that are not intended to be targets for pinned instances should be added to the normal host aggregate:

$ nova aggregate-add-host 1 compute1.nova
Host compute1.nova has been successfully added for aggregate 1 
| Id | Name        | Availability Zone | Hosts          | Metadata      |
| 1  | performance | -                 | 'compute1.nova'| 'pinned=true' |
$ nova aggregate-add-host 2 compute2.nova
Host compute2.nova has been successfully added for aggregate 2
| Id | Name        | Availability Zone | Hosts          | Metadata      |
| 2  | normal      | -                 | 'compute2.nova'| 'pinned=false'|

Verifying the Configuration

Now that we’ve completed all the configuration, we need to verify that all is well with the world. First, we launch a guest using our newly created flavor:

$ nova boot --image rhel-guest-image-7.1-20150224 \
            --flavor m1.small.performance test-instance

Assuming the instance launches, we can verify where it was placed by checking the OS-EXT-SRV-ATTR:hypervisor_hostname attribute in the output of the nova show test-instance command. After logging into the returned hypervisor directly using SSH we can use the virsh tool, which is part of Libvirt, to extract the XML of the running guest:

# virsh list
 Id        Name                               State
 1         instance-00000001                  running
# virsh dumpxml instance-00000001

The resultant output will be quite long, but there are some key elements related to NUMA layout and vCPU pinning to focus on:

  • As you might expect, the vCPU placement for the 2 vCPUs remains static, though a cpuset range is no longer specified alongside it; instead the more specific placement definitions defined later on are used:
<vcpu placement='static'>2</vcpu>
  • The vcpupin and emulatorpin elements have been added. These pin the virtual machine instance’s vCPU cores and the associated emulator threads, respectively, to physical host CPU cores. In the current implementation the emulator threads are pinned to the union of all physical CPU cores associated with the guest (physical CPU cores 2-3).
<vcpupin vcpu='0' cpuset='2'/>
<vcpupin vcpu='1' cpuset='3'/>
<emulatorpin cpuset='2-3'/>
  • The numatune element, and the associated memory and memnode elements have been added – in this case resulting in the guest memory being strictly taken from node 0.
        <memory mode='strict' nodeset='0'/>
        <memnode cellid='0' mode='strict' nodeset='0'/>
  • The cpu element contains updated information about the NUMA topology exposed to the guest itself, the topology that the guest operating system will see:
        <topology sockets='2' cores='1' threads='1'/>
          <cell id='0' cpus='0-1' memory='2097152'/>

In this case, as the new flavor introduced (and as a result the example guest) only contains two vCPUs, the NUMA topology exposed is relatively simple. Nonetheless the guest will still benefit from the performance improvements available through the pinning of its virtual CPU and memory resources to dedicated physical ones. This of course comes at the cost of implicitly disabling overcommitting of these same resources; the scheduler handles this transparently when CPU pinning is being applied. This is a trade-off that needs to be carefully balanced depending on workload characteristics.

In future blog posts in this series we will use this same example installation to look at the how OpenStack Compute works when dealing with larger and more complex guest topologies, the use of large pages to back guest memory, and the impact of PCIe device locality for guests using SR-IOV networking functionality.

Want to learn more about OpenStack Compute or the Libvirt/KVM driver for it? Catch my OpenStack Compute 101 and Kilo Libvirt/KVM Driver Update presentations at OpenStack Summit Vancouver – May 18-22.

Red Hat Enterprise Linux OpenStack Platform 6: SR-IOV Networking – Part II: Walking Through the Implementation

by Itzik Brown, QE Engineer focusing on OpenStack Neutron, Red Hat — April 29, 2015
and Nir Yechiel

In the previous blog post in this series we looked at what single root I/O virtualization (SR-IOV) networking is all about and we discussed why it is an important addition to Red Hat Enterprise Linux OpenStack Platform. In this second post we would like to provide a more detailed overview of the implementation, some thoughts on the current limitations, as well as what enhancements are being worked on in the OpenStack community.

Note: this post does not intend to provide a full end to end configuration guide. Customers with an active subscription are welcome to visit the official article covering SR-IOV Networking in Red Hat Enterprise Linux OpenStack Platform 6 for a complete procedure.


Setting up the Environment

In our small test environment we used two physical nodes: one serves as a Compute node for hosting virtual machine (VM) instances, and the other serves as both the OpenStack Controller and Network node. Both nodes are running Red Hat Enterprise Linux 7.


Compute Node

This is a standard Red Hat Enterprise Linux OpenStack Platform Compute node, running KVM with the Libvirt Nova driver. As the ultimate goal is to provide OpenStack VMs running on this node with access to SR-IOV virtual functions (VFs), SR-IOV support is required on several layers on the Compute node, namely the BIOS, the base operating system, and the physical network adapter. Since SR-IOV completely bypasses the hypervisor layer, there is no need to deploy Open vSwitch or the ovs-agent on this node.


Controller/Network Node

The other node which serves as the OpenStack Controller/Network node includes the various OpenStack API and control services (e.g., Keystone, Neutron, Glance) as well as the Neutron agents required to provide network services for VM instances. Unlike the Compute node, this node still uses Open vSwitch for connectivity into the tenant data networks. This is required in order to serve SR-IOV enabled VM instances with network services such as DHCP, L3 routing and network address translation (NAT). This is also the node in which the Neutron server and the Neutron plugin are deployed.


Topology Layout

For this test we are using a VLAN tagged network connected to both nodes as the tenant data network. Currently there is no support for SR-IOV networking on the Network node, so this node still uses a normal network adapter without SR-IOV capabilities. The Compute node on the other hand uses an SR-IOV enabled network adapter (from the Intel 82576 family in our case).


Configuration Overview

Preparing the Compute node

  1. The first thing we need to do is to make sure that Intel VT-d is enabled in the BIOS and activated in the kernel. The Intel VT-d specification provides hardware support for directly assigning a physical device to a virtual machine.
  2. Recall that the Compute node is equipped with an Intel 82576 based SR-IOV network adapter. For proper SR-IOV operation, we need to load the network adapter driver (igb) with the right parameter to set the maximum number of Virtual Functions (VFs) we want to expose. Different network cards support different values here, so consult the documentation from your card vendor. In our lab we chose to set this number to seven. This configuration effectively enables SR-IOV on the card itself, which otherwise defaults to regular (non-SR-IOV) mode.
  3. After a reboot, the node should come up ready for SR-IOV. You can verify this with the lspci utility, which lists detailed information about all PCI devices in the system.
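As a sketch, steps 1 and 2 might look as follows on a Red Hat Enterprise Linux 7 Compute node (the file paths are the stock RHEL 7 locations and max_vfs=7 matches our lab choice; your adapter's driver and supported VF count may differ):

```shell
# Step 1: activate Intel VT-d in the kernel (BIOS support must also be on):
# append intel_iommu=on to GRUB_CMDLINE_LINUX in /etc/default/grub,
# then regenerate the bootloader configuration.
grub2-mkconfig -o /boot/grub2/grub.cfg

# Step 2: load the igb driver with seven Virtual Functions per port by
# creating /etc/modprobe.d/igb.conf containing the line:
#     options igb max_vfs=7
# If the driver is loaded from the initramfs, rebuild it so the option
# takes effect at boot, then reboot the node.
dracut --force
systemctl reboot
```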


Verifying the Compute configuration

Using lspci we can see the Physical Functions (PFs) and the Virtual Functions (VFs) available to the Compute node. Our network adapter is a dual port card, so we get a total of two PFs (one PF per physical port), with seven VFs available for each PF:


# lspci | grep -i 82576

05:00.0 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)
05:00.1 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)
05:10.0 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
05:10.1 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
05:10.2 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
05:10.3 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
05:10.4 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
05:10.5 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
05:10.6 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
05:10.7 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
05:11.0 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
05:11.1 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
05:11.2 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
05:11.3 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
05:11.4 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
05:11.5 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)


You can also get all the VFs assigned to a specific PF:


# ls -l /sys/class/net/enp5s0f1/device/virtfn*

lrwxrwxrwx. 1 root root 0 Jan 25 13:22 /sys/class/net/enp5s0f1/device/virtfn0 -> ../0000:05:10.1
lrwxrwxrwx. 1 root root 0 Jan 25 13:22 /sys/class/net/enp5s0f1/device/virtfn1 -> ../0000:05:10.3
lrwxrwxrwx. 1 root root 0 Jan 25 13:22 /sys/class/net/enp5s0f1/device/virtfn2 -> ../0000:05:10.5
lrwxrwxrwx. 1 root root 0 Jan 25 13:22 /sys/class/net/enp5s0f1/device/virtfn3 -> ../0000:05:10.7
lrwxrwxrwx. 1 root root 0 Jan 25 13:22 /sys/class/net/enp5s0f1/device/virtfn4 -> ../0000:05:11.1
lrwxrwxrwx. 1 root root 0 Jan 25 13:22 /sys/class/net/enp5s0f1/device/virtfn5 -> ../0000:05:11.3
lrwxrwxrwx. 1 root root 0 Jan 25 13:22 /sys/class/net/enp5s0f1/device/virtfn6 -> ../0000:05:11.5


One parameter you will need to capture for future use is the PCI vendor ID of your network adapter (in vendor_id:product_id format). This can be extracted from the output of the lspci command with the -nn flag. Here is the output from our lab (the IDs appear in square brackets):


# lspci -nn | grep -i 82576

05:00.0 Ethernet controller [0200]: Intel Corporation 82576 Gigabit Network Connection [8086:10c9] (rev 01)
05:00.1 Ethernet controller [0200]: Intel Corporation 82576 Gigabit Network Connection [8086:10c9] (rev 01)
05:10.0 Ethernet controller [0200]: Intel Corporation 82576 Virtual Function [8086:10ca] (rev 01)

Note: this parameter may be different based on your network adapter hardware.

Setting up the Controller/Network node

  1. In the Neutron server configuration, ML2 should be configured as the core Neutron plugin. Both the Open vSwitch (OVS) and SR-IOV (sriovnicswitch) mechanism drivers need to be loaded.
  2. Since our design requires a VLAN tagged tenant data network, ‘vlan’ must be listed as a type driver for ML2. An alternative would be to use a ‘flat’ networking configuration, which allows transparent forwarding with no specific VLAN tag assignment.
  3. The VLAN configuration itself is done through the Neutron ML2 configuration file, where you can set the appropriate VLAN range and the physical network label. This is the VLAN range you need to make sure is properly configured for transport (i.e., trunking) across the physical network fabric. We are using ‘sriovnet’ as our network label with 80-85 as the VLAN range: network_vlan_ranges = sriovnet:80:85
  4. One of the great benefits of the SR-IOV ML2 driver is that it is not bound to any specific NIC vendor or card model; it can be used with different cards as long as they support the standard SR-IOV specification. As Red Hat Enterprise Linux OpenStack Platform is supported on top of Red Hat Enterprise Linux, we inherit RHEL's rich support for SR-IOV enabled network adapters. In our lab we use the igb/igbvf driver, which is included in RHEL 7 and is used to interact with our Intel SR-IOV NIC. To set up the ML2 driver so that it can communicate properly with our Intel NIC, we need to configure the PCI vendor ID we captured earlier in the ML2 SR-IOV configuration file (under supported_pci_vendor_devs), then restart the Neutron server. The format of this setting is vendor_id:product_id, which is 8086:10ca in our case.
  5. To allow proper scheduling of SR-IOV devices, Nova scheduler needs to use the FilterScheduler with the PciPassthroughFilter filter. This configuration should be applied on the Controller node under the nova.conf file.
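Pulling steps 1-5 together, a minimal sketch of the relevant configuration might look like this (the file paths and exact option layout are assumptions based on the ML2 plugin packaging; 8086:10ca is the vendor/product ID from our Intel 82576 lab card and will differ on other hardware):

```ini
# Neutron ML2 configuration (e.g. /etc/neutron/plugins/ml2/ml2_conf.ini):
[ml2]
type_drivers = vlan
tenant_network_types = vlan
mechanism_drivers = openvswitch,sriovnicswitch

[ml2_type_vlan]
network_vlan_ranges = sriovnet:80:85

# SR-IOV mechanism driver configuration
# (e.g. /etc/neutron/plugins/ml2/ml2_conf_sriov.ini):
[ml2_sriov]
supported_pci_vendor_devs = 8086:10ca

# Nova scheduler configuration on the Controller (nova.conf):
[DEFAULT]
scheduler_driver = nova.scheduler.filter_scheduler.FilterScheduler
scheduler_default_filters = RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,PciPassthroughFilter
```

After changing these files, the Neutron server (and Nova services) must be restarted for the settings to take effect.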


Mapping the required network

To enable scheduling of SR-IOV devices, the Nova PCI Whitelist has been enhanced to allow tags to be associated with PCI devices. PCI devices available for SR-IOV networking should be tagged with the physical network label. The label needs to match the one we used previously when setting the VLAN configuration on the Controller/Network node (‘sriovnet’).

Using the pci_passthrough_whitelist option in the Nova configuration file, we can map the VFs to the required physical network. After configuring the whitelist, restart the nova-compute service for the changes to take effect.

In the below example, we set the Whitelist so that the Physical Function (enp5s0f1) is associated with the physical network (sriovnet). As a result, all the Virtual Functions bound to this PF can now be allocated to VMs.

# pci_passthrough_whitelist={"devname": "enp5s0f1", "physical_network": "sriovnet"}


Creating the Neutron network

Next we will create the Neutron network and subnet. Make sure to use the --provider:physical_network option and specify the network label as configured on the Controller/Network node (‘sriovnet’). Optionally, you can also set a specific VLAN ID from the range:

# neutron net-create sriov-net1 --provider:network_type=vlan --provider:physical_network=sriovnet --provider:segmentation_id=83


# neutron subnet-create sriov-net1
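The subnet-create call above is truncated; a complete invocation also needs an address range. As an illustration only (the 192.0.2.0/24 CIDR and subnet name here are placeholders, not values from the original setup):

```shell
# Create a subnet on the SR-IOV network; Neutron's DHCP service on the
# Network node will hand out addresses from this (placeholder) range.
neutron subnet-create sriov-net1 192.0.2.0/24 --name sriov-subnet1
```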


Creating an SR-IOV instance

After setting up the base configuration on the Controller/Network node and the Compute node, and after creating the Neutron network, we can now create our first SR-IOV enabled OpenStack instance.

In order to boot a Nova instance with an SR-IOV networking port, you first need to create the Neutron port and specify its vnic-type as ‘direct’. Then, in the ‘nova boot’ command, explicitly reference the port-id you created using the --nic option, as shown below:


# neutron port-create <sriov-net1 net-id> --binding:vnic-type direct

# nova boot --flavor m1.large --image <image> --nic port-id=<port> <vm name>


Examining the results

  • On the Compute node, we can now see that one VF has been allocated:

# ip link show enp5s0f1

12: enp5s0f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT qlen 1000
link/ether 44:1e:a1:73:3d:ab brd ff:ff:ff:ff:ff:ff
vf 0 MAC 00:00:00:00:00:00, spoof checking on, link-state auto
vf 1 MAC 00:00:00:00:00:00, spoof checking on, link-state auto
vf 2 MAC 00:00:00:00:00:00, spoof checking on, link-state auto
vf 3 MAC 00:00:00:00:00:00, spoof checking on, link-state auto
vf 4 MAC 00:00:00:00:00:00, spoof checking on, link-state auto
vf 5 MAC fa:16:3e:0e:3f:0d, vlan 83, spoof checking on, link-state auto
vf 6 MAC 00:00:00:00:00:00, spoof checking on, link-state auto


In the above example enp5s0f1 is the Physical Function (PF), and ‘vf 5’ is the VF allocated to an instance: it is the only VF showing a specific (non-zero) MAC address, and it is configured with VLAN ID 83, which was allocated from our configured range.

  • On the Compute node, we can also verify the virtual interface definition on Libvirt XML:

Locate the instance_name of the VM and the hypervisor it is running on:

# nova show <vm name>

The relevant fields are OS-EXT-SRV-ATTR:host and OS-EXT-SRV-ATTR:instance_name.


On the compute node run:

# virsh dumpxml <instance_name>



<interface type='hostdev' managed='yes'>
  <mac address='fa:16:3e:0e:3f:0d'/>
  <driver name='vfio'/>
  <source>
    <address type='pci' domain='0x0000' bus='0x05' slot='0x10' function='0x5'/>
  </source>
  <vlan>
    <tag id='83'/>
  </vlan>
  <alias name='hostdev0'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
</interface>



  • On the virtual machine instance, running the ‘ifconfig’ command shows an ‘eth0’ interface exposed to the guest operating system with an IP address assigned:


eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1400
inet  netmask  broadcast
inet6 fe80::f816:3eff:feb9:d855  prefixlen 64  scopeid 0x20<link>
ether fa:16:3e:0e:3f:0d  txqueuelen 1000  (Ethernet)
RX packets 182  bytes 25976 (25.3 KiB)


Using ‘lspci’ in the instance we can see that the interface is indeed a PCI device:


# lspci  | grep -i 82576

00:04.0 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)

Using ‘ethtool’ in the instance we can see that the interface driver is ‘igbvf’ which is Intel’s driver for 82576 Virtual Functions:


# ethtool -i eth0

driver: igbvf
version: 2.0.2-k

bus-info: 0000:00:04.0
supports-statistics: yes
supports-test: yes
supports-eeprom-access: no
supports-register-dump: yes
supports-priv-flags: no


As you can see, the interface behaves as a regular one from the instance point of view, and can be used by any application running inside the guest. The interface was also assigned an IPv4 address from Neutron which means that we have proper connectivity to the Controller/Network node where the DHCP server for this network resides.


Because the interface is attached directly to the network adapter and its traffic does not flow through any virtual bridges on the Compute node, it is important to note that Neutron security groups cannot be used with SR-IOV enabled instances.


What’s Next?

Red Hat Enterprise Linux OpenStack Platform 6 is the first version in which SR-IOV networking was introduced. While the ability to bind a VF into a Nova instance with an appropriate Neutron network is available, we are still looking to enhance the feature to address more use cases as well as to simplify the configuration and operation.

Some of the items we are currently considering include the ability to plug/unplug an SR-IOV port on the fly (currently not available), launching an instance with an SR-IOV port without explicitly creating the port first, and the ability to allocate an entire PF to a virtual machine instance. There is also active work to enable Horizon (Dashboard) support.

One other item is Live Migration support. An SR-IOV Neutron port may be directly connected to its VF as shown above (vnic_type ‘direct’), or it may be connected through a macvtap device that resides on the Compute node (vnic_type ‘macvtap’), which is then connected to the corresponding VF. The macvtap option provides a baseline for implementing Live Migration for SR-IOV enabled instances.
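As a sketch, requesting the macvtap variant changes only the vnic-type passed at port creation (mirroring the ‘direct’ example shown earlier):

```shell
# Attach the VF through a host macvtap device rather than binding it
# directly into the instance.
neutron port-create <sriov-net1 net-id> --binding:vnic-type macvtap
```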

Interested in trying the latest OpenStack-based cloud platform from the world’s leading provider of open source solutions? Download a free evaluation of Red Hat Enterprise Linux OpenStack Platform 6 or learn more about it from the product page.


OpenStack Summit Vancouver: Agenda Confirms 40+ Red Hat Sessions

by Jeff Jameson, Sr. Principal Product Marketing Manager, Red Hat — April 2, 2015

As this Spring’s OpenStack Summit in Vancouver approaches, the Foundation has posted the session agenda, outlining the final schedule of events. I am very pleased to report that Red Hat and eNovance have more than 40 approved sessions included in the week’s agenda, with a few more approved as joint partner sessions, and even a few more waiting as alternates.

This vote of confidence confirms that Red Hat and eNovance continue to remain in sync with the current topics, projects, and technologies the OpenStack community and customers are most interested in and concerned with.

Red Hat is also a headline sponsor in Vancouver this Spring, along with Intel, SolidFire, and HP, and will have a dedicated keynote presentation, along with the 40+ accepted sessions. To learn more about Red Hat’s accepted sessions, have a look at the details below. Be sure to visit us at the below sessions and at our booth (#H4). We look forward to seeing you in Vancouver in May!

For more details on each session, click on the title below:

Monday May 18th





What’s up Swift? The globally distributed community behind Swift (PANEL)

Christian Schwede



Dynamic Policy for Access Control

Adam Young


Kilo Libvirt/KVM Driver Update

Steve Gordon


Unobtrusive Intrusion Detection in Openstack

Dan Lambright


Manila: An Update on OpenStack’s Shared File Services Program

Sean Cohen

Bob Callaway

Mark Sturdevant


Optimizing Contributions with Globally Distributed Teams

Beth Cohen

Diane Mueller-Klingspor

Karin Levenstein

Fernando Oliveira

Kamesh Pemmaraju


The life of an OpenStack contributor in animated GIFs

Flavio Percoco

Emilien Macchi

Chmouel Boudjnah


Enhancing OpenStack Projects with Advanced SLA and Scheduling

Sylvain Bauza

Donald Dugger


OpenStack Horizon deep dive and customization

Matthias Runge


Storage security in a critical enterprise OpenStack environment

Sage Weil

Danny Al-Gaaf


Ambassadors community report (PANEL)

Erwan Gallan


Deploying OpenStack clouds with Stackforge Puppet modules

Emilien Macchi

Matt Fischer

Mike Dorman


From Archive to Insight: Debunking Myths of Analytics on Object Stores

Luis Pabon

Bill Owen

Simon Lorenz

Dean Hildebrand


Stabilizing the Jenga Tower: Scaling out Ceilometer

Gordon Chung

Tuesday May 19th





APIs Matter

Chris Dent

Jay Pipes


Ceph and OpenStack: current integration and roadmap

Josh Durgin

Sébastien Han


FlexPod with Red Hat Enterprise Linux OpenStack Platform 6

Dave Cain

Eric Railine


OpenStack Compute 101

Steve Gordon


OpenStack and OpenDaylight: The Way Forward

Chris Wright

Kyle Mestery

Colin Dixon


Public or Private Cloud, Amazon Web Services or OpenStack – what’s the difference and can I use both?

Jonathan Gershater

Wednesday May 20th





Deep Dive Into a Highly Available OpenStack Architecture

Arthur Berezin


How Neutron builds network topology for your multi-tier application?

Sadique Puthen


Ask the Experts: Are Containers a Threat to OpenStack? (PANEL)

Jan Mark Holzer



OpenStack Infrastructure Management with ManageIQ

John Hardy


Extending OpenStack Swift to Support Third Party Storage Systems

Luis Pabon

Prashanth Pai

Pete Zaitcev


Monitoring your Swift cluster health

Christian Schwede


Keystone advanced authentication methods

Nathan Kinder

Steve Martinelli

Henry Nash


Keeping OpenStack storage trendy with Ceph and containers

Sage Weil


State of SSL in Barbican

Ade Lee

Chelsea Winfree

John Wood


Telus Private Cloud POC with RHEL OpenStack Platform on FlexPod

Michael Bagg

Dimitar Ivanov


The Road to Enterprise-Ready OpenStack Storage as Service

Sean Cohen

Flavio Percoco

Thursday May 21st





OpenDaylight and OpenStack

Dave Neary


Ask the Experts: Designing Storage for the Enterprise

Neil Levine



Bare Metal Hadoop and OpenStack: Together at Last!

Keith Basil


The anatomy of an action: mining the event storm

Gordon Chung

Vladik Romanovsky


IPv6 impact on Neutron L3 HA

Sridhar Godham

Numan Siddique


Lessons learned on upgrades: the importance of HA and automation

Emilien Macchi

Frédéric Lepied


The Next Step of OpenStack Evolution for NFV Deployments

Chris Wright

Dirk Kutscher


A DevOps State of Mind

Chris Van Tuin


Pacemaker: OpenStack’s PID 1

David Vossel


Don’t change my mindset, I am not that open

Nick Barcet

Alexis Monville

An ecosystem of integrated cloud products

by Jonathan Gershater — March 27, 2015

In my prior post, I described how OpenStack from Red Hat frees you to pursue your business with the peace of mind that your cloud is secure and stable. Red Hat has several products that enhance OpenStack to provide cloud management, virtualization, a developer platform, and scalable cloud storage.

Cloud Management with Red Hat CloudForms            

CloudForms contains three main components:

  • Insight – Inventory, Reporting, Metrics
  • Control – Eventing, Compliance, and State Management
  • Automate – Provisioning, Reconfiguration, Retirement, and Optimization


Business Benefit: One unified tool to manage virtualization and an OpenStack cloud reduces the IT management overhead of multiple consoles and tools.
Use Case: Manage your Red Hat Virtualization, OpenStack, and VMware vSphere infrastructure with one tool, CloudForms.

Business Benefit: One unified tool to manage private OpenStack and public cloud with the three components above.
Use Case: For temporary capacity needs, burst to an Amazon or OpenStack public cloud.

Scale up with Red Hat Enterprise Virtualization

Virtualization improves efficiency, frees up resources, and cuts costs.

And as you plan for the cloud, it’s important to build common services that use your virtualization investment and the cloud, while avoiding vendor dependency.

Business Benefit: Consolidate your physical servers, lower costs, and improve efficiency.
Use Case: Run enterprise applications like Oracle, SAP, SAS, and Microsoft Exchange, and other traditional applications, on virtual servers.

Red Hat Ceph Storage                           

Ceph™ is a massively scalable, software-defined storage system that runs on commodity hardware. It provides a unified solution for cloud computing environments and manages block, object, and image storage.


Business Benefit: The Red Hat Enterprise Linux OpenStack Platform installer automatically configures the included storage driver and Ceph clients.
Use Case: Store virtual machine images, volumes, and snapshots, or provide Swift object storage for tenant applications.


Platform as a Service with Red Hat OpenShift            

An on-premise, private Platform-as-a-Service (PaaS) offering that allows you to deliver apps faster and meet your enterprise's growing application demands.

Business Benefit: Accelerate IT service delivery and streamline application development.
Use Case: A choice of programming languages and frameworks, databases, and development tools allows your developers to get the job done using the languages and tools they already know and trust. Including:

  • Web Console, Command-line, or IDE
  • Java(EE6), Ruby, PHP, Python, and Perl

With an open hybrid cloud infrastructure from Red Hat, your IT organization can better serve your business by delivering more agile and flexible solutions while protecting business assets and preparing for the future.



An OpenStack Cloud that frees you to pursue your business

by Jonathan Gershater — March 26, 2015

As your IT evolves toward an open, cloud-enabled data center, you can take advantage of OpenStack’s benefits: broad industry support, vendor neutrality, and fast-paced innovation.

As you move into implementation, your requirements for an OpenStack solution share a familiar theme: enterprise-ready, fully supported, and seamlessly integrated products.

Can’t we just install and manage OpenStack ourselves?

OpenStack is an open source project and freely downloadable. To install and maintain OpenStack, you need to recruit and retain engineers trained in Python and other technologies. If you decide to go it alone, consider:

  1. How do you know OpenStack works with your hardware?
  2. Does OpenStack work with your guest instances?
  3. How do you manage and upgrade OpenStack?
  4. When you encounter problems, how will you solve them? Some examples:

Problem scenario: Security breach
Using OpenStack from Red Hat: Dispatch a team of experts to assess. Issue a hotfix (and contribute the fix upstream for the benefit of all).
Do it yourself: Rely on your own resources to assess. Wait for a fix from the upstream community.

Problem scenario: Change of hardware, driver update, etc.
Using OpenStack from Red Hat: Certified hardware and software partners continuously and jointly develop and test. Issue a hotfix (and contribute the fix upstream for the benefit of all).
Do it yourself: Contact the hardware provider; sometimes a best-guess effort for unsupported and uncertified software.

Problem scenario: Application problem
Using OpenStack from Red Hat: Red Hat consulting services assess and determine whether the problem is with the application, the OpenStack configuration, the guest instance, the hypervisor, or the host Red Hat Enterprise Linux. Red Hat works across the stack to resolve and fix.
Do it yourself: Troubleshooting across the stack involves different vendors who do not have jointly certified solutions. Fixes come from a variety of sources or your own limited resources.


Thus the benefits of using OpenStack from Red Hat are:


Certified Hardware partners

Red Hat has a broad ecosystem of certified hardware for OpenStack. Red Hat is also a premier member of TSANet, which provides support and interoperability across vendor solutions.

Business Benefit: Provides a stable and long-term OpenStack cloud deployment. Helps you provide a high SLA to your customers.
Use Case: When problems arise you need solutions, not fingerpointing. The value of certified partners means Red Hat and its partners work together to resolve problems.


Certified software vendors

Red Hat Enterprise Linux OpenStack Platform provides an open pluggable framework for compute (nova), networking (neutron), and storage (cinder/glance) partner solutions.

Business Benefit: Choice. You are not locked into any one provider, or into one pricing and licensing model.
Use Case: You can integrate with an existing hypervisor, networking, or storage solution, or select a new one that meets your needs and changes as your future business demands evolve.


Certified guest OS on Instance/VM

An OpenStack cloud platform runs virtualized guest instances with an operating system. OpenStack from Red Hat is certified to run Microsoft Windows (see certifications), Red Hat Enterprise Linux, and SUSE guest operating systems. Other operating systems are supported per this model.


Business Benefit: The cloud provider can offer a stable platform with a higher SLA since there is support across the stack.
Use Case: If there is a problem with a guest instance, you are not alone in getting support; Red Hat works with the O/S provider to resolve the problem.


Integrated with Linux and the Hypervisor

As Arthur detailed in his blog post, OpenStack requires a hypervisor to run virtual machines and to manage CPU, memory, networking, storage resources, security, and drivers. Read how Red Hat helped a customer solve a problem across the stack.

Business Benefit: For support and maintenance, Red Hat Enterprise Linux OpenStack Platform co-engineers the hypervisor, operating system, and OpenStack services to create a production-ready platform.
Use Case: If you encounter a performance, scalability, or security problem that requires driver-level, kernel, Linux, or libvirt expertise, Red Hat is equipped to resolve it.


A secure solution

Red Hat is the lead developer of SELinux, which helps secure Linux. Red Hat has a team of security researchers and developers constantly hardening Red Hat Enterprise Linux.

Business Benefit: You can provide an OpenStack cloud that has government- and military-grade Common Criteria EAL4 and other certifications. Financial, healthcare, retail, and other sectors benefit from this military-grade security.
Use Case: Should a breach occur, you have one number to call to reach a team of experts that can diagnose the problem across the entire stack: operating system, hypervisor, and OpenStack.


Services expertise                    

Red Hat has extensive OpenStack design, deployment, and expert training experience across vertical industries, backed by proven Reference Architectures.

Business Benefit

You are not alone; you have a trusted partner as you walk the private cloud journey to deploy and integrate OpenStack in your environment.

Use Case

  1. Design, deploy, upgrade
  2. High availability
  3. Create an Open Hybrid Cloud
  4. Add Platform-as-Service
  5. ……

Lifecycle support

Upgrades, patches, and 24×7 support. OpenStack from Red Hat offers three years of Production life-cycle support and the underlying Red Hat Enterprise Linux has ten years of life-cycle support.

Business Benefit: Provides a long-term and stable OpenStack cloud platform. With Red Hat's 24×7 support, you can provide a high SLA to your customers.
Use Case: Obtaining the latest features and fixes in a new release of OpenStack allows you to meet user requirements, with Red Hat testing and validation.

Upstream compatibility

Red Hat OpenStack is fully compatible with the upstream OpenStack community code.

Business Benefit: Red Hat is a Platinum member of the 200+ member foundation that drives the direction of OpenStack. You can have confidence that the Red Hat distribution adheres to the community direction and is not a one-off or “forked” solution.
Use Case: Red Hat represents your needs in the community. Red Hat's commitment and leadership in the OpenStack community help ensure that customer needs are more easily introduced, developed, and delivered.

Contributions to the OpenStack project

Red Hat is a leader in the greater Linux and OpenStack communities. Red Hat is the top contributor to the last four OpenStack releases and provides direction to several related open source projects.

Business Benefit: Using Red Hat Enterprise Linux OpenStack Platform gives you a competitive advantage: Red Hat is intimately familiar with the code and can best provide support, insight, guidance, and influence over feature development.
Use Case: If you need support or new features as your OpenStack cloud evolves, you can be confident that Red Hat will assist you and continue to evolve with the upstream project.


Red Hat offers you proven open source technology, a large ecosystem of certified hardware and software partners and expert consulting services to free you up to pursue your business. In part two of this post, I will elaborate on the integrated products Red Hat offers to build and manage an Open Hybrid Cloud.

The OpenStack mark is either a registered trademark/service mark or trademark/service mark of the OpenStack Foundation, in the United States and other countries, and is used with the OpenStack Foundation's permission. We are not affiliated with, endorsed, or sponsored by the OpenStack Foundation or the OpenStack community.




















Co-Engineered Together: OpenStack Platform and Red Hat Enterprise Linux

by Arthur Berezin — March 23, 2015

OpenStack is not a software application that simply runs on top of any random Linux. OpenStack is tightly coupled to the operating system it runs on, and choosing the right Linux operating system, as well as the right OpenStack platform, is critical to providing a trusted, stable, and fully supported OpenStack environment.

OpenStack is an Infrastructure-as-a-Service cloud management platform: a set of software tools, written mostly in Python, that manages hosts at large scale and delivers an agile, cloud-like infrastructure environment where multiple virtual machine instances, block volumes, and other infrastructure resources can be created and destroyed rapidly on demand.


In order to implement a robust IaaS platform providing API access to low-level infrastructure components, such as block volumes and layer 2 networks, OpenStack leverages features exposed by the underlying operating system: kernel-space components, virtualization, networking and storage subsystems, hardware drivers, and services that rely on the operating system's capabilities.

Exploring how OpenStack Flows

OpenStack operates as the orchestration and operational management layer on top of many existing features and services. Let's first examine how the Nova Compute service is implemented, to better understand the OpenStack design concept. Nova is implemented through four Linux services: nova-api, which accepts Nova's API calls; nova-scheduler, which implements a weighting and filtering mechanism to schedule the creation of new instances; nova-conductor, which handles database operations; and nova-compute, which creates and destroys the actual instances. A message bus, implemented through Oslo Messaging and instantiated since Red Hat Enterprise Linux OpenStack Platform 5 using RabbitMQ, is used for inter-service communication.
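On a Red Hat Enterprise Linux OpenStack Platform deployment these four services run as ordinary systemd units, so the split is easy to observe directly (the openstack-nova-* unit names here assume the Red Hat packaging):

```shell
# On the Controller: the API, scheduler, and conductor services.
systemctl status openstack-nova-api openstack-nova-scheduler openstack-nova-conductor

# On each Compute node: the service that creates and destroys instances.
systemctl status openstack-nova-compute
```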

To create and destroy instances, which are usually implemented as virtual machines, the nova-compute service uses a backend driver to make libvirt API calls, while libvirt manages the qemu-kvm virtual machines on the host.

All OpenStack services are implemented in a similar manner using integral operating system components, and each OpenStack distribution may make different implementation choices. Here we focus on the choices made in Red Hat Enterprise Linux OpenStack Platform 6. For example, the DHCP service is implemented using the dnsmasq service, Security Groups are implemented using Linux iptables, and Cinder commonly uses LVM logical volumes for block storage and scsi-target-utils to export volumes over the iSCSI protocol.
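For example, these operating-system building blocks can be observed directly on the nodes (a sketch; the qdhcp namespace name embeds the Neutron network UUID of your particular deployment):

```shell
# Neutron's DHCP service is a dnsmasq process running inside a
# per-network namespace on the Network node:
ip netns                                   # lists qdhcp-<network-id> namespaces
ip netns exec qdhcp-<network-id> ip addr   # the namespace's DHCP interface

# Security Groups materialize as iptables rules on the Compute node:
iptables -S | grep -i neutron
```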

This is an oversimplified description of the complete picture and many additional sub-systems are also at play, such as SELinux with corresponding SELinux policies for each service and files in use, Kernel namespaces, hardware drivers and many others.

When deploying OpenStack in a highly available configuration, which is common in real-world production environments, the story becomes even more complex: HAProxy load-balances traffic, Pacemaker runs active-active clusters that use multicast for heartbeat, network interfaces are bonded using the kernel's LACP bonding modules, Galera implements multi-master database replication across the controller nodes, and the RabbitMQ message broker mirrors its queues across controller nodes.

Co-engineering the Operating system with OpenStack

Red Hat’s OpenStack technologies are purposefully co-engineered with the Red Hat Enterprise Linux operating system and integrated with all its subsystems, drivers, and supporting components, to help deliver trusted, long-term stability and a fully supported, production-ready OpenStack environment.

Red Hat is uniquely positioned to support customers effectively across the entire stack. We maintain an engineering presence that proactively works with each of the communities involved, starting with the Linux kernel and going all the way up to the hypervisor and the virtualized guest operating system. In addition, Red Hat Enterprise Linux OpenStack Platform maintains the largest OpenStack-certified partner ecosystem, working closely with OpenStack vendors to certify third-party solutions and to work through support cases when external solutions are involved.

Red Hat Enterprise Linux OpenStack Platform also benefits from the rich hardware certification ecosystem of Red Hat Enterprise Linux, which works with major hardware vendors to provide driver compatibility. For example, the Neutron single root I/O virtualization (SR-IOV) feature is built on top of the SR-IOV certified kernel driver. Similarly, support for offloading tunneling protocols (VXLAN, NVGRE), which is key for performance, is derived from Red Hat Enterprise Linux driver support.

We do this not only to deliver world-class, production-ready support for the whole platform stack, but also to drive new features requested by customers, since adding new functionality to OpenStack often requires invasive changes to a large portion of the stack, from the OpenStack APIs down to the kernel.

Introducing New NFV Features – NUMA, CPU Pinning, Hugepages

The Network Functions Virtualization (NFV) use case, which required adding support for NUMA, CPU pinning, and hugepages, is a good example of this approach. To support these features, work began at the kernel level, both in memory management and in KVM. The Red Hat Enterprise Linux 7.0 kernel added support for 2M and 1G huge pages for KVM virtual machines, along with IOMMU support for huge pages. Work continued in the Red Hat Enterprise Linux 7.1 kernel, which added support for advanced NUMA placement and dynamic huge pages per NUMA node.

In parallel with the kernel features, changes were made to qemu-kvm-rhev to utilize them, and they were exposed via the libvirt API and XML.

Red Hat engineers worked on getting the needed changes into the OpenStack Nova project to determine the availability of huge pages on a host and to assign them to individual VMs when requested. Nova was then enhanced so that users could define hugepages as a requirement either for all VMs booted from a given image, via image properties, or for all VMs booted with a given flavor, via flavor extra specs. The scheduler was enhanced to track the availability of huge pages as reported by the compute service and to confirm that each VM is scheduled to a host capable of fulfilling its hugepages requirement.
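The resulting scheduling behavior can be sketched as follows. The `hw:mem_page_size` extra spec is real, but the host data and the filtering function below are a simplified stand-in for Nova’s actual resource tracking:

```python
# Sketch of hugepages-aware scheduling: a flavor requests a page size via
# extra specs, and only hosts reporting enough free pages of that size pass.
# Host inventory here is invented for illustration.

flavor = {
    "name": "m1.nfv",
    "ram_mb": 4096,
    "extra_specs": {"hw:mem_page_size": "1048576"},  # 1 GiB pages, in KiB
}

hosts = {
    "compute-1": {1048576: 8, 2048: 1024},  # free pages per page size (KiB)
    "compute-2": {2048: 4096},              # only 2 MiB pages free
}

def hosts_with_hugepages(flavor, hosts):
    page_kb = int(flavor["extra_specs"]["hw:mem_page_size"])
    pages_needed = (flavor["ram_mb"] * 1024) // page_kb  # flavor RAM in KiB
    return [name for name, free in hosts.items()
            if free.get(page_kb, 0) >= pages_needed]

print(hosts_with_hugepages(flavor, hosts))  # ['compute-1']
```

A 4096 MB flavor backed by 1 GiB pages needs four free pages, so only compute-1 qualifies.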

Coordinating support for these features across the entire stack (kernel -> qemu-kvm -> libvirt -> nova-compute -> nova-scheduler -> nova-api) required several different teams, working in several upstream communities, to collaborate closely. Thanks to Red Hat’s strong engineering presence in each of the respective communities, and the fact that most of these engineers work for the same company, we were able to drive each of these features into the upstream code bases and coordinate backporting them to Red Hat Enterprise Linux and Red Hat Enterprise Linux OpenStack Platform. As a result, they can be used together in RHEL 7.1, the base operating system for Red Hat Enterprise Linux OpenStack Platform 6, which is based on the upstream Juno release.

Supporting Changes  

Red Hat Enterprise Linux 7.0 and 7.1 also included numerous enhancements to better support Red Hat Enterprise Linux OpenStack Platform 6. These include kernel changes in the core networking stack to better support VXLAN with TCP Segmentation Offloading (TSO) and Generic Segmentation Offloading (GSO), which previously could lead to guest crashes; fixes for issues with dhclient sending requests over VXLAN interfaces; SELinux policy fixes and enhancements for Glance image files and other services; fixes in qemu-kvm for librbd (Ceph); changes in iscsi-initiator-utils preventing potential host hangs during reboot; and much more.


In order to implement an IaaS solution and provide API access to low-level infrastructure components, OpenStack needs to be tightly integrated with the operating system it runs on, making the operating system a crucial factor for long-term OpenStack stability. Red Hat Enterprise Linux OpenStack Platform is co-engineered and integrated with various RHEL services and subsystems, leading to an IaaS cloud environment enterprise customers can trust. To provide the world-class support Red Hat customers are used to, Red Hat actively participates in the upstream communities of all OpenStack projects, positioning Red Hat to support OpenStack effectively across all components in use. This active upstream participation also enables Red Hat to introduce and drive new OpenStack features and functionality requested by customers.

Red Hat Enterprise Linux OpenStack Platform 6: SR-IOV Networking – Part I: Understanding the Basics

by Nir Yechiel — March 5, 2015

Red Hat Enterprise Linux OpenStack Platform 6: SR-IOV Networking – Part I: Understanding the Basics

Red Hat Enterprise Linux OpenStack Platform 6 introduces support for single root I/O virtualization (SR-IOV) networking. This is done through a new SR-IOV mechanism driver for the OpenStack Networking (Neutron) Modular Layer 2 (ML2) plugin, as well as necessary enhancements for PCI support in the Compute service (Nova).

In this blog post I would like to provide an overview of SR-IOV, and highlight why SR-IOV networking is an important addition to RHEL OpenStack Platform 6. We will also follow up with a second blog post going into the configuration details, describing the current implementation, and discussing some of the current known limitations and expected enhancements going forward.

PCI Passthrough: The Basics

PCI Passthrough allows direct assignment of a PCI device into a guest operating system (OS). One prerequisite for doing this is that the hypervisor must support either the Intel VT-d or AMD IOMMU extensions. Standard passthrough allows virtual machines (VMs) exclusive access to PCI devices and allows the PCI devices to appear and behave as if they were physically attached to the guest OS. In the case of networking, it is possible to utilize PCI passthrough to dedicate an entire network device (i.e., physical port on a network adapter) to a guest OS running within a VM.

What is SR-IOV?

Single root I/O virtualization, officially abbreviated as SR-IOV, is a specification that allows a PCI device to separate access to its resources among various PCI hardware functions: a Physical Function (PF) and one or more Virtual Functions (VFs). SR-IOV provides a standard way for a single physical I/O device to present itself to the PCIe bus as multiple virtual devices. While PFs are full-featured PCIe functions, VFs are lightweight functions that lack configuration resources. VF configuration and management is done through the PF, so the VFs can concentrate on data movement only. It is important to note that the overall bandwidth available to the PF is shared between all VFs associated with it.

In the case of networking, SR-IOV allows a physical network adapter to appear as multiple PCIe network devices. Each physical port on the network interface card (NIC) is represented as a Physical Function (PF), and each PF can be associated with a configurable number of Virtual Functions (VFs). Allocating a VF to a virtual machine instance enables network traffic to bypass the software layer of the hypervisor and flow directly between the VF and the virtual machine. This way, the logic for I/O operations resides in the network adapter itself, and the virtual machines behave as if they were interacting with multiple separate network devices. This allows near line-rate performance without the need to dedicate a separate physical NIC to each individual virtual machine, making SR-IOV more flexible than standard PCI passthrough.
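The bandwidth-sharing point above is worth a toy example: since all VFs share the PF’s line rate, worst-case per-VF throughput falls as more VFs carry traffic at once. The even-split model below is an assumption for illustration; real NICs may schedule traffic differently:

```python
# Toy model: worst-case per-VF bandwidth when N VFs saturate one PF.

def worst_case_vf_bandwidth_gbps(pf_line_rate_gbps, active_vfs):
    """Assume an even split across busy VFs (a simplification)."""
    if active_vfs == 0:
        return pf_line_rate_gbps
    return pf_line_rate_gbps / active_vfs

for vfs in (1, 4, 16):
    print(vfs, worst_case_vf_bandwidth_gbps(10, vfs))
# 1 10.0
# 4 2.5
# 16 0.625
```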

Since the network traffic completely bypasses the software layer of the hypervisor, including the software switch typically used in virtualization environments, the physical network adapter is responsible for managing the traffic flows, including proper separation and bridging. This means that the network adapter must support SR-IOV and implement some form of hardware-based Virtual Ethernet Bridge (VEB).

In Red Hat Enterprise Linux 7, which provides the base operating system for RHEL OpenStack Platform 6, driver support for SR-IOV network adapters has been expanded to cover more device models from vendors such as Intel, Broadcom, Mellanox, and Emulex. In addition, the number of available SR-IOV Virtual Functions has been increased, so that capable network devices can now be configured with up to 128 VFs per PF.


SR-IOV in OpenStack

Starting with Red Hat Enterprise Linux OpenStack Platform 4, it is possible to boot a virtual machine instance with standard, general-purpose PCI device passthrough. However, SR-IOV and PCI passthrough for networking devices are available only starting with Red Hat Enterprise Linux OpenStack Platform 6, where the necessary networking awareness was added.

Traditionally, a Neutron port is a virtual port that is typically attached to a virtual bridge (e.g., Open vSwitch) on a Compute node. With the introduction of SR-IOV networking support, it is now possible to associate a Neutron port with a Virtual Function that resides on the network adapter. For those Neutron ports, a virtual bridge on the Compute node is no longer required.

When a packet comes in to the physical port on the network adapter, it is placed into a specific VF pool based on its MAC address or VLAN tag, enabling direct memory access transfers of packets to and from the virtual machine. The hypervisor is not involved in moving the packet, which removes bottlenecks from the path. Virtual machine instances using SR-IOV ports and virtual machine instances using regular ports (e.g., linked to an Open vSwitch bridge) can communicate with each other across the network as long as the appropriate configuration (i.e., flat, VLAN) is in place.
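The classification step can be modeled as a simple lookup keyed on destination MAC and VLAN. This is a software stand-in for what the NIC does in hardware; the table and frame layout are illustrative:

```python
# Toy model of the NIC's hardware steering: frames are assigned to a VF's
# pool by (destination MAC, VLAN); everything else stays with the PF.

vf_pools = {
    ("fa:16:3e:aa:bb:01", 100): "vf0",
    ("fa:16:3e:aa:bb:02", 100): "vf1",
}

def classify(frame):
    key = (frame["dst_mac"], frame["vlan"])
    return vf_pools.get(key, "pf")  # unmatched traffic stays with the PF

print(classify({"dst_mac": "fa:16:3e:aa:bb:02", "vlan": 100}))  # vf1
print(classify({"dst_mac": "fa:16:3e:00:00:99", "vlan": 100}))  # pf
```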

While Ethernet is the most common networking technology deployed in today’s data centers, it is also possible to use SR-IOV pass-through for ports using other networking technologies, such as InfiniBand (IB). However, the current SR-IOV Neutron ML2 driver supports Ethernet ports only.

Why SR-IOV and OpenStack?

The main motivation for using SR-IOV networking is to provide enhanced performance characteristics (e.g., throughput, delay) for specific networks or virtual machines. The feature is extremely popular among our telecommunications customers and those seeking to implement virtual network functions (VNFs) on top of RHEL OpenStack Platform, a common use case for Network Functions Virtualization (NFV).

Each network function has a unique set of performance requirements. These requirements vary with the function’s role, whether control plane virtualization (e.g., signalling, session control, and subscriber databases), management plane virtualization (e.g., OSS, off-line charging, and network element managers), or data plane virtualization (e.g., media gateways, routers, and firewalls). SR-IOV is one of the popular techniques available today for reaching the high performance characteristics required mostly by data plane functions.

A Closer Look at RHEL OpenStack Platform 6

by Steve Gordon, Product Manager, Red Hat — February 24, 2015

Last week we announced the release of Red Hat Enterprise Linux OpenStack Platform 6, the latest version of our cloud solution providing a foundation for production-ready clouds. Built on Red Hat Enterprise Linux 7, this latest release is intended to provide a foundation for building OpenStack-powered clouds for advanced cloud users. Let’s take a deeper dive into some of the new features on offer!

IPv6 Networking Support

IPv6 is a critical part of the promise of the cloud. If you want to connect everything to the network, you better plan for massive scale and have enough addresses to use. IPv6 is also increasingly important in the network functions virtualization (NFV) and telecommunication service provider space.

This release introduces support for IPv6 address assignment for tenant instances, including those connected to provider networks. While IPv4 is more straightforward when it comes to IP address assignment, IPv6 offers more flexibility and options to choose from: both stateful and stateless DHCPv6 are supported, as well as Stateless Address Autoconfiguration (SLAAC).
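With SLAAC, an instance typically derives its own address from the router-advertised prefix and its MAC address using modified EUI-64: flip the universal/local bit of the first octet and insert ff:fe in the middle of the MAC. A pure-Python sketch of that derivation (the prefix and MAC are illustrative):

```python
# Modified EUI-64 derivation used by SLAAC to build an interface identifier
# from a MAC address, then combine it with the advertised /64 prefix.

import ipaddress

def slaac_address(prefix, mac):
    octets = [int(b, 16) for b in mac.split(":")]
    octets[0] ^= 0x02                            # flip the universal/local bit
    eui64 = octets[:3] + [0xFF, 0xFE] + octets[3:]
    iid = int.from_bytes(bytes(eui64), "big")
    net = ipaddress.IPv6Network(prefix)
    return ipaddress.IPv6Address(int(net.network_address) | iid)

print(slaac_address("2001:db8:1::/64", "fa:16:3e:aa:bb:cc"))
```

Note that many deployments prefer DHCPv6 precisely because SLAAC-derived addresses embed the MAC and are not centrally assigned.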

Neutron Routers High Availability

The neutron-l3-agent is the Neutron component responsible for layer 3 (L3) forwarding and network address translation (NAT) for tenant networks. This is a key piece of the project that hosts the virtual routers created by tenants and allows instances to have connectivity to and from other networks, including networks that are placed outside of the OpenStack cloud, such as the Internet.

Historically, the neutron-l3-agent has been placed on one or more dedicated nodes, usually bare-metal machines referred to as “Network Nodes”. Until now, you could utilize multiple Network Nodes to achieve load sharing by scheduling different virtual routers on different nodes, but not high availability or redundancy between the nodes. The challenge with this model was that all the routing for the OpenStack cloud happened at a centralized point. This introduced two main concerns:

  1. Each Network Node is a single point of failure (SPOF).
  2. Whenever routing is needed, packets from the source instance have to go through a router on a Network Node before being sent to the destination. This centralized routing creates a resource bottleneck and a suboptimal traffic flow.

This release addresses these issues by adding high availability to the virtual routers scheduled on the Network Nodes, so that when one router fails, another can take over automatically. This is implemented internally using the well-known VRRP protocol. Highly available Network Nodes can handle routing and centralized source NAT (SNAT), giving instances basic outgoing connectivity as well as advanced services such as virtual private networks or firewalls, which by design must see both directions of the traffic flow in order to operate properly.
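The VRRP-style election behind these HA routers can be modeled as picking the highest-priority live instance. This toy model glosses over real VRRP details such as advertisement timers and tie-breaking by IP address (here ties break by name, for determinism):

```python
# Toy VRRP-style master election: the highest-priority live router instance
# becomes master; a backup takes over when the master stops advertising.

def elect_master(routers):
    """routers: {name: {"priority": int, "alive": bool}} -> master name or None."""
    alive = {n: r for n, r in routers.items() if r["alive"]}
    if not alive:
        return None
    return max(alive, key=lambda n: (alive[n]["priority"], n))

routers = {
    "network-node-1": {"priority": 100, "alive": True},
    "network-node-2": {"priority": 50, "alive": True},
}
print(elect_master(routers))            # network-node-1
routers["network-node-1"]["alive"] = False
print(elect_master(routers))            # network-node-2
```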

Single root I/O virtualization (SR-IOV) networking

The ability to pass physical devices through to virtual machine instances, allowing for premium cloud flavors that provide physical hardware such as dedicated network interfaces or GPUs, was originally introduced in Red Hat Enterprise Linux OpenStack Platform 4. This release adds an SR-IOV mechanism driver (sriovnicswitch) to OpenStack networking to provide enhanced support for passing through networking devices that support SR-IOV.

This driver is available starting with Red Hat Enterprise Linux OpenStack Platform 6 and requires an SR-IOV enabled NIC on the Compute node. It allows SR-IOV Virtual Functions (VFs) to be assigned directly to VM instances, so that the VM communicates directly with the NIC controller, effectively bypassing the vSwitch. The Nova scheduler has also been enhanced to consider not only device availability but also the related external network connectivity when placing instances that include specific networking requirements in their boot request.

Support for Multiple Identity Backends

OpenStack Identity (Keystone), when used in production environments, is usually integrated with an existing identity management system such as an LDAP server. The default SQL identity backend is not an ideal choice for identity management: it only provides basic password authentication, it lacks password policy support, and its user management capabilities are fairly limited. Configuring Keystone to use an existing identity store has its challenges, but some of the changes in RHEL OpenStack Platform 6 make this easier.

RHEL OpenStack Platform 5 and earlier supported configuring Keystone with only a single identity backend, which meant that all service accounts and all OpenStack users had to exist in the same identity management system. In real-world production scenarios, it is commonly required to use the identity store in a read-only configuration, without intruding on its schema or changing accounts, so that accounts can be managed using native tools. Previously, one of the challenges was that the OpenStack service accounts had to be stored on the same LDAP server as the rest of the user accounts.

In RHEL OpenStack Platform 6, it is possible to configure Keystone to use multiple identity backends. This allows Keystone to use an LDAP server to store normal user accounts and the SQL backend to store OpenStack service accounts. It also allows multiple LDAP servers to be used by a single Keystone instance through Keystone Domains, which previously worked only with the SQL identity backend.
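For example, assuming the standard domain-specific configuration layout (enabled with `domain_specific_drivers_enabled` and `domain_config_dir` in keystone.conf), a deployment could keep service accounts in the SQL backend while pointing a hypothetical `customers` domain at a read-only LDAP server. The hostname, DNs, and file path below are illustrative:

```ini
# /etc/keystone/domains/keystone.customers.conf  (illustrative values)
[identity]
driver = keystone.identity.backends.ldap.Identity

[ldap]
url = ldap://ldap.example.com
user_tree_dn = ou=Users,dc=example,dc=com
user_objectclass = inetOrgPerson
# Keep LDAP read-only; accounts are managed with native tools.
user_allow_create = false
user_allow_update = false
user_allow_delete = false
```

Domains without such a file fall back to the default backend, which is where the SQL-backed service accounts live.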

Tighter Ceph Integration

The availability of Red Hat Enterprise Linux OpenStack Platform 6, based on OpenStack Juno, marks a particularly important milestone for Red Hat through the delivery of Ceph Enterprise 1.2 as a complete storage solution for Nova, Cinder, and Glance for virtual machine requirements.

This release introduces advanced support for ephemeral and persistent storage, featuring thin provisioning, snapshots, cloning, and copy-on-write.

  • With RHEL OpenStack Platform 6, VM storage functions can now be delivered transparently to the user on Ceph, and customers can now run diskless compute nodes.
  • The new Ceph-backed ephemeral volumes keep the data situated within the Ceph cluster, allowing the VM to boot more quickly because no data moves across the network. It also means that snapshots of an ephemeral volume can be taken instantaneously on the Ceph cluster and then placed into the Glance library, again without data migration across the network.

The Ceph RBD drivers are now shipped by default with RHEL OpenStack Platform 6 and configured through a single, integrated installer that simplifies and speeds deployment of Ceph as part of the OpenStack deployment.

Interested in trying the latest OpenStack-based cloud platform from the world’s leading provider of open source solutions? Download a free evaluation at:

Accelerating OpenStack adoption: Red Hat Enterprise Linux OpenStack Platform 6!

by Jeff Jameson, Sr. Principal Product Marketing Manager, Red Hat — February 19, 2015

On Tuesday February 17th, we announced the general availability of Red Hat Enterprise Linux OpenStack Platform 6, Red Hat’s fourth release of the commercial OpenStack offering to the market.

Based on the community OpenStack “Juno” release and co-engineered with Red Hat Enterprise Linux 7, the enterprise-hardened version 6 is aimed at accelerating the adoption of OpenStack among enterprise businesses, telecommunications companies, Internet service providers (ISPs), and public cloud hosting providers.

Since the first version released in July 2013, the “design principles” of the Red Hat Enterprise Linux OpenStack Platform product offering have been:

1. Co-engineer with Red Hat Enterprise Linux and KVM to enable capabilities for a well-managed IaaS implementation

OpenStack is a set of software services that requires a hypervisor to run virtual machines and to manage resources such as CPU, memory, networking, storage, security, and hardware drivers. And OpenStack services have a complex set of user-space dependencies on the underlying operating system, just like any other application. So that each of these required components can function together at full capacity, we purposefully engineer Red Hat Enterprise Linux OpenStack Platform to combine the world’s most trusted, secure, and proven Linux distribution, Red Hat Enterprise Linux, with Red Hat’s rigorously tested OpenStack technology. To meet the enterprise need for a predictable support and maintenance life cycle, Red Hat Enterprise Linux OpenStack Platform brings together innovation across hypervisor, operating system, and OpenStack technologies while creating a stable platform for production deployments.

2. Deliver a single production-ready distribution to meet enterprise and telco needs

Network Functions Virtualization has emerged as a key strategic initiative among telcos across the globe. To meet the performance and deterministic characteristics of NFV use cases, we’re committed to driving open innovation across the OS, hypervisor, and OpenStack layers to make OpenStack enterprise- and telco-ready. In Red Hat Enterprise Linux OpenStack Platform 6, features such as IPv6, SR-IOV networking, Neutron high availability (in “active-active” mode), and vCPU configurability are a testament to this. In addition, we’re excited about our collaborations with top-tier network equipment providers such as Alcatel-Lucent, Nokia, Huawei, and NEC to jointly innovate and enable accelerated adoption of OpenStack to meet Telco/NFV requirements.

3. Enable broadest OpenStack partner ecosystem

Announced in April 2013, the Red Hat OpenStack Cloud Infrastructure Partner Network includes partners providing compute, storage, networking, management and ISV solutions certified around Red Hat Enterprise Linux OpenStack Platform. Over the past 22 months, the Cloud Infrastructure Partner Network has grown to more than 235 partners, representing nearly 1000 solutions. In addition, we’re excited about the strategic collaborations to jointly engineer solutions with industry leaders such as Cisco and Dell and thereby support broad-based adoption of OpenStack.

Red Hat Enterprise Linux OpenStack Platform 6 adds over 700 feature/functionality enhancements, bug fixes, documentation changes, and security updates, all focused on creating a stable and production-ready cloud platform. Backed by the extensive ecosystem support and the breadth of training & certification and services offerings, we’re looking forward to promoting accelerated adoption of OpenStack across the globe.

For additional details and a deeper dive on the release, please visit the blog entries from my colleagues.

Radhesh Balakrishnan

General Manager, OpenStack

Red Hat

Red Hat Enterprise Virtualization 3.5 transforms modern data centers that are built on open standards

by Raissa Tona, Principal Product Marketing Manager, Red Hat — February 13, 2015

This week we announced the general availability of Red Hat Enterprise Virtualization 3.5. Red Hat Enterprise Virtualization 3.5 allows organizations to deploy an IT infrastructure that services traditional virtualization workloads while building a solid base for modern IT technologies.

Because of its open standards roots, Red Hat Enterprise Virtualization 3.5 enables IT organizations to more rapidly deliver and deploy transformative and flexible technology services in 3 ways:

  • Deep integration with Red Hat Enterprise Linux
  • Delivery of standardized services for mission critical workloads
  • Foundation for future looking, innovative, and highly flexible cloud enabled workloads built on OpenStack

Deep integration with Red Hat Enterprise Linux

Red Hat Enterprise Virtualization 3.5 is co-engineered with Red Hat Enterprise Linux including the latest version, Red Hat Enterprise Linux 7, which is built to meet modern data center and next-generation IT requirements. Due to this tight integration, Red Hat Enterprise Virtualization 3.5 inherits the innovation capabilities of the world’s leading enterprise Linux platform.

For customers looking to maximize the benefits of their virtualized infrastructure, Red Hat offers Red Hat Enterprise Linux with Smart Virtualization, a combined solution of Red Hat Enterprise Linux and Red Hat Enterprise Virtualization. This offering combines the performance, scalability, reliability and security features of Red Hat Enterprise Linux with the advanced virtualization management capabilities of Red Hat Enterprise Virtualization. The highly scalable Red Hat Enterprise Virtualization 3.5 can support 4 TB of memory per host, 4 TB of vRAM, and 128 vCPUs per virtual machine. To further enhance infrastructure scalability, Red Hat Enterprise Virtualization 3.5 includes:

  • Non-uniform memory access (NUMA) Support extended to Host NUMA, Guest Pinning and Virtual NUMA. The NUMA support allows customers to deploy highly scalable workloads with improved performance and minimizes resource overload related to physical memory access times.

Delivery of standardized services for mission critical workloads

Because consistency of operations across a common infrastructure is essential for mission critical applications, Red Hat Enterprise Virtualization 3.5 enables administrators to develop automated standards and processes across the infrastructure. With Red Hat Enterprise Virtualization 3.5, IT organizations can simplify and gain greater visibility into provisioning, configuring and monitoring of their virtualization infrastructure. Notable new features in Red Hat Enterprise Virtualization 3.5 that enhance standardization are:

  • oVirt Optimizer integration provides advanced real-time analytics of Red Hat Enterprise Virtualization workloads and identifies the balance of resource allocation that best meets users’ needs while provisioning new virtual machines.
  • Improved storage domain handling for disaster recovery provides support for migrating storage domains between different datacenters supported by Red Hat Enterprise Virtualization, enabling partner technologies to deliver site recovery capabilities.
  • Host integration with Red Hat Satellite adds the capability to provision hypervisors from bare metal and add them to Red Hat Enterprise Virtualization. The integration automates and enhances the lifecycle management of physical hypervisor hosts.

Foundation for future looking, innovative, and highly flexible cloud enabled workloads built on OpenStack

Red Hat Enterprise Virtualization is integrated and shares common services with OpenStack Glance and Neutron Services (tech preview). This integration allows administrators to break down silos and to deploy resources once across the infrastructure. Red Hat Enterprise Virtualization 3.5 allows administrators to define instance types that unify the process of provisioning virtual machines for both virtual and cloud enabled workloads.

Under Red Hat Cloud Infrastructure (a combination of Red Hat Enterprise Virtualization, Red Hat Enterprise Linux and Red Hat Enterprise Linux OpenStack Platform), Red Hat Enterprise Virtualization is a cornerstone platform for building solutions that customers can easily deploy across their physical, virtual and cloud platforms without sacrificing performance and scalability.

The flexible and open capabilities of Red Hat Enterprise Virtualization lay the foundation for a transition to an innovative and optimized IT infrastructure that can service modern data center IT requirements.

Additional Resources
Learn more about Red Hat Enterprise Virtualization 3.5
Learn more about Red Hat Enterprise Linux with Smart Virtualization
Learn more about Red Hat Cloud Infrastructure

IBM and Red Hat Join Forces to Power Enterprise Virtualization

by adamjollans — December 16, 2014

Adam Jollans is the Program Director for Cross-IBM Linux and Open Virtualization Strategy
IBM Systems & Technology Group

IBM and Red Hat have been teaming up for years. Today, Red Hat and IBM are announcing a new collaboration to bring Red Hat Enterprise Virtualization to IBM’s next-generation Power Systems through Red Hat Enterprise Virtualization for Power.

A little more than a year ago, IBM announced a commitment to invest $1 billion in new Linux and open source technologies for Power Systems. IBM has delivered on that commitment with the next-generation Power Systems servers incorporating the POWER8 processor which is available for license and open for development through the OpenPOWER Foundation. Designed for Big Data, the new Power Systems can move data around very efficiently and cost-effectively. POWER8’s symmetric multi-threading provides up to 8 threads per core, enabling workloads to exploit the hardware for the highest level of performance.

Red Hat Enterprise Virtualization combines hypervisor technology with a centralized management platform for enterprise virtualization. Red Hat Enterprise Virtualization Hypervisor, built on the KVM hypervisor, inherits the performance, scalability, and ecosystem of the Red Hat Enterprise Linux kernel for virtualization. As a result, your virtual machines are powered by the same high-performance kernel that supports your most challenging Linux workloads. Read the full post »

Co-Existence of Containers and Virtualization Technologies

by Federico Simoncelli — November 20, 2014

By Federico Simoncelli, Principal Software Engineer, Red Hat

As a software engineer working on the Red Hat Enterprise Virtualization (RHEV), my team and I are driven by innovation; we are always looking for cutting edge technologies to integrate into our product.

Lately there has been a growing interest in Linux container solutions such as Docker. Docker provides an open and standardized platform for developers and sysadmins to build, ship, and run distributed applications. Application images can be safely held in your organization’s registry, or they can be shared publicly in the Docker Hub portal for everyone to use and contribute to.

Linux containers are a well-known technology that runs isolated Linux systems on the same host, sharing the same kernel and resources such as CPU time and memory. Containers are more lightweight, perform better, and allow a higher density of instances than full virtualization, where virtual machines run dedicated kernels and operating systems on top of virtualized hardware. On the other hand, virtual machines are still the preferred solution when it comes to running highly isolated workloads or operating systems different from the host’s.

Read the full post »

Empowering OpenStack Cloud Storage: OpenStack Juno Release Storage Overview

by Sean Cohen, Principal Technical Product Manager, Red Hat — November 19, 2014

OpenStack’s tenth release added ten new storage backends and improved testing of third-party storage systems. The Cinder block storage project continues to mature each cycle, exposing more and more enterprise cloud storage infrastructure functionality.

Here is a quick overview of some of these key features.

Simplifying OpenStack Disaster Recovery with Volume Replication

The Icehouse release introduced a new Cinder backup API that allows exporting and importing backup service metadata, enabling “electronic tape shipping” style backup-export and backup-import capabilities for recovering OpenStack cloud deployments. The next step for disaster recovery enablement in OpenStack is the foundation of volume replication support at the block level.

Read the full post »

Simplifying and Accelerating the Deployment of OpenStack Network Infrastructure

by Valentina — November 18, 2014



The energy from the latest OpenStack Summit in Paris is still in the air. Its record attendance and vibrant interactions are a testament to the maturity and adoption of OpenStack across continents, verticals, and use cases.

It’s especially exciting to see its applications growing outside of core datacenter use cases with Network Function Virtualization being top of mind for many customers present at the Summit.

If we look back at the last few years, a fundamental role in fueling OpenStack adoption has been played by the distributions, which have taken the OpenStack project and helped turn it into an easy-to-consume, supported, enterprise-grade product.

At PLUMgrid we have witnessed this transformation summit after summit, customer deployment after customer deployment. Working closely with our customers and our OpenStack partners, we can attest to how much easier, smoother, and simpler an OpenStack deployment is today.

Similarly, PLUMgrid wants to simplify and accelerate the deployment of OpenStack network infrastructure, especially for those customers that are going into production today and building large-scale environments.

If you had the pleasure of being at the summit, you learned about all the new features introduced in Juno for the OpenStack networking component (and if not, check out this blog, which provides a good summary of all of Juno's networking features).

Read the full post »

Delivering Public Cloud Functionality in OpenStack

by John Meadows, Vice President of Business Development, Talligent — November 14, 2014



When it comes to delivering cloud services, enterprise architects commonly ask to create a public cloud-type rate plan for showback, chargeback, or billing. Public cloud packaging is fairly standardized across the big vendors, as innovations are quickly copied and basic virtual machines are assessed mainly on price. (I touched on the ongoing price changes and commoditization of public clouds in an earlier post.) Because of this standardization and relative pervasiveness, public cloud rate plans are well understood by cloud consumers. This makes them a good model for introducing enterprise users to new cloud services built on OpenStack.

Enterprise architects are also highly interested in on-demand, self-service functionality from their OpenStack clouds in order to match the immediate response of public clouds. We will cover how to deliver on-demand cloud services in a future post.

Pricing and Packaging Cloud Services
Public cloud rate plans are very popular, seeing adoption within enterprises, private hosted clouds, and newer public cloud providers alike. Most public cloud providers use the typical public cloud rate plan as a foundation, layering on services, software, security, and intangibles like reputation to build differentiated offerings.

Enterprise cloud architects use similar rate plans to demonstrate to internal customers that they can provide on-demand, self-service cloud services at a competitive price. To manage internal expectations and encourage good behavior, enterprises usually introduce cloud pricing via a showback model, which does not directly impact budgets or require an exchange of money. Users learn cloud cost structures and the impact of their resource usage. Later, full chargeback can be applied, where internal users are expected to pay for the services provided.

Read the full post »

OpenStack 2015 – The Year of the Enterprise?

by Nir Yechiel — November 10, 2014

This post is the collective work of all the Red Hat Enterprise Linux OpenStack Platform Product Managers who attended the summit.

The 11th OpenStack design summit, held last week in Europe for the first time, brought about 6,000 members of the OpenStack community to Paris to kick off the design of the “Kilo” release.

If 2014 was the year of the “Superuser”, then 2015 is clearly shaping up to be the “Year of the Enterprise”. The big question is: are we ready for enterprise mass adoption?

More than a year ago, at the OpenStack Havana design summit, it was clear that although interest in deploying OpenStack was growing, most enterprises were still holding back, mainly due to the project's lack of maturity. At this summit, the cool new kid in the open cloud infrastructure playground finally started to show real signs of maturity.

An important indicator of this is the increased number of deployments. The Kilo summit showcased about 16 large organizations running production workloads on OpenStack, including companies such as BBVA Bank, SAP SE (formerly SAP AG), and BMW.

Read the full post »

OpenStack Summit – Why NFV Really Matters

by David H. Deans — November 6, 2014

I’ve been following the news releases and other storylines that have emerged from the ongoing proceedings at the OpenStack Summit in Paris, France. Some key themes have surfaced. In my first editorial, I shared reasons why the market has matured. In my second story, I observed how simplification via automation would broaden the addressable market for hybrid cloud services.

The other key theme that has emerged is the increased focus on telecom network operator needs and wants – specifically, the primary telco strategies that are evolving as they continue to build out their hyperscale cloud infrastructures.

This is my domain. I’ve invested most of my professional life working for, or consulting with, domestic and international communication service providers. I’ve been actively involved in the business development of numerous wireline and wireless services, within both the consumer and commercial side of the marketplace. During more than two decades of experience, it’s been an amazing journey.

The closely related Technology, Media and Telecommunications (TMT) industries are already undergoing a transformation, as innovative products or services are developed by collaborative teams of creative contributors and brought to market at an accelerated rate.

Read the full post »

Red Hat Cloud Infrastructure 5 Now Available

by Maria Gallegos, Principal Product Marketing Manager, Red Hat — November 5, 2014

Gordon Tillmore, Red Hat
Earlier this week, we announced the release of Red Hat Cloud Infrastructure 5. Customers can use this release to move toward an open hybrid cloud that works alongside existing infrastructure investments and allows workload portability from a customer's private cloud to Amazon EC2, or the reverse, if desired. The product is our Infrastructure-as-a-Service solution, providing:

  • a flexible and open solution to build out a centrally managed heterogeneous virtualization environment,
  • a private cloud for traditional workloads based on virtualization technologies, and
  • a massively scalable OpenStack-based cloud for cloud-enabled workloads

Version 5 – an important release for Red Hat Cloud Infrastructure
Version 4 already included three tightly integrated Red Hat technologies: Red Hat CloudForms, an award-winning Cloud Management Platform (CMP); Red Hat Enterprise Virtualization, a full-featured enterprise virtualization solution; and Red Hat Enterprise Linux OpenStack Platform, our fully supported, enterprise-grade OpenStack offering. Red Hat Enterprise Linux has also been a key ingredient, serving as the basis for Red Hat Enterprise Virtualization and Red Hat Enterprise Linux OpenStack Platform, as well as a guest operating system at the tenant layer. Now, with Red Hat Cloud Infrastructure 5, Red Hat is introducing Satellite 6 to its award-winning cloud infrastructure. Satellite 6 is included at no extra cost, to help organizations better manage the lifecycle of their cloud infrastructure.

Read the full post »

OpenStack, Paris Summit: Day One Insights

by David H. Deans — November 4, 2014

Depending on your point of view, there are different ways to assess the progress of the evolving OpenStack project. Yesterday, I profiled “three reasons” why I believe there are encouraging signs that demonstrate how OpenStack has matured — and I gave an example of existing application case studies, as a key indicator.

I prefer to view the OpenStack upside potential through the lens of a business innovation consultant, where the technology is a means to an end – that being a desired commercial transformation. I referred to “superior digital business processes” as a primary motivation for exploring cloud computing services. So, what do I foresee, and how did I become fascinated by this particular topic?

I believe that today’s Global Networked Economy will lower any remaining geographic boundaries that may have previously limited competition in those industries that, to date, were largely untouched by the disruption made possible by the public Internet. The nascent Internet of Things has my attention – I want to be prepared for whatever comes next.

Freedom to Innovate with Cloud Services

Read the full post »

Three Reasons Why OpenStack has Matured

by David H. Deans — November 3, 2014

The OpenStack Summit, taking place in Paris, France this week, will be a turning point for those of us who study market development activity within the cloud computing infrastructure marketplace. I attended my first OpenStack Summit earlier this year, in Atlanta, Georgia. During the conference sessions, I was immediately engaged by the apparent enthusiasm and energy of the other attendees.

You know, it’s true; people that are driven by a strong sense of purpose really do radiate a high level of passion for their cause that can become somewhat contagious. It’s hard to resist a positive outlook.

That said, I’m not easily swayed by buzz or hype. As a consultant with nearly three decades of technology business experience, I tend to carefully consider all the facts before I offer an opinion. Most of my experience is within the telecom sector, so I was drawn to the conference sessions that focused on the business challenges that I knew very well. Upon returning home from the Atlanta Summit, I wrote a story about my observations; it was entitled “Exploring OpenStack cloud case studies.”

How the OpenStack Market has Evolved

I’ve observed several encouraging developments since the Atlanta Summit that I believe demonstrate the OpenStack market has now matured to a point where the next wave of enterprise user adoption will start to occur. As we enter 2015, I’ll also share periodic updates on my market assessment.

In the past, there have been numerous reports in the trade media that a lack of skilled, cloud-savvy technical talent has kept some IT organizations from acting on the cloud service pilot requests of internal constituents. This scenario has helped fuel the Shadow IT phenomenon, where public cloud services are procured and used directly by impatient Line of Business (LoB) leaders.

Vendors in the cloud computing community have responded by offering the support resources required by CIOs and IT managers – essentially creating the environment to address the staffing and skills demand in the marketplace. As an example, more OpenStack training classes are now available, and the associated skills certification process ensures that graduating students are prepared for the most common use cases.

Read the full post »

Red Hat, Nuage Networks, OpenStack, and KISS

by Scott Drennan — October 29, 2014
and Nir Yechiel

Nuage Networks logo


The reality is that IT is serious money – IDC estimates that the Internet of Things (IoT) market alone will hit $7.1 trillion by 2020![1] But a lot of that money stems from the IT industry practice of “lock-in” – trapping a customer into a proprietary technology and then charging high prices, in some instances up to 10X cost, for every component. For some reason, customers object to having to pick one vendor's approach, being subject to limitations (technological or otherwise), paying high markups for every incremental extension, and then paying high switching costs for the next solution at end of life in five years or less.

As a consequence, many of those customers are taking a good, hard look at open source software (OSS) that can minimize vendor lock-in. OSS communities also encourage the development of software solutions that run on industry-standard and reasonably priced hardware. In particular, OpenStack has been well received by businesses of all sizes, and the OpenStack community is growing by leaps and bounds, with 625% more participating developers and 307% more business members as of its fourth birthday![2] Since OpenStack can orchestrate operations for an entire datacenter, it offers a vision of the future where customers are free from server, network, and storage lock-in.

However, legacy naysayers have always articulated three catches with OSS:
1) Making it enterprise-grade in terms of scalability, reliability, and security
2) Ensuring that the code base grows over time so that others can move the ball forward
3) Getting enterprise-class support for the code base

Read the full post »

Delivering the Complete Open-Source Cloud Infrastructure and Software-Defined-Storage Story

by neilwlevine — October 24, 2014

Authored by Neil Levine, Director Product Marketing, Red Hat and Sean Cohen, Principal Technical Product Manager, Red Hat

The OpenStack summit in Paris not only marks the public release of Juno but also the six-month mark since Red Hat acquired Inktank, the commercial company behind Ceph. The acquisition underscored not only Red Hat's commitment to use open source to disrupt the storage market, as it did in the operating system market with Linux, but also its investment in OpenStack, where Ceph is a market-leading scale-out storage platform, especially for block storage.

Even prior to the acquisition, Inktank's commercial product – Inktank Ceph Enterprise – had been certified with Red Hat Enterprise Linux OpenStack Platform, and over the past six months the product teams have worked to integrate the two products even more tightly.
The first phase of this work has focused on simplifying the installation experience. The new Red Hat Enterprise Linux OpenStack Platform installer now handles configuration of the Ceph components on the controller and compute side, from installing the packages to configuring Cinder, Glance, and Nova to creating all the necessary authentication keys. With the Ceph client-side components now directly available in RHEL OpenStack Platform, much of what was a manual effort has been transformed and automated. In addition, the RHEL OpenStack Platform installer takes responsibility for configuring the storage cluster network topology, and will boot and configure the hosts used by the Ceph storage cluster.
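For readers curious what the installer is automating, the resulting Cinder configuration for a Ceph (RBD) backend looks roughly like this (the pool, user, and secret UUID values are illustrative placeholders, not the installer's actual defaults):

```ini
# /etc/cinder/cinder.conf - RBD (Ceph) block storage backend, values illustrative
[DEFAULT]
enabled_backends = ceph

[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = 00000000-0000-0000-0000-000000000000
```

Previously each of these settings, plus the matching Ceph authentication keys, had to be created and distributed by hand on every node.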

The Inktank Ceph Enterprise installer has also been modified to take pre-seeded configuration files from RHEL OpenStack Platform and use them to build out the storage cluster. With some of the Ceph services architected to run co-resident on the controller nodes, the number of physical nodes needed has been reduced without sacrificing security or performance.
Read the full post »

OpenStack Summit Paris: Agenda Confirms 22 Red Hat Sessions

by Jeff Jameson, Sr. Principal Product Marketing Manager, Red Hat — September 26, 2014

As this Fall’s OpenStack Summit in Paris approaches, the Foundation has posted the session agenda, outlining the schedule of events. With an astonishing 1,100+ sessions submitted for review, I was happy to see that Red Hat and eNovance have a combined 22 sessions included in the week's agenda, with two more as alternates.

As I’ve mentioned in the past, I really respect the way the Foundation sets the agenda – essentially deferring to the attendees and participants themselves, via a vote. Through this voting process, the subjects that are “top-of-mind” and of most interest are brought to the surface, resulting in a very current and cutting-edge set of discussions. And with so many accepted sessions, it again confirms that Red Hat, and now eNovance, are involved in some of the most current projects and technologies that the community is most interested in.

Read the full post »

Free webinar on the Heat orchestration service

by Maria Gallegos, Principal Product Marketing Manager, Red Hat —

On Tuesday, September 30, we will be presenting a Taste of Red Hat Training webinar dedicated to Heat, the Red Hat Enterprise Linux OpenStack Platform orchestration service that allows you to run multiple composite cloud applications. Two live sessions of the webinar will run that day, at 9 am EST and 2 pm EST, to accommodate the usual international audience.

Join Red Hat curriculum developer, Adolfo Vazquez, as he teaches you about the basics of the Heat orchestration service in Red Hat Enterprise Linux OpenStack Platform, the Heat core services, and how to configure applications on the OpenStack infrastructure. Content for the webinar is pulled directly from our popular Red Hat OpenStack Administration (CL210) course.
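For a flavor of what Heat orchestrates, a minimal HOT template that boots a single instance looks roughly like this (the image and flavor names are placeholders for values in your own cloud, and this sketch is not taken from the course material):

```yaml
heat_template_version: 2013-05-23

description: Minimal example - boot one instance (names are placeholders)

resources:
  my_server:
    type: OS::Nova::Server
    properties:
      image: rhel-7-guest        # an image registered in Glance
      flavor: m1.small           # a flavor defined in Nova
```

Heat reads the template, resolves the declared resources, and drives the underlying OpenStack services to create them; richer templates add networks, volumes, and scaling policies in the same declarative style.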

Click here for more information and to register.

Announcing Red Hat Enterprise Virtualization 3.5 Beta

by Raissa Tona, Principal Product Marketing Manager, Red Hat — September 18, 2014

Today, we are excited to announce the availability of Red Hat Enterprise Virtualization 3.5 Beta to existing Red Hat Enterprise Virtualization customers. The Beta release allows our customers to easily manage and automate many virtualization tasks while providing an on-ramp to accommodate cloud enabled workloads based on OpenStack. Red Hat Enterprise Virtualization 3.5 Beta provides new features across compute, storage, network, and infrastructure.

One key feature to highlight is the full integration with OpenStack Glance and Neutron services. This feature was previously in tech preview. The strong integration between Red Hat Enterprise Virtualization and OpenStack enables customers to eliminate silos and scale up to meet business demands.

Red Hat Enterprise Virtualization 3.5 Beta is available to all existing customers with active Red Hat Enterprise Virtualization subscriptions today. Please view the 3.5 Beta Installation Guide here for details on how to start testing the beta release.

Please note that RHEV 3.5 Beta 1 does not support the RHEV-H Hypervisor and only supports a RHEL Hypervisor Host. We apologize for this delay and plan for the RHEV-H Hypervisor to be available in the RHEV 3.5 Beta 2 refresh.

Also note that a last-second issue was identified with the dwh component that prevents its installation in RHEV 3.5 Beta 1. This will be resolved in the RHEV 3.5 Beta 2 refresh.

What’s Coming in OpenStack Networking for Juno Release

by Nir Yechiel — September 11, 2014

Neutron, historically known as Quantum, is the OpenStack project focused on delivering networking as a service. As the Juno development cycle wraps up, now is a good time to review some of the key changes we saw in Neutron during this exciting cycle, and to look at what is coming in the next upstream major release, set to debut in October.

Neutron or Nova Network?

The original OpenStack Compute network implementation, also known as Nova Network, assumed a basic model of performing all isolation through Linux VLANs and iptables. These are typically sufficient for small and simple networks, but larger customers are likely to have more sophisticated network requirements. Neutron introduces the concept of a plug-in, which is a back-end implementation of the OpenStack Networking API. A plug-in can use a variety of technologies to implement the logical API requests and offer a rich set of network topologies, including network overlays with protocols like GRE or VXLAN, and network services such as load balancing, virtual private networks or firewalls that plug into OpenStack tenant networks. Neutron also enables third parties to write plug-ins that introduce advanced network capabilities, such as the ability to leverage capabilities from the physical data center network fabric, or use software-defined networking (SDN) approaches with protocols like OpenFlow. One of the main Juno efforts is a plan to enable easier Nova Network to Neutron migration for users that would like to upgrade their networking model for the OpenStack cloud.
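As a concrete illustration of the plug-in model, tenant network overlays are commonly enabled through the ML2 plug-in's configuration, roughly like this (a sketch, not a complete production configuration; the VNI range is arbitrary):

```ini
# /etc/neutron/plugins/ml2/ml2_conf.ini - illustrative fragment
[ml2]
type_drivers = vxlan,vlan,flat
tenant_network_types = vxlan
mechanism_drivers = openvswitch

[ml2_type_vxlan]
vni_ranges = 1001:2000
```

The same logical API request – "create a tenant network" – is then realized as a VXLAN overlay here, but could equally be realized by a vendor or SDN mechanism driver without any change to the API.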

Performance Enhancements and Stability

The OpenStack Networking community is actively working on several enhancements to make Neutron a more stable and mature codebase. Among them, recent changes to the security-group implementation should yield significant improvement and better scalability for this popular feature. To recall, security groups allow administrators and tenants to specify the type of traffic and direction (ingress/egress) that is allowed to pass through a Neutron port, effectively creating an instance-level firewall filter. You can read this great post by Miguel Angel Ajo, the Red Hat engineer who led this effort in the Neutron community, to learn more about the changes.
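To make the default-deny, any-match-permits semantics concrete, here is a simplified sketch of how a security-group filter decides on a packet. The field names and data model are illustrative, not Neutron's actual implementation:

```python
# Simplified sketch of security-group semantics: rules are permit-only,
# anything not matched by a rule is dropped (default deny).

def rule_matches(rule, packet):
    """Return True if a single security-group rule allows this packet."""
    if rule["direction"] != packet["direction"]:
        return False
    if rule.get("protocol") and rule["protocol"] != packet["protocol"]:
        return False
    lo, hi = rule.get("port_range", (1, 65535))  # no range means all ports
    return lo <= packet["port"] <= hi

def allowed(rules, packet):
    """Any matching rule permits the packet; no match means drop."""
    return any(rule_matches(r, packet) for r in rules)

rules = [
    {"direction": "ingress", "protocol": "tcp", "port_range": (22, 22)},
    {"direction": "egress"},  # allow all outbound traffic
]

print(allowed(rules, {"direction": "ingress", "protocol": "tcp", "port": 22}))  # True
print(allowed(rules, {"direction": "ingress", "protocol": "tcp", "port": 80}))  # False
```

In Neutron these rules are rendered into iptables chains on each compute node, which is exactly where the scalability work mentioned above applies.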

In addition, there are continuous efforts to improve the upstream testing framework, and to create a better separation between unit tests and functional tests, as well as better testing strategy and coverage for API changes.
Read the full post »

OpenStack Resources for the vAdmin

by Raissa Tona, Principal Product Marketing Manager, Red Hat — September 8, 2014

Across many enterprise organizations, IT is driving innovation that allows companies to be more agile and gain a competitive edge. These are exciting times for the vAdmins who are at the center of this change. This innovation starts with bridging the gap between traditional virtualization workloads and cloud-enabled workloads based on OpenStack.

Organizations are embracing OpenStack because it allows them to scale more rapidly to meet evolving user demands, without sacrificing performance, on a stable, flexible, and cost-effective platform.

As a vAdmin, you might be asking yourself how OpenStack fits into your world of traditional virtualization workloads. The answer is that OpenStack is not a replacement; rather, it is an extension of traditional virtualization platforms.

To help vAdmins get started with OpenStack, we have created a dedicated page with numerous OpenStack resources including a solutions guide that explains the architectural differences between OpenStack and VMware vSphere, as well as an appliance that allows you to quickly run and deploy OpenStack in your VMware vSphere environment.

Visit this OpenStack Resources vAdmin page to learn how to get started with OpenStack in your existing infrastructure today.

Red Hat Enterprise Virtualization 3.4.1 Released

by Scott Herold — August 25, 2014

Principal Product Manager, Red Hat

I don’t often find myself getting overly excited about maintenance releases; however, Red Hat Enterprise Virtualization 3.4.1 is an exception due to two key factors:

  • Preview support for Red Hat Enterprise Linux 7 as a hypervisor host
  • Support for up to 4,000 GB memory in a single virtual machine

Red Hat Enterprise Virtualization 3.4 originally introduced official guest operating system support for Red Hat Enterprise Linux (RHEL) 7. Continuing down the path of providing the latest Red Hat technologies to our customers, I am proud to announce that Red Hat Enterprise Virtualization 3.4.1 has preview support for RHEL 7 as a hypervisor. Red Hat customers with active subscriptions can take advantage of RHEL 7 as a hypervisor either as a RHEL host or by using our thin Red Hat Enterprise Virtualization Hypervisor image.

Read the full post »

Juno Updates – Security

by Jeff Jameson, Sr. Principal Product Marketing Manager, Red Hat — August 5, 2014

Written by Nathan Kinder


There is a lot of development work going on in Juno in security-related areas. I thought it would be useful to summarize what I consider to be some of the more notable efforts under way in the projects I follow.


Nearly everyone I talk with who is using Keystone in anger is integrating it with an existing identity store such as an LDAP server. Using the SQL identity backend is really a poor identity management solution: it only supports basic password authentication, it lacks password policy support, and its user management capabilities are fairly limited. Configuring Keystone to use an existing identity store has its challenges, but some of the changes in Juno should make this easier.

In Icehouse and earlier, Keystone can only use a single identity backend, so all regular users and service users must exist in the same backend. In many real-world scenarios, the LDAP server used for users and credentials is considered read-only by anything other than the normal user provisioning tools, and a common problem is that the OpenStack service users are not wanted in the LDAP server. In Juno, it will be possible to configure Keystone to use multiple identity backends. This will allow a deployment to use an LDAP server for normal users and the SQL backend for service users. In addition, it should allow multiple LDAP servers to be used by a single Keystone instance when using Keystone Domains (which previously only worked with the SQL identity backend).
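Concretely, the multi-backend support builds on per-domain configuration files; a rough sketch of what such a setup looks like follows (the domain name, hostname, and suffix are hypothetical examples):

```ini
# /etc/keystone/keystone.conf - the default (SQL) backend keeps service users
[identity]
domain_specific_drivers_enabled = true
domain_config_dir = /etc/keystone/domains

# /etc/keystone/domains/keystone.corp.conf - a 'corp' domain backed by LDAP
[identity]
driver = keystone.identity.backends.ldap.Identity

[ldap]
url = ldap://ldap.example.com
suffix = dc=example,dc=com
```

Each file in the domain config directory overrides the identity driver for one Keystone domain, which is what lets normal users live in LDAP while service users stay in SQL.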

Read the full post »

Session Voting Now Open, for OpenStack Summit Paris!

by Jeff Jameson, Sr. Principal Product Marketing Manager, Red Hat — July 31, 2014

The voting polls for speaking sessions at this Fall’s OpenStack Summit in Paris, France are now open to the public. This time around, it seems Red Hatters are looking to participate in more sessions than at any previous Summit, helping to share innovation happening at Red Hat and in the greater community.

With an incredible number of sessions submitted for this Summit, we’ve got quite a diverse selection for you to vote on, spanning from low-level core compute, networking, and storage sessions to plenty of customer success stories and lessons learned.

Each and every vote counts, so please look through the Red Hat submitted sessions below and vote for your favorites! If you’re new to the voting process, you must sign up for a free OpenStack Foundation member account to cast your votes. Visit the foundation site here to sign up for free!

Once you’ve signed up as a member, click on the titles below to cast your vote. Remember, voting closes on Wednesday August 6th.

Have a look at our sessions here and cast your vote! I’ve sorted by category:


  1. OpenStack Storage APIs and Ceph: Existing Architectures and Future Features
  2. Deployment Best Practices for OpenStack Software-Defined Storage with Ceph
  3. What’s New in Ceph?
  4. OpenStack and Ceph – Match Made in the Cloud
  5. Large Scale OpenStack Block Storage with Containerized Ceph
  6. Red Hat Training: Using Ceph and Red Hat Storage Server in Cinder
  7. Volume Retyping and Cinder Backend Configuring
  8. Using OpenStack Swift for Extreme Data Durability
  9. Ask the Experts: Challenges for OpenStack Storage
  10. Deploying Red Hat Block and Object Storage with Mellanox and Red Hat Enterprise Linux OpenStack Platform
  11. Vanquish Performance Bottlenecks and Deliver Resilient, Agile Infrastructure, with All Flash Storage and OpenStack
  12. GlusterFS: The Scalable Open Source Backend for Manila
  13. Delivering Elastic Big Data Analytics with OpenStack Sahara and Distributed Storage
  14. Deploying Swift on a Scale-Out File System

Read the full post »

Juno Preview for OpenStack Compute (Nova)

by russellbryant — July 10, 2014

Originally posted on

We’re now well into the Juno release cycle. Here’s my take on a preview of some of what you can expect in Juno for Nova.


One area receiving a lot of focus this cycle is NFV. We’ve started an upstream NFV sub-team for OpenStack that is tracking and helping to drive requirements and development efforts in support of NFV use cases. If you’re not familiar with NFV, here’s a quick overview that was put together by the NFV sub-team:

NFV stands for Network Functions Virtualization. It defines the replacement of usually stand-alone appliances used for high- and low-level network functions – such as firewalls, network address translation, intrusion detection, caching, gateways, and accelerators – with a virtual instance or set of virtual instances, called Virtual Network Functions (VNFs). In other words, it can be seen as replacing some hardware network appliances with high-performance software that takes advantage of high-performance para-virtual devices, other acceleration mechanisms, and smart placement of instances. NFV originates from a working group of the European Telecommunications Standards Institute (ETSI) whose work is the basis of most current implementations. The main consumers of NFV are service providers (telecommunication providers and the like) who are looking to accelerate the deployment of new network services and, to do that, need to eliminate the constraint of the slow renewal cycle of hardware appliances, which do not autoscale and limit their innovation.

NFV support for OpenStack aims to provide the best possible infrastructure for such workloads to be deployed in, while respecting the design principles of an IaaS cloud. In order for VNFs to perform correctly in a cloud world, the underlying infrastructure needs to provide a certain number of functionalities, ranging from scheduling to networking and from orchestration to monitoring. This means that to correctly support NFV use cases in OpenStack, implementations may be required across most, if not all, main OpenStack projects, starting with Neutron and Nova.

Read the full post »

OpenStack Summit, Atlanta 2014: Year of the superuser?

by Steve Gordon, Product Manager, Red Hat — June 3, 2014

The OpenStack community gathered recently in Atlanta to define the roadmap for the upcoming Juno release cycle and reflect on Icehouse. Icehouse is the release that forms the basis of the upcoming Red Hat Enterprise Linux OpenStack Platform 5, a beta for which was announced during the week.

The biannual summit moved back to North America and again grew in size, with some 4,500 stackers in attendance, up from 3,500 in Hong Kong only six months ago. The OpenStack Foundation again handled this with aplomb, organizing an excellent event in the spacious Georgia World Congress Center.

2014, year of the superuser?

The increased presence of OpenStack superusers at this summit was hard to miss, with several keynote appearances – including AT&T, Disney, Sony, and Wells Fargo – as well as many other users leading or participating in general summit sessions. A convenient YouTube playlist of these user-led sessions has since been made available. The OpenStack Foundation also recently launched the publication to coincide with this renewed push to bring users forward.


Read the full post »

