How Red Hat’s OpenStack Partner Networking Solutions Offer Choice and Performance

by Jonathan Gershater — August 31, 2015

Successfully implementing an OpenStack cloud takes more than choosing an OpenStack distribution. With its community approach and rich ecosystem of vendors, OpenStack represents a viable option for cloud administrators who want to offer public-cloud-like infrastructure services in their own datacenter. Red Hat Enterprise Linux OpenStack Platform offers pluggable storage and networking options. This open approach contrasts with closed solutions such as VMware Integrated OpenStack (VIO), which supports only VMware NSX for L4-L7 networking or the VMware Distributed Switch for basic L2 networking.

Below are some of the networking partners who have certified their OpenStack Networking plugins with Red Hat Enterprise Linux OpenStack Platform and will be on display at VMworld 2015 in San Francisco at the Red Hat booth, 528 (Cisco is at booth 1721). See the exhibitor map.

Cisco

Cisco ACI offers a consolidated overlay and underlay solution that can be fully automated via OpenStack and the Cisco APIC. This solution scales to over 180,000 virtual machines and thousands of hypervisor hosts without the introduction of centralized bottlenecks or gateways.  It offers deep telemetry and visibility, tying together the OpenStack environment with the physical infrastructure to vastly improve operations and troubleshooting.  The solution also offers an optional, intent-based interface called Group-Based Policy, which leverages ACI’s application-centric policy automation and service chaining capabilities.

Selected differentiators between Red Hat and Cisco vs VMware VIO and NSX:

Red Hat and Cisco:
  • Fully distributed networking solution with no centralized gateways.
  • Simplified automation through Group-Based Policy.

VMware VIO and NSX:
  • NSX required for L4-L7 networking
  • NSX must be deployed into an Edge cluster

Nuage Networks

Utilizing an open plug-in to the Neutron framework of Red Hat OpenStack’s offering, Nuage Networks VSP provides an automated, real-time response to requests relayed from Red Hat Enterprise Linux OpenStack Platform. With the Red Hat and Nuage Networks SDN-based cloud solution, flexible, automated network configuration delivers instantaneous network connectivity so cloud applications can go live faster than with alternative approaches.

Selected differentiators between Red Hat and Nuage vs VMware VIO and NSX:

Red Hat and Nuage:
  • Fully distributed control plane (Nuage Networks VSD) for scale and reliability
  • Federation across multiple clouds, including public clouds
  • Network templates free application developers from having to deal with network settings
  • Declarative policies are intelligently interpreted at each network and end point – across clouds, datacenters, hypervisors, and bare metal servers.

VMware VIO and NSX:
  • Constrained by VMware clusters
  • Must add more VMs manually to add more clusters
  • Clusters also control CPU/memory resources that VMs receive
  • Provides networking within one datacenter
  • Status quo – application developers must understand and configure network settings
  • Declarative policies are applied within the VMware hypervisor only.

Juniper

Juniper and Red Hat have collaborated to deliver a validated solution and collaborative support model based on the Contrail Cloud Platform (based on OpenContrail) plus Red Hat Enterprise Linux OpenStack Platform for enterprise and provider cloud deployments.

Selected differentiators between Red Hat and Juniper vs VMware VIO and NSX:

Red Hat and Juniper:
  • Open source (OpenContrail), open standards (IP-VPN), and open interfaces (REST APIs) into the system ensure transparency, interoperability with multi-vendor physical networks, and investment protection
  • Simple policy definition and group-based policy enforcement automate network service insertion and improve business agility

VMware VIO and NSX:
  • Lock-in to the VMware software stack
  • Automation requires vRealize products

Midokura

Midokura Enterprise MidoNet provides fully distributed and advanced L2 to L4 network services. Leveraging solid open source technologies like Apache Zookeeper and Cassandra, MidoNet brings flow processing to the edge of the network and improves performance inside the virtual network. Like most overlays, traffic is encapsulated and sent over the physical network between hosts. In MidoNet, the flow processing can be done at line speed because the MidoNet agent has knowledge of the virtual topology without going off-box to a central controller.

Selected differentiators between Red Hat and Midokura vs VMware VIO and NSX:

Red Hat and Midokura:
  • Massive horizontal scale on distributed layer 3 gateways; scaling MidoNet is simple – just add nodes; multi-datacenter support is provided through top-of-rack switches running the MidoNet agent
  • Can trace live and past flows, providing visibility into the virtual network

VMware VIO and NSX:
  • Constrained by VMware technology to a single datacenter of modest size; no federation across clouds
  • Limited to live flows

PLUMgrid

PLUMgrid Open Networking Suite is a leading cloud networking solution for Red Hat Enterprise Linux OpenStack Platform. PLUMgrid helps overcome the limitations of many other OpenStack networking solutions by providing a rich set of high-performance virtual network functions, end-to-end encryption, and high availability features, plus automated installation, management, analytics, and operational tools.

Selected differentiators between Red Hat and PLUMgrid vs VMware VIO and NSX:

Red Hat and PLUMgrid:
  • PLUMgrid ONS is built on concept of Virtual Domains for micro-segmentation.
  • Fully distributed in-kernel portfolio of network and security functions

VMware VIO and NSX:
  • Vertically integrated single vendor solution

A Red Hat certified solution means customers get performance and reliability when choosing vendors for their deployment. Red Hat and its certified vendors work together to solve customer problems and provide best-of-breed solutions. Red Hat maintains a large ecosystem of certified hardware and software vendors across all products; for OpenStack specifically, there are more than 900 certified products.

Scaling NFV to 213 Million Packets per Second with Red Hat Enterprise Linux, OpenStack, and DPDK

by Jeff Jameson, Sr. Principal Product Marketing Manager, Red Hat — August 19, 2015

Written by: Andrew Theurer, Principal Software Engineer

There is a lot of talk about NFV and OpenStack, but frankly not much hard data showing how well OpenStack can perform with technologies like DPDK. We at Red Hat want to know, and I suspect many of you do as well. So we decided to see what RDO Kilo is capable of by testing multiple Virtual Network Functions (VNFs), deployed and managed completely by OpenStack.

Creating the ultimate NFV compute node

In order to scale NFV performance to incredible levels, we need to start with a strong foundation – the hardware that makes up the compute nodes. An NFV compute node needs incredible I/O capability and very fast memory. We selected a server with 2 Intel Haswell-EP processors, 24 cores, 64GB of memory @ 2133 MHz, and seven available PCIe gen3 slots. We populated six of these PCIe slots with Intel dual-port 40Gb adapters – that’s twelve 40Gb ports in one server!

Exploiting high performance hardware with Nova

The compute node we chose has the potential for amazing NFV performance, but only if it is configured properly. If you were not using OpenStack to deploy virtual machines, you would need to ensure your deployment process chooses resources correctly – from node-local CPU, memory, and I/O, to backing VM memory with 1GB pages. All of these are essential to getting top performance from your VMs. The good news is that OpenStack can do this for you. No longer are you required to get this “right” by hand. The user only needs to prepare for PCI passthrough and then specify the resources via Nova flavor keys:

nova flavor-key pci-pass-40Gb set "hw:mem_page_size=1048576"

nova flavor-key pci-pass-40Gb set "pci_passthrough:alias"="XL710-40Gb-PF:2"

When creating a new instance with this flavor, Nova will then ensure that the resources are node-local and the VM is backed with 1GB huge pages.
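
For context, here is a hedged sketch of the surrounding steps: creating the flavor that the keys above are set on, and booting an instance with it. The flavor sizing matches the VMs described below, while the image and instance names are illustrative:

    # Create the flavor (6144 MB RAM, 20 GB disk, 3 vCPUs; ID assigned automatically)
    nova flavor-create pci-pass-40Gb auto 6144 20 3

    # ...apply the two flavor keys shown above, then boot a VNF instance with it
    nova boot --flavor pci-pass-40Gb --image rhel7-dpdk-guest vnf-vm-01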

The Network Function under test

We deployed six VMs, using RHEL 7.1 and DPDK 2.0, each of them performing a basic VNF role: forwarding of layer-2 packets. DPDK (Data Plane Development Kit) is a set of libraries and drivers for incredibly fast packet processing. More information on DPDK is available here. Each VM includes 2 x 40Gb interfaces, 3 vCPUs, and 6GB of memory. Forwarding of network packets was enabled for both ports (in one port, out the other), in both directions. You can think of this network function as a bridge, or the base function of a firewall, located somewhere between your computer and a destination:

[Figure: basic network function – packets forwarded between two ports]

In this scenario, the “processing” we chose is packet forwarding, handled by the application “testpmd”, which is included in the DPDK software. We chose this because we wanted to test I/O throughput at the highest possible levels to confirm whether OpenStack Nova made the correct decisions regarding resource allocation. Once these VMs are provisioned, we have a compute node with:

[Figure: compute node running six VNF VMs]
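
Inside each guest, the forwarding function is driven by testpmd. As a rough sketch (the core mask, memory-channel count, and options here are assumptions; the post does not give the exact command line):

    # Bind the VM's two 40Gb ports to DPDK, then forward packets between them
    testpmd -c 0x7 -n 4 -- --forward-mode=io --auto-start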

We use a second system to generate network traffic, with a hardware configuration identical to the compute node. This system acts as both the “computer/phone/device” and the “server” in our test scenario. For each VM, the packet generator sends traffic to both of the VM’s ports, and it also receives the traffic that the VM forwards back. For our test metric, we count how many packets per second are transmitted, forwarded by the VM, and finally returned to the packet generator system.

[Figure: NFV test bed]

The test results

Note that we conduct this test with all six VMs processing packets at the same time. We used a packet size of 64 bytes in order to simulate the worst possible conditions for packet processing overhead. This allows us to drive to the highest levels of packets per second without prematurely hitting a bandwidth limit. In this scenario, we are able to achieve 213 million packets per second! OpenStack and DPDK are operating at nearly the theoretical maximum packet rate for these network adapters! In fact, when we tested these two systems without OpenStack or any virtualization, we observed 218 million packets per second. OpenStack with KVM is achieving 97.7% of bare metal!

One other important aspect to consider is how much CPU we are using for this test. Is there enough to spare for more advanced network functions? Could we scale to more network functions? Below is a graph of CPU usage as observed from the compute node:

[Figure: per-CPU utilization on the compute node]

Although processing 213 million packets per second is an incredible feat, this compute node still has half of the system’s CPU unused! Each of the VMs is using 2 of its 3 vCPUs to perform packet forwarding, leaving 1 vCPU for more advanced packet processing. These VMs could also be provisioned with 4 vCPUs without over-committing host CPUs, providing even more compute resources to them.

Real results, and more to come

We will continue reporting performance tests like this, showing actual performance of NFV and OpenStack that we achieve in our tests. We are also working with groups like OPNFV to help standardize benchmarks like this, so stay tuned. We have a lot more to share!

Performance and Scaling your Red Hat Enterprise Linux OpenStack Platform Cloud

by Joe Talerico - Senior Performance Engineer — August 17, 2015
and Roger Lopez - Principal Software Engineer

As OpenStack continues to grow into a mainstream Infrastructure-as-a-service (IaaS) platform, the industry seeks to learn more about its performance and scalability for use in production environments. As recently captured in this blog, common questions that typically arise are: “Is my hardware vendor working with my software vendor?”, “How much hardware would I actually need?”, and “What are the best practices for scaling out my OpenStack environment?”  

These common questions are often difficult to answer because they rely on environment specifics. With every environment being different, often composed of products from multiple vendors, how does one go about finding answers to these generic questions?

To aid in this process, Red Hat Engineering has developed a reference architecture capturing guidelines and considerations for performance and scaling of a Red Hat Enterprise Linux OpenStack Platform 6-based cloud. The reference architecture utilizes common benchmarks to generate load on a RHEL OpenStack Platform environment to answer these exact questions.

Where do I start?

With the vast number of features OpenStack provides, it also brings a lot of complexity to the table. The first place to start is not by trying to find performance and scaling results on an already running OpenStack environment, but to step back and take a look at the underlying hardware that is in place to potentially run this OpenStack environment. This allows one to answer the questions “How much hardware do I need?” and “Is my hardware working as intended?” while avoiding the complexities that can affect performance, such as file systems, software configurations, and changes in the OS.

A tool to answer these questions is the Automatic Health Check (AHC). AHC is a framework developed by eNovance to capture, measure, and report a system’s overall performance by stress testing its CPU, memory, storage, and network. AHC’s main objective is to provide an estimation of a server’s capabilities and ensure its basic subsystems are running as intended. AHC uses tools such as sysbench, fio, and netperf, and provides a series of fully automated benchmark tests that deliver consistent results across multiple runs. The test results are then captured and stored at a specified central location.

AHC is useful when doing an initial evaluation of a potential OpenStack environment as well as post-deployment. If a specific server causes problems, the same non-destructive AHC benchmark tests can be run on that server and the outcome compared with the initial results captured prior to deploying OpenStack. AHC is a publicly available open source project on GitHub: https://github.com/enovance/edeploy.
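
To make this concrete, here is the kind of baseline measurement AHC automates, expressed as standalone commands; the parameters are illustrative and are not AHC’s own defaults:

    # CPU baseline: compute primes up to a fixed bound (legacy sysbench syntax)
    sysbench --test=cpu --cpu-max-prime=20000 run

    # Disk baseline: 4k random reads against a 1GB file, bypassing the page cache
    fio --name=randread --rw=randread --bs=4k --size=1g --direct=1 --numjobs=4 --group_reporting

    # Network baseline: TCP throughput to a peer host running netserver
    netperf -H 192.168.0.2 -t TCP_STREAM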

My hardware is optimal and ready, what’s next?

Deploy OpenStack! Once it is determined that the underlying hardware meets the specified requirements to drive an OpenStack environment, the next step is to deploy OpenStack. While the installation of OpenStack itself can be complex, one of the keys to the performance and scalability of the entire environment is to isolate network traffic to a specific NIC for maximum bandwidth. The more NICs available within a system, the better. If you have questions on how to deploy RHEL OpenStack Platform 6, please take a look at the Deploying Highly Available Red Hat Enterprise Linux OpenStack Platform 6 with Ceph Storage reference architecture.

Hardware optimal? Check. OpenStack installed? Check.

With hardware running optimally and OpenStack deployed, the focus turns towards validating the OpenStack environment using the open source tool Tempest.

Tempest is the tool of choice for this task: it validates the OpenStack cloud by explicitly testing a number of scenarios to determine whether the cloud is running as intended. The specifics of setting up Tempest can be found in this reference architecture.
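
As a rough sketch of what a validation run looked like with the tooling of that era (Tempest used testrepository; the module filter below is only an example, and invocation details vary by release):

    # From a configured Tempest checkout
    testr init                             # one-time: create the test repository
    testr run --parallel tempest.scenario  # run the scenario tests in parallel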

Upon validating the OpenStack environment, the focus shifts to answering the scalability and performance questions. The two benchmarking tools used to do that are Rally and CloudBench (cbtool). Rally offers an assortment of actions to stress any OpenStack installation, and the aforementioned reference architecture has the details on how to use the benchmarking tools to test specific scenarios.
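
For a feel of what a Rally run involves, a minimal task file and invocation might look like this; the scenario arguments (flavor, image, iteration counts) are assumptions for illustration:

    # boot-and-delete.json: a minimal Rally task definition
    {
      "NovaServers.boot_and_delete_server": [
        {
          "args": {"flavor": {"name": "m1.small"}, "image": {"name": "rhel-guest"}},
          "runner": {"type": "constant", "times": 10, "concurrency": 2}
        }
      ]
    }

    # Execute the task against the deployment Rally is configured for
    rally task start boot-and-delete.json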

CloudBench (cbtool) is a framework that automates IaaS cloud benchmarking by running a series of controlled experiments. An experiment is executed by deploying and running a set of Virtual Applications (VApps). Within our reference architecture, the workload VApp consists of two critical roles used for benchmarking: the orchestrator role and the workload role.

Rally and CloudBench complement each other by benchmarking different aspects of the OpenStack cloud, thus offering different views on what to expect once the cloud goes into production.

Conclusion

To recap, when trying to determine the performance and scalability of a Red Hat Enterprise Linux OpenStack Platform installation, make sure to follow these simple steps:

  1. Validate the underlying hardware performance using AHC
  2. Deploy Red Hat Enterprise Linux OpenStack Platform
  3. Validate the newly deployed infrastructure using Tempest
  4. Run Rally with specific scenarios that stress the control plane of the OpenStack environment
  5. Run CloudBench (cbtool) experiments that stress applications running in virtual machines within the OpenStack environment

In our next blog, we will take a look at a specific Rally scenario and discuss how tweaking the OpenStack environment based on Rally results could allow us to achieve better performance. Stay tuned and check out our blog site often!


Upgrades are dying, don’t die with them

by Maxime Payant-Chartier, Technical Product Manager, Red Hat — August 12, 2015

We live in a world that has changed the way it consumes applications. The last few years have seen a rapid rise in the adoption of Software-as-a-Service (SaaS) and Platform-as-a-Service (PaaS). Much of this can be attributed to the broad success of Amazon Web Services (AWS), which is said to have grown revenue from $3.1B to $5B last year (Forbes). More and more people, enterprise customers included, are consuming applications and resources that require little to no maintenance, and any maintenance that does happen now goes unnoticed by users. This leaves traditional software vendors struggling to adapt their distribution models to make their software easier to consume. Lengthy, painful upgrades are no longer acceptable to users, forcing vendors to solve this problem.

Let’s face it, the impact of this on traditional software companies is starting to be felt. Their services and methods of doing business are now being compared to a newer, more efficient model – one that is not bogged down by the inefficiencies of the traditional approach. SaaS providers have the advantage that the software runs in their own datacenters, where they have easy access to it and control over the hardware, the architecture, the configurations, and so on.

Open source initiatives that target the enterprise market, like OpenStack, have to look at what others are doing in order to appeal to their intended audience. The grueling release cycle of the OpenStack community (major releases every 6 months) can put undue pressure on enterprise IT teams to update, deploy, and maintain environments, oftentimes leaving them unable to keep up from one release to the next. Inevitably, they start falling behind. And in some cases, their attempts to update are slower than the software release cycle, leaving them further behind with each release. This is a major hindrance to successful OpenStack adoption.

Solving only one side of the problem

Looking at today’s best practices for upgrading, we can see that the technology hasn’t quite matured yet. And although DevOps allows companies to deliver code to customers faster, it doesn’t solve the problem of installing the new underlying infrastructure – faster is not enough. This situation is even more critical when considering your data security practices. The ability to patch quickly and efficiently is key for companies to deploy security updates when critical security issues are spotted.

Adding to this further is how businesses can shorten the feedback loop for development releases. Releasing an alpha or beta, then waiting for people to test it and send relevant feedback, is a long process that causes delays for both the customer and the provider. Yet another friction point.

Efforts are currently being made with the community projects Tempest and Rally to provide better visibility into a cloud’s stability and functionality. These two projects are necessary steps in the right direction; however, they currently lack holistic integration and still only offer a view into a single cloud’s performance. Additionally, they do not yet allow an OpenStack distribution provider to check whether its distribution’s new versions work with specific configurations or hardware. Whatever the solution is, it has to compete with what is currently being offered in the “*aaS” space or it will be seen as outdated and risk losing users.

Automation: A way out

Continuous integration and continuous delivery (CI/CD) is all the rage these days, and it might offer part of the solution. Automation has to play a key role if companies are to keep up. We need to look into ways of making the process repeatable, reliable, incrementally improving, and customizable. Developers can no longer claim it worked on their laptop, so companies cannot limit themselves to saying it worked (or didn’t work) on their infrastructure. Software providers have to get closer to their customers and share in the pain.

Every OpenStack deployment is a custom job these days. Not everyone is running the same hardware, the same configurations, and so on. This means we have to adapt to those customizations and provide a framework that allows people to test their specific use cases. Once unit testing, integration testing, and functional testing have happened inside the walls of the software provider, the software has to go out into the wild and survive real customer use cases. And just as important, feedback has to be received quickly in order for the next iterations to be smaller, which eases the burden of identifying problems and fixing them as needed.

One of the concepts Red Hat is investigating is chaining different CI environments and managing the logs and log analysis from a “central CI”. We’ve been working with customers to validate this concept, testing it first on customer and partner equipment for those who have been able to set equipment aside for us. We want to deploy a new version and verify an update live, on premise, and include this step in our gating process before merging code. We are not satisfied unless it can be deployed and proven to work in a real environment. This means that CI/CD isn’t just about us anymore: it has to work on-site or a patch is not merged.

Currently in our testing, we receive status reports from different architectures, which allows us to identify whether an issue is specific to a certain configuration, hardware, or environment. It also allows us to identify more widespread issues that need to be fixed in the release. Ideally, we envision a point where, once a new version reaches a certain “acceptance threshold,” it is marked as ready for release. It is then automatically pushed out and updated in a customer’s pre-production environment.

A workflow might look something like this:

[Figure: continuous delivery workflow]

Source (modified): https://en.wikipedia.org/wiki/Continuous_delivery#/media/File:Continuous_Delivery_process_diagram.png

This type of workflow could integrate well with existing tools like Red Hat Satellite. Updates would still be provided as usual, but additional options to test upgrades, leveraging the capabilities of the cloud, would be made available. This would give system administrators an added level of certainty before deploying packages to existing servers, including logs to troubleshoot, should anything go wrong, before pushing to production environments.

Red Hat is committed to delivering a better and smoother upgrade experience for our customers and partners. While there are many questions that remain to be answered, notably around security or proprietary code, there is no doubt in my mind that this is the way forward for software. Automation has to take over the busy work of testing and upgrading to free up critical IT staff members to spend more time delivering features to their customers or users.

How to choose the best-fit hardware for your OpenStack deployment

by Jonathan Gershater — August 6, 2015

One of the benefits of OpenStack is the ability to deploy the software on standard x86 hardware, and thus not be locked in to custom architectures and high prices from specialized vendors.

Before you select your x86 hardware, you might want to consider how you will resolve hardware/software related issues:

  • Is my distribution of OpenStack, and the underlying Linux, certified to run on the hardware I use?
  • Will the vendor of my OpenStack distribution work with my hardware vendor to resolve issues?

There was a panel session (Cisco, Ooyala, Sprint, and Shutterfly) on OpenStack use cases at the OpenStack Summit in Vancouver, May 2015. At the end, an audience member asked, “How important is it that the OpenStack distribution is certified to run on the hardware you use?”

To listen to the panelists’ answer (less than two minutes), click here:

https://www.youtube.com/watch?v=AwCq9r9cExM&feature=youtu.be&t=2594

From the video

Cisco’s Director of Engineering and Operations, Rafi Khardalian:

  1. “OpenStack is a sliver of a large stack of software you are running.”
  2. “The Linux kernel is a key component and it is vitally critical to test it against the hardware you are running, so that the Linux kernel is reliable and can offer all the features you need consumed up the stack into OpenStack.” For example:
    1. “How reliable is VXLAN?”
    2. “We found better reliability with Intel cards vs Broadcom cards… it is the maturity of the driver set.”

And Ilan Rabinovich from Ooyala:

  • “Some of the more painful experiences we experienced… that piece of hardware and that driver are not making friends… it’s where you spend the most time troubleshooting.”

Next Steps to Consider

Red Hat maintains a large ecosystem of certified hardware and software vendors across all products; for OpenStack specifically, there are more than 900 certified products. Together with our certified hardware vendors and over 20 years of Linux experience, Red Hat is well equipped to resolve issues across the entire stack: hardware, Linux, the KVM hypervisor, and OpenStack.

Voting Open for OpenStack Summit Tokyo Submissions: Container deployment, management, security and operations – oh my!

by Steve Gordon, Product Manager, Red Hat — July 29, 2015

This week we have been providing a preview of Red Hat submissions for the upcoming OpenStack Summit to be held October 27-30 in Tokyo, Japan. Today’s grab bag of submissions focuses on containers, the relationship between them and OpenStack, and how to deploy, manage, secure, and operate workloads using them. This was already a hotbed of new ideas and discussion at the last summit in Vancouver, and we expect things will only continue to heat up in this area as a result of recent announcements in the lead-up to Tokyo!

The OpenStack Foundation allows its members to vote on the topics and presentations they would like to see as part of the selection process. To vote for one of the listed sessions, click on the session title below and you will be directed to the voting page. If you are a member of the OpenStack Foundation, just log in. If you are not, you are welcome to join now – it is simple and free.

Please make sure to vote before the deadline on Thursday, July 30 2015, at 11:59pm PDT.

Application & infrastructure continuous delivery using OpenShift and OpenStack
  • Mike McGrath – Senior Principal Architect, Atomic @ Red Hat
Atomic Enterprise on OpenStack
  • Jonathon Jozwiak – Principal Software Engineer @ Red Hat
Containers versus Virtualization: The New Cold War?
  • Jeremy Eder – Principal Performance Engineer @ Red Hat
Container security: Do containers actually contain? Should you care?
  • Dan Walsh – Senior Principal Software Engineer @ Red Hat
Container Security at Scale
  • Scott McCarty – Product Manager, Container Strategy @ Red Hat
Containers, Kubernetes, and GlusterFS, a match made in Tengoku
  • Luis Pabón – Principal Software Engineer @ Red Hat
  • Stephen Watt – Chief Architect, Big Data @ Red Hat
  • Jeff Vance – Principal Software Engineer @ Red Hat
Converged Storage in hybrid VM and Container deployments using Docker, Kubernetes, Atomic and OpenShift
  • Stephen Watt – Chief Architect, Big Data @ Red Hat
Deploying and Managing OpenShift on OpenStack with Ansible and Heat
  • Diane Mueller – Director Community Development, OpenShift @ Red Hat
  • Greg DeKoenigsberg –  Vice President, Community @ Ansible
  • Veer Michandi – Senior Solution Architect @ Red Hat
  • Ken Thompson – Senior Cloud Solution Architect @ Red Hat
  • Tomas Sedovic – Senior Software Engineer @ Red Hat
Deploying containerized applications across the Open Hybrid Cloud using Docker and the Nulecule spec
  • Tushar Katarki – Integration Architect @ Red Hat
  • Aaron Weitekamp – Senior Software Engineer @ Red Hat
Deploying Docker and Kubernetes with Heat and Atomic
  • Steve Gordon – Senior Technical Product Manager, OpenStack @ Red Hat
Develop, Deploy, and Manage Applications at Scale on an OpenStack based private cloud
  • James Labocki – Product Owner, CloudForms @ Red Hat
  • Brett Thurber – Principal Software Engineer @ Red Hat
  • Scott Collier – Senior Principal Software Engineer @ Red Hat
How to Train Your Admin
  • Aleksandr Brezhnev – Senior Principal Solution Architect @ Red Hat
  • Patrick Rutledge – Principal Solution Architect @ Red Hat
Minimizing or eliminating service outages via robust application life-cycle management with container technologies
  • Tushar Katarki – Integration Architect @ Red Hat
  • Aaron Weitekamp – Senior Software Engineer @ Red Hat
OpenStack and Containers Advanced Management
  • Federico Simoncelli – Principal Software Engineer @ Red Hat
OpenStack & The Future of the Containerized OS
  • Daniel Riek – Senior Director, Systems Design & Engineering @ Red Hat
Operating Enterprise Applications in Docker Containers with Kubernetes and Atomic Enterprise
  • Mike McGrath – Senior Principal Architect, Atomic @ Red Hat
Present & Future-proofing your datacenter with SDS & OpenStack Manila
  • Luis Pabón – Principal Software Engineer @ Red Hat
  • Sean Murphy – Product Manager, Red Hat Storage @ Red Hat
  • Sean Cohen – Principal Product Manager, OpenStack @ Red Hat
Scale or Fail – Scaling applications with Docker, Kubernetes, OpenShift, and OpenStack
  • Grant Shipley – Senior Manager @ Red Hat
  • Diane Mueller – Director Community Development, OpenShift @ Red Hat

Thanks for taking the time to help shape the next OpenStack summit!

Voting Open for OpenStack Summit Tokyo Submissions: Deployment, management and metering/monitoring

by Keith Basil, Principal Product Manager, Red Hat — July 28, 2015

Another cycle, another OpenStack Summit, this time on October 27-30 in Tokyo. The Summit is the best opportunity for the community to gather and share knowledge, stories, and strategies to move OpenStack forward. With more than 200 breakout sessions, hands-on workshops, collaborative design sessions, tons of opportunities for networking, and perhaps even some sightseeing, the Summit is the event everyone working or planning to work with OpenStack should attend.

Critical subjects, awesome sessions

To select those 200+ sessions, the community proposes talks that are selected by your vote. We would like to showcase our proposed sessions about some of the most critical subjects of an OpenStack cloud: deployment, management, and metering/monitoring.

There are multiple ways to deploy, manage, and monitor clouds, but we would like to present our contributions to the topic, sharing both code and vision to tackle this subject now and in the future. With sessions about TripleO, Heat, Ironic, Puppet, Ceilometer, Gnocchi, and troubleshooting, we’ll cover the whole lifecycle of OpenStack, from planning a deployment, to actually executing it, to monitoring and maintaining it over the long term. Click on the links below to read the abstracts and vote for the topics you want to see in Tokyo.

Deployment and Management

OpenStack on OpenStack (TripleO): First They Ignore You..
  • Dan Sneddon – Principal OpenStack Engineer @ Red Hat
  • Keith Basil – Principal Product Manager, OpenStack Platform @ Red Hat
  • Dan Prince – Principal Software Engineer @ Red Hat
Installers are dead, deploying our bits is a continuous process
  • Nick Barcet – Director of OpenStack Product Management @ Red Hat
  • Keith Basil – Principal Product Manager, OpenStack Platform @ Red Hat
TripleO: Beyond the Basic Openstack Deployment
  • Steven Hillman – Software Engineer @ Cisco Systems
  • Shiva Prasad Rao – Software Engineer @ Cisco Systems
  • Sourabh Patwardhan – Technical Leader @ Cisco Systems
  • Saksham Varma – Software Engineer @ Cisco Systems
  • Jason Dobies – Principal Software Engineer @ Red Hat
  • Mike Burns – Senior Software Engineer @ Red Hat
  • Mike Orazi – Manager, Software Engineering @ Red Hat
  • John Trowbridge – Software Engineer, Red Hat @ Red Hat
Troubleshoot Your Next Open Source Deployment
  • Lysander David – IT Infrastructure Architect @ Symantec
Advantages and Challenges of Deploying OpenStack with Puppet
  • Colleen Murphy – Cloud Software Engineer @ HP
  • Emilien Macchi – Senior Software Engineer @ Red Hat
Cloud Automation: Deploying and Managing OpenStack with Heat
  • Snehangshu Karmakar – Cloud Curriculum Manager @ Red Hat
Hands-on lab: Deploying Red Hat Enterprise Linux OpenStack Platform
  • Adolfo Vazquez – Curriculum Manager @ Red Hat
TripleO and Heat for Operators: Bringing the values of Openstack to Openstack Management
  • Graeme Gillies – Principal Systems Administrator @ Red Hat
The omniscient cloud: How to know all the things with bare-metal inspection for Ironic
  • Dmitry Tantsur – Software Engineer @ Red Hat
  • John Trowbridge – Software Engineer @ Red Hat
Troubleshooting A Highly Available Openstack Deployment.
  • Sadique Puthen – Principal Technical Support Engineer @ Red Hat
Tuning HA OpenStack Deployments to Maximize Hardware Capabilities
  • Vinny Valdez – Sr. Principal Cloud Architect @ Red Hat
  • Ryan O’Hara – Principal Software Engineer @ Red Hat
  • Dan Radez – Sr. Software Engineer @ Red Hat
OpenStack for Architects
  • Michael Solberg – Chief Field Architect @ Red Hat
  • Brent Holden – Chief Field Architect @ Red Hat
A Day in the Life of an Openstack & Cloud Architect
  • Vijay Chebolu – Practice Lead @ Red Hat
  • Vinny Valdez – Sr. Principal Cloud Architect @ Red Hat
Cinder Always On! Reliability and scalability – Liberty and beyond
  • Michał Dulko – Software Engineer @ Intel
  • Szymon Wróblewski – Software Engineer @ Intel
  • Gorka Eguileor – Senior Software Engineer @ Red Hat

Metering and Monitoring

Storing metrics at scale with Gnocchi, triggering with Aodh
  • Julien Danjou – Principal Software Engineer @ Red Hat

Voting Open for OpenStack Summit Tokyo Submissions: Storage Spotlight

by Sean Cohen, Principal Technical Product Manager, Red Hat —

The OpenStack Summit, taking place October 27-30 in Tokyo, will be a five-day conference for OpenStack contributors, enterprise users, service providers, application developers, and ecosystem members. Attendees can expect visionary keynote speakers, 200+ breakout sessions, hands-on workshops, collaborative design sessions, and lots of networking. In keeping with the open source spirit, you are in the front seat to cast your vote for the sessions that are important to you!

Today we will take a peek at some recommended storage-related session proposals for the Tokyo summit – be sure to vote for your favorites! To vote, click on the session title below and you will be directed to the voting page. If you are a member of the OpenStack Foundation, just log in. If you are not, you are welcome to join now – it is simple and free.

Please make sure to vote before the deadline on Thursday, July 30 2015, at 11:59pm PDT.

Block Storage

OpenStack Storage State of the Union
  • Sean Cohen, Principal Product Manager @ Red Hat
  • Flavio Percoco, Senior Software Engineer @ Red Hat
  • Jon Bernard, Senior Software Engineer @ Red Hat
Ceph and OpenStack: current integration and roadmap
  • Josh Durgin, Senior Software Engineer @ Red Hat
  • Sébastien Han, Senior Cloud Architect @ Red Hat
State of Multi-Site Storage in OpenStack
  • Sean Cohen, Principal Product Manager @ Red Hat
  • Neil Levine, Director of Product Management @ Red Hat
  • Sébastien Han, Senior Cloud Architect @ Red Hat
Block Storage Replication with Cinder
  • John Griffith, Principal Software Engineer @ SolidFire
  • Ed Balduf, Cloud Architect @ SolidFire
Sleep Easy with Automated Cinder Volume Backup
  • Lin Yang, Senior Software Engineer @ Intel
  • Lisa Li, Software Engineer @ Intel
  • Yuting Wu, Engineer @ Awcloud
Flash Storage and Faster Networking Accelerate Ceph Performance
  • John Kim, Director of Storage Marketing @ Mellanox Technologies
  • Ross Turk, Director of Product Marketing @ Red Hat Storage

File Storage

Manila – An Update from Liberty
  • Sean Cohen, Principal Product Manager @ Red Hat
  • Akshai Parthasarathy, Technical Marketing Engineer @ NetApp
  • Thomas Bechtold, OpenStack Cloud Engineer @ SUSE
Manila and Sahara: Crossing the Desert to the Big Data Oasis
  • Ethan Gafford, Senior Software Engineer @ Red Hat
  • Jeff Applewhite, Technical Marketing Engineer @ NetApp
  • Weiting Chen, Software Engineer @ Intel
GlusterFS making things awesome for Swift, Sahara, and Manila.
  • Luis Pabón, Principal Software Engineer @ Red Hat
  • Thiago da Silva, Senior Software Engineer @ Red Hat
  • Trevor McKay, Senior Software Engineer @ Red Hat

Object Storage

Benchmarking OpenStack Swift
  • Thiago da Silva, Senior Software Engineer @ Red Hat
  • Christian Schwede, Principal Software Engineer @ Red Hat
Truly durable backups with OpenStack Swift
  • Christian Schwede, Principal Software Engineer @ Red Hat
Encrypting Data at Rest: Let’s Explore the Missing Piece of the Puzzle
  • Dave McCowan, Technical Leader, OpenStack @ Cisco
  • Arvind Tiwari, Technical Leader @ Cisco


DevOps, Continuous Integration, and Continuous Delivery

by Maxime Payant-Chartier, Technical Product Manager, Red Hat —

As we all turn our eyes towards Tokyo for the next OpenStack Summit, the time has come to make your voice heard as to which talks you would like to attend while you are there. Remember, even if you are not attending the live event, many sessions get recorded and can be viewed later, so make your voice heard and influence the content!

Let me suggest a couple of talks under the theme of DevOps, Continuous Integration, and Continuous Delivery – remember to vote for your favorites by midnight Pacific Standard Time on July 30th, and we will see you in Tokyo!

Continuous Integration is an important topic; we can see this through the amount of effort deployed by the OpenStack CI team. OpenStack deployments all over the globe cover a wide range of possibilities (NFV, hosting, extra services, advanced data storage, etc.). Most of them come with their own technical specificities, including hardware, uncommon configurations, network devices, etc.

This makes these OpenStack installations unique and hard to test. If we want to properly fit them into the CI process, we need new methodology and tooling.

Rapid innovation, changing business landscapes, and new IT demands force businesses to make changes quickly. The DevOps approach is a way to increase business agility through collaboration, communication, and integration across different teams in the IT organization.

In this talk we’ll give you an overview of a platform, called Software Factory, that we develop and use at Red Hat. It is an open source platform inspired by OpenStack’s development workflow that embeds, among other tools, Gerrit, Zuul, and Jenkins. The platform can be easily installed on an OpenStack cloud thanks to Heat, and can rely on OpenStack to perform CI/CD of your applications.

One of the best success stories to come out of OpenStack is the Infrastructure project. It encompasses all of the systems used in the day-to-day operation of the OpenStack project as a whole. More and more projects and companies are seeing the value of the OpenStack git workflow model and are now running their own versions of the OpenStack continuous integration (CI) infrastructure. In this session, you’ll learn the benefits of running your own CI project, how to accomplish it, and best practices for staying abreast of upstream changes.

The need to provide better quality while keeping up with the growing number of projects and features led Red Hat to adapt its processes. Moving from a three-team process (Product Management, Engineering, and QA) to a feature-team approach, with each team embedding all the actors of the delivery process, was one of the approaches we took and one we are progressively spreading.

We deliver a very large number of components that need to be engineered together to deliver their full value, and which require delicate assembly as they work together as a distributed system. How can we do this in a time box without giving up on quality?

Learn how to get a Vagrant environment running as quickly as possible, so that you can start iterating on your project right away.

I’ll show you an upstream project called Oh-My-Vagrant that does the work and adds all the tweaks to glue different Vagrant providers together perfectly.

This talk will include live demos of building Docker containers, orchestrating them with Kubernetes, adding in some Puppet, and gluing it all together with Vagrant and Oh-My-Vagrant. Getting familiar with these technologies will help when you’re automating OpenStack clusters.

In the age of service, core builds become a product in the software supply chain. Core builds shift from a highly customized stack that meets ISV software requirements to an image that provides a set of features. IT organizations shift to become product-driven organizations.

This talk will dive into the necessary organizational changes and tool changes to provide a core build in the age of service and service contracts.

http://crunchtools.com/files/2015/07/Core-Builds-in-the-Age-of-Service.pdf

http://crunchtools.com/core-builds-service/

We will start with a really brief introduction to the OpenStack services we will use to build our app. We’ll cover all of the different ways you can control an OpenStack cloud: a web user interface, the command line interface, a software development kit (SDK), and the application programming interface (API).

After this brief introduction to the tools we are going to use, we’ll get our hands dirty in our hands-on lab and build an application that makes use of an OpenStack cloud.

This application will utilize a number of OpenStack services via an SDK to get its work done. The app will demonstrate how OpenStack services can be used as a base to create a working application.

Voting Open for OpenStack Summit Tokyo Submissions: Networking, Telco, and NFV

by Nir Yechiel

The next OpenStack Summit is just around the corner, October 27-30, in Tokyo, Japan, and we would like your help shaping the agenda. The OpenStack Foundation manages voting by allowing its members to choose the topics and presentations they would like to see.

Virtual networking and software-defined networking (SDN) have become increasingly exciting topics in recent years, and a great focus for us at Red Hat. They also lay the foundation for network functions virtualization (NFV) and the recent innovation in the telecommunications service provider space.

Here you can find networking and NFV related session proposals from Red Hat and our partners. To vote, click on the session title below and you will be directed to the voting page. If you are a member of the OpenStack Foundation, just log in. If you are not, you are welcome to join now – it is simple and free.

Please make sure to vote before the deadline on Thursday, July 30 2015, at 11:59pm PDT.

OpenStack Networking (Neutron)

OpenStack Networking (Neutron) 101
  • Nir Yechiel – Senior Technical Product Manager @ Red Hat
Almost everything you need to know about provider networks
  • Sadique Puthen – Principal Technical Support Engineer @ Red Hat
Why does the lion’s share of time and effort goes to troubleshooting Neutron?
  • Sadique Puthen – Principal Technical Support Engineer @ Red Hat
Neutron Deep Dive – Hands On Lab
  • Rhys Oxenham – Principal Product Manager @ Red Hat
  • Vinny Valdez – Senior Principal Cloud Architect @ Red Hat
L3 HA, DVR, L2 Population… Oh My!
  • Assaf Muller – Senior Software Engineer @ Red Hat
  • Nir Yechiel – Senior Technical Product Manager @ Red Hat
QoS – a Neutron n00bie
  • Livnat Peer – Senior Engineering Manager @ Red Hat
  • Moshe Levi – Senior Software Engineer @ Mellanox
  • Irena Berezovsky – Senior Architect @ Midokura
Clusters, Routers, Agents and Networks: High Availability in Neutron
  • Florian Haas – Principal Consultant @ hastexo!
  • Livnat Peer – Senior Engineering Manager @ Red Hat
  • Adam Spiers – Senior Software Engineer @ SUSE

Deploying networking (TripleO)

TripleO Network Architecture Deep-Dive and What’s New
  • Dan Sneddon – Principal OpenStack Engineer @ Red Hat

Telco and NFV

Telco OpenStack Cloud Deployment with Red Hat and Big Switch
  • Paul Lancaster – Strategic Partner Development Manager @ Red Hat
  • Prashant Gandhi – VP Products & Strategy @ Big Switch
OpenStack NFV Cloud Edge Computing for One Cloud
  • Hyde Sugiyama – Senior Principal Technologist @ Red Hat
  • Timo Jokiaho – Senior Principal Technologist @ Red Hat
  • Zhang Xiao Guang – Cloud Project Manager @ China Mobile
Rethinking High Availability for Telcos in the new world of Network Functions Virtualization (NFV)
  • Jonathan Gershater – Senior Principal Product Marketing Manager @ Red Hat

Performance and accelerated data-plane

Adding low latency features in Openstack to address Cloud RAN Challenges
  • Sandro Mazziotta – Director NFV Product Management @ Red Hat
Driving in the fast lane: Enhancing OpenStack Instance Performance
  • Stephen Gordon – Senior Technical Product Manager @ Red Hat
  • Adrian Hoban – Principal Engineer, SDN/NFV Orchestration @ Intel
OpenStack at High Speed! Performance Analysis and Benchmarking
  • Roger Lopez – Principal Software Engineer @ Red Hat
  • Joe Talerico – Senior Performance Engineer @ Red Hat
Accelerate your cloud network with Open vSwitch (OVS) and the Data Plane Development Kit (DPDK)
  • Adrian Hoban – Principal Engineer, SDN/NFV Orchestration @ Intel
  • Seán Mooney  – Network Software Engineer @ Intel
  • Terry Wilson – Senior Software Engineer @ Red Hat

Voting Open for OpenStack Summit Tokyo Submissions: OpenStack for the Enterprise

by Steve Gordon, Product Manager, Red Hat —

In the lead up to OpenStack Summit Hong Kong, the last OpenStack Summit held in the Asia-Pacific region, Radhesh Balakrishnan – General Manager for OpenStack at Red Hat – defined this site as the place to follow us on our journey taking community projects to enterprise products and solutions.

We are excited to now be preparing to head back to the Asia-Pacific region for OpenStack Summit Tokyo – October 27-30 – to share just how far we have come on that journey, with a host of session proposals focusing on enterprise requirements and the success of OpenStack in this space. The OpenStack Foundation manages voting by allowing its members to choose the topics and presentations they would like to see.

To vote, click on the session title below and you will be directed to the voting page. If you are a member of the OpenStack Foundation, just log in. If you are not, you are welcome to join now – it is simple and free.

Vote for your favorites by midnight Pacific Standard Time on July 30th and we will see you in Tokyo!

Is OpenStack ready for the enterprise? Is the enterprise ready for OpenStack?

Can I use OpenStack to build an enterprise cloud?
  • Alessandro Perilli – General Manager, Cloud Management Strategies @ Red Hat
Elephant in the Room: What’s the TCO for an OpenStack cloud?
  • Massimo Ferrari – Director, Cloud Management Strategy @ Red Hat
  • Erich Morisse – Director, Cloud Management Strategy @ Red Hat
The Journey to Enterprise Primetime
  • Arkady Kanevsky – Director of Development @ Dell
  • Das Kamhout – Principal Engineer @ Intel
  • Fabio Di Nitto – Manager, Software Engineering @ Red Hat
  • Nick Barcet – Director of OpenStack Product Management @ Red Hat
Organizing IT to Deliver OpenStack
  • Brent Holden – Chief Cloud Architect @ Red Hat
  • Michael Solberg – Chief Field Architect @ Red Hat
How Customers use OpenStack to deliver Business Applications
  • Matthias Pfützner – Cloud Solution Architect @ Red Hat
Stop thinking traditional infrastructure – Think Cloud! A recipe to build a successful cloud environment
  • Laurent Domb – Cloud Solution Architect @ Red Hat
  • Narendra Narang – Cloud Storage Solution Architect @ Red Hat
Breaking the OpenStack Dream – OpenStack deployments with business goals in mind
  • Laurent Domb – Cloud Solution Architect @ Red Hat
  • Narendra Narang – Cloud Storage Solution Architect @ Red Hat

Enterprise Success Stories

OpenStack for robust and reliable enterprise private cloud: An analysis of current capabilities, gaps, and how they can be addressed.
  • Tushar Katarki – Integration Architect @ Red Hat
  • Rama Nishtala – Architect @ Cisco
  • Nick Gerasimatos – Senior Director of Cloud Services – Engineering @ FICO
  • Das Kamhout – Principal Engineer @ Intel
Verizon’s NFV Learnings
  • Bowen Ross – Global Account Manager @ Red Hat
  • David Harris – Manager, Network Element Evolution Planning @ Verizon
Cloud automation with Red Hat CloudForms: Migrating 1000+ servers from VMWare to OpenStack
  • Lan Chen – Senior Consultant @ Red Hat
  • Bill Helgeson – Principal Domain Architect @ Red Hat
  • Shawn Lower – Enterprise Architect @ Red Hat

Solutions for the Enterprise

RHCI: A comprehensive Solution for Private IaaS Clouds
  • Todd Sanders – Director of Engineering @ Red Hat
  • Jason Rist – Senior Software Engineer @ Red Hat
  • John Matthews – Senior Software Engineer @ Red Hat
  • Tzu-Mainn Chen – Senior Software Engineer @ Red Hat
Cisco UCS Integrated Infrastructure for Red Hat OpenStack
  • Guil Barros – Principal Product Manager, OpenStack @ Red Hat
  • Vish Jakka – Product Manager, UCS Solutions @ Cisco
Cisco UCS & Red Hat OpenStack: Upstream Partnership to Streamline OpenStack
  • Guil Barros – Principal Product Manager, OpenStack @ Red Hat
  • Vish Jakka – Product Manager, UCS Solutions @ Cisco
  • Arek Chylinski – Technologist @ Intel
Deploying and Integrating OpenShift on Dell’s OpenStack Cloud Reference Architecture
  • Judd Maltin – Systems Principal Engineer @ Dell
  • Diane Mueller – Director Community Development, OpenShift @ Red Hat
Scalable and Successful OpenStack Deployments on FlexPod
  • Muhammad Afzal – Architect, Engineering @ Cisco
  • Dave Cain, Reference Architect and Technical Marketing Engineer @ NetApp
Simplifying Openstack in the Enterprise with Cisco and Red Hat
  • Karthik Prabhakar – Global Cloud Technologist @ Red Hat
  • Duane DeCapite – Director of Product Management, OpenStack @ Cisco
It’s a team sport: building a hardened enterprise ecosystem
  • Hugo Rivero – Senior Manager, Ecosystem Technology Certification @ Red Hat
Dude, this isn’t where I parked my instance!?
  • Steve Gordon – Senior Technical Product Manager, OpenStack @ Red Hat
Libguestfs: the ultimate disk-image multi-tool
  • Luigi Toscano – Senior Quality Engineer @ Red Hat
  • Pino Toscano – Software Engineer @ Red Hat
Which Third party OpenStack Solutions should I use in my Cloud?
  • Rohan Kande – Senior Software Engineer @ Red Hat
  • Anshul Behl – Associate Quality Engineer @ Red Hat

Securing OpenStack for the Enterprise

Everything You Need to Know to Secure an OpenStack Cloud (but Were Afraid to Ask)
  • Jonathan Gershater – Senior Principal Product Marketing Manager @ Red Hat
  • Ted Brunell – Senior Solution Architect @ Red Hat
Towards a more Secure OpenStack Cloud
  • Paul Lancaster – Strategic Partner Development Manager @ Red Hat
  • Malini Bhandaru – Architect & Engineering Manager @ Intel
  • Dan Yocum – Senior Operations Manager @ Red Hat
Hands-on lab: configuring Keystone to trust your favorite OpenID Connect Provider.
  • Pedro Navarro Perez – Openstack Specialized Solution Architect @ Red Hat
  • Francesco Vollero – Openstack Specialized Solution Architect @ Red Hat
  • Pablo Sanchez – Openstack Specialized Solution Architect @ Red Hat
Securing OpenStack with Identity Management in Red Hat Enterprise Linux
  • Nathan Kinder – Software Engineering Manager @ Red Hat
Securing your Application Stacks on OpenStack
  • Jonathan Gershater – Senior Principal Product Marketing Manager @ Red Hat
  • Diane Mueller – Director, Community Development for OpenShift @ Red Hat

Celebrating Kubernetes 1.0 and the future of container management on OpenStack

by Steve Gordon, Product Manager, Red Hat — July 24, 2015

This week, together with Google and others, we celebrated the launch of Kubernetes 1.0 at OSCON in Portland, as well as the launch of the Cloud Native Computing Foundation, or CNCF (https://cncf.io/), of which Red Hat, Google, and others are founding members. Kubernetes is an open source system for managing containerized applications, providing basic mechanisms for deployment, maintenance, and scaling of applications. The project was originally created by Google and is now developed by a vibrant community of contributors, including Red Hat.

As a leading contributor to both Kubernetes and OpenStack, it was also recently our great pleasure to welcome Google to the OpenStack Foundation. We look forward to continuing to work with Google and others on combining the container orchestration and management capabilities of Kubernetes with the infrastructure management capabilities of OpenStack.

Red Hat has invested heavily in Kubernetes since joining the project shortly after it was launched in June 2014, and is now the largest corporate contributor of code to the project other than Google itself. The recently announced release of Red Hat’s platform-as-a-service offering, OpenShift v3, is built around Kubernetes as the framework for container orchestration and management.

As a founding member of the OpenStack Foundation, we have been working on simplifying the task of deploying and managing container hosts – using Project Atomic – and configuring a Kubernetes cluster on top of OpenStack infrastructure using the Heat orchestration engine.

To that end, Red Hat engineering created the heat-kubernetes orchestration templates to help accelerate research and development into deeper integration between Kubernetes and the underlying OpenStack infrastructure. The templates continue to evolve to cover other aspects of container workload management, such as auto-scaling, and were recently demonstrated at Red Hat Summit.
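
As a rough illustration of how such templates are launched (the template file name and parameters below follow typical Heat usage and are assumptions, not an exact transcript of the heat-kubernetes repository):

    # Launch a Kubernetes cluster on OpenStack from a Heat template
    heat stack-create kube-cluster -f kubecluster.yaml \
      -P "ssh_key_name=mykey;external_network=public;number_of_minions=3"

    # Inspect the stack once it is up
    heat stack-show kube-cluster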

The heat-kubernetes templates were also ultimately leveraged in bootstrapping the OpenStack Magnum project, which provides an OpenStack API for provisioning container clusters using underlying orchestration technologies including Kubernetes. The aim is to make containers first-class citizens within OpenStack, just like virtual machines and bare metal before them, with the ability to share tenant infrastructure resources (e.g. networking and storage) with other OpenStack-managed virtual machines, bare-metal hosts, and the containers running on them.

Providing this level of integration requires providing or expanding OpenStack implementations of existing Kubernetes plug-in points, as well as defining new plug-in APIs where necessary, while maintaining the technical independence of the solution. All this must be done while allowing application workloads to remain independent of the underlying infrastructure, allowing for true open hybrid cloud operation. Similarly, on the OpenStack side, additional work is required so that the infrastructure services are able to support the use cases presented by container-based workloads and remove redundancies between the application workloads and the underlying hardware, optimizing performance while still providing for secure operation.
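
Magnum’s early CLI exposed this model as “bays” created from “baymodels”. A hedged sketch in the style of that era’s quickstart, with placeholder names, keypair, and image:

    # Define a cluster template, then ask Magnum for a Kubernetes bay on OpenStack
    magnum baymodel-create --name k8sbaymodel \
      --image-id fedora-21-atomic --keypair-id mykey \
      --external-network-id public --coe kubernetes

    magnum bay-create --name k8sbay --baymodel k8sbaymodel --node-count 3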

[Figure: containers on OpenStack architecture]

Magnum and the OpenStack Containers Team provide a focal point for coordinating these research and development efforts across multiple upstream projects, as well as other projects within the OpenStack ecosystem itself, to achieve the goal of providing a rich container-based experience on OpenStack infrastructure.

As a leading contributor to both OpenStack and Kubernetes we at Red Hat look forward to continuing to work on increased integration with both the OpenStack and Kubernetes communities and our technology partners at Google as these exciting technologies for managing the “data-centers of the future” converge.

Containerize OpenStack with Docker

by Jeff Jameson, Sr. Principal Product Marketing Manager, Red Hat — July 16, 2015

Written by: Ryan Hallisey

Today in the cloud space, a lot of buzz in the market stems from Docker and providing support for launching containers on top of an existing platform. However, what is often overlooked is the use of Docker to improve deployment of the infrastructure platforms themselves; in other words, the ability to ship your cloud in containers.



Ian Main and I took hold of a project within the OpenStack community to address this unanswered question: Project Kolla. Being one of the founding members and core developers for the project, I figured we should start by using Kolla’s containers to get this work off the ground. We began by deploying containers one by one in an attempt to get a functioning stack. Unfortunately, not all of Kolla’s containers were in great shape, and at the time they were being deployed by Kubernetes. First, we decided to get the containers working, then deal with how they’re managed later. In the short term, we used a bash script to launch our containers, but it got messy: Kubernetes was opening up ports to the host and declaring environment variables for the containers, and we needed to do the same. Eventually, we upgraded the design to use an environment file that was populated by a script, which proved to be more effective. This design was adopted by Kolla and is still being used today[1].
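A simplified sketch of that environment-file approach, with hypothetical variable names and values (the real file is generated by the setup script):

# Write the shared settings once (values here are placeholders)...
cat > openstack.env <<EOF
KEYSTONE_ADMIN_TOKEN=changeme
KEYSTONE_PUBLIC_SERVICE_HOST=192.168.1.10
MARIADB_ROOT_PASSWORD=changeme
EOF

# ...then launch each container against the same file, instead of
# repeating -e VAR=value flags on every docker run.
docker run -d --net=host --env-file=openstack.env kollaglue/centos-rdo-keystone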

With our setup script intact, we started a hierarchical descent through the OpenStack services, starting with MariaDB, RabbitMQ, and Keystone. Kolla’s containers were in great shape for these three services, and we were able to get them working relatively quickly. Glance was next, and it proved to be quite a challenge. We quickly learned that the Glance API container and Keystone were causing one another to fail.


The culprit was that the Glance API and Keystone containers were racing to see which could create the admin user first. Oddly enough, these containers worked with Kubernetes, but I then realized Kubernetes restarts containers until they succeed, avoiding the race conditions we were seeing. To get around this, we made Glance and the rest of the services wait for Keystone to be active before they start. Later, we pushed this design into Kolla, and learned that Docker has a restart flag that will force containers to restart if there is an error[2]. We added the restart flag to our design so that containers would be independent of one another.
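Both fixes are easy to sketch in shell. The first relies on Docker's restart policy; the second blocks until Keystone answers on its public port. The endpoint and image names below are illustrative, not the project's actual scripts:

# Let Docker retry a service that exits with an error, up to 10 times.
docker run -d --restart=on-failure:10 --env-file=openstack.env \
    kollaglue/centos-rdo-glance-api

# Or gate a dependent service explicitly on Keystone being up.
until curl -sf http://127.0.0.1:5000/v2.0/ > /dev/null; do
    echo "waiting for keystone"
    sleep 5
done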

The most challenging service to containerize was Nova. Nova presented a unique challenge not only because it was made up of the largest number of containers, but because it required the use of super-privileged containers. We started off using Kolla’s containers, but quickly learned there were many components missing. Most significantly, the Nova Compute and Libvirt containers were not mounting the correct host directories, exposing us to one of the biggest hurdles when containerizing Nova: persisting data so that instances still exist after you kill the container. In order for that to work, Nova Compute and Libvirt needed to mount /var/lib/nova and /var/lib/libvirt from the host into the container. That way, the data for the instances is stored on the host and not in the container[3].

 

echo Starting nova compute

docker run -d --privileged \
           --restart=always \
           -v /sys/fs/cgroup:/sys/fs/cgroup \
           -v /var/lib/nova:/var/lib/nova \
           -v /var/lib/libvirt:/var/lib/libvirt \
           -v /run:/run \
           -v /etc/libvirt/qemu:/etc/libvirt/qemu \
           --pid=host --net=host \
           --env-file=openstack.env kollaglue/centos-rdo-nova-compute-nova:latest

 

A second issue we encountered when trying to get the Nova Compute container working was that we were using an outdated version of Nova. The Nova Compute container was using Fedora 20 packages, while the other services were using Fedora 21. This was our first taste of having to do an upgrade using containers. To fix the problem, all we had to do was change where Docker pulled the packages from and rebuild the container, effectively a one-line change in the Dockerfile:

FROM fedora:20
MAINTAINER Kolla Project (https://launchpad.net/kolla)

To

FROM fedora:21
MAINTAINER Kolla Project (https://launchpad.net/kolla)

OpenStack services have independent lifecycles, making it difficult to perform rolling upgrades and downgrades. Containers can bridge this gap by providing an easy way to handle upgrading and downgrading your stack.
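In practice, upgrading a containerized service reduces to pulling a newer image and re-running the container; rolling back is the same operation with the previous tag. A sketch, with illustrative container and tag names:

# Pull the new image and replace the running container...
docker pull kollaglue/centos-rdo-keystone:latest
docker stop keystone && docker rm keystone
docker run -d --name keystone --net=host --env-file=openstack.env \
    kollaglue/centos-rdo-keystone:latest
# ...and downgrade the same way by re-running the previous tag.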

Once we completed our maintenance on the Kolla containers, we turned our focus to TripleO[4]. TripleO is a project in the OpenStack community that aims to install and manage OpenStack. The name TripleO means OpenStack on OpenStack: it deploys a so-called undercloud, and uses that OpenStack setup to deploy an overcloud, also known as the user cloud.

Our goal was to use the undercloud to deploy a containerized overcloud on bare metal. In our design, we chose to deploy our overcloud on top of Red Hat Enterprise Linux Atomic Host[5]. Atomic is a bare-bones Red Hat Enterprise Linux-based operating system that is designed to run containers. This was a perfect fit because it’s a bare and simple environment with a nice set of tools for launching containers.

 

[heat-admin@t1-oy64mfeu2t3-0-zsjhaciqzvxs-controller-twdtywfbcxgh ~]$ atomic --help

Atomic Management Tool

positional arguments:
  {host,info,install,stop,run,uninstall,update}
                    commands
    host            execute Atomic host commands
    info            display label information about an image
    install         execute container image install method
    stop            execute container image stop method
    run             execute container image run method
    uninstall       execute container image uninstall method
    update          pull latest container image from repository

optional arguments:
  -h, --help        show this help message and exit

 

Next, we had help from Rabi Mishra in creating a Heat hook that would allow Heat to orchestrate container deployment. Since we were on Red Hat Enterprise Linux Atomic Host, the hook ran in a container and started the Heat agents, allowing Heat to communicate with Docker[6]. Now we had all the pieces we needed.

In order to integrate our container work with TripleO, it was best for us to copy Puppet’s overcloud deployment implementation and apply our work to it. For our environment, we used devtest, the TripleO developer environment, and started to build a new Heat template. One of the biggest differences between using containers and Puppet was that Puppet required a lot of setup and configuration to make sure dependencies were resolved and services were properly configured. We didn’t need any of that. With Puppet, the dependency list looked like[7]:

 

puppetlabs-apache
puppet-ceph

44 packages later…

puppet-openstack_extras
puppet-tuskar

 

With Docker, we were able to replace all of that with:

 

atomic install kollaglue/centos-rdo-<service>

 

We were able to reuse a majority of the existing environment, but starting services was now significantly simplified.

Unfortunately, we were unable to get results for some time because we struggled to deploy a bare metal Red Hat Enterprise Linux Atomic Host instance. After consulting Lucas Gomes on Red Hat’s Ironic (bare metal deployment service) team, we learned that there was an easier way to accomplish what we were trying to do. He pointed us in the direction of a new feature in Ironic that added support for full image deployment[8]. Although there was a bug in Ironic when using the new feature, we fixed it and started to see our Red Hat Enterprise Linux Atomic Host running. Now that we were past this, we could finally create images and add users, but Nova Compute and Libvirt didn’t work. The problem was that Red Hat Enterprise Linux Atomic Host wasn’t loading the kernel modules for KVM. On top of that, Libvirt needed proper permission to access /dev/kvm and wasn’t getting it.

 

#!/bin/sh

# Give libvirt access to the KVM device before starting the daemon.
chmod 660 /dev/kvm
chown root:kvm /dev/kvm

echo "Starting libvirtd."
exec /usr/sbin/libvirtd

 

Upon fixing these issues, we could finally spawn instances. Later, these changes were adopted by Kolla because they represented a unique case that could cause Libvirt to fail[9].

To summarize, we created a containerized OpenStack solution inside of the TripleO installer project, using the containers from the Kolla project. We mirrored the TripleO workflow by using the undercloud (management cloud) to deploy most of the core services in the overcloud (user cloud), but now those services are containerized. The services we used were Keystone, Glance, and Nova, with services like Neutron, Cinder, and Heat soon to follow. Our new solution uses Heat (the orchestration service) to deploy the containerized OpenStack services onto Red Hat Enterprise Linux Atomic Host, and has the ability to plug right into the tripleo-heat-templates. Normally, Puppet is used to deploy an overcloud, but now we’ve proven you can use containers. What’s really unique about this is that you can now shop for your config in the Docker registry instead of having to go through Puppet to set up your services. This allows you to pull down a container where your services come with the configuration you need. Through our work, we have shown that containers are an alternative deployment method within TripleO that can simplify deployment and add choice about how your cloud is installed.

The benefits of running your cloud in containers are the same as for any containerized application: reliability, portability, and easy lifecycle management. With containers, lifecycle management greatly improves on TripleO’s existing solution. The process of upgrading and downgrading an OpenStack service becomes far simpler, creating faster turnaround times so that your cloud is always running the latest and greatest. Ultimately, this solution provides an additional method within TripleO to manage the cloud’s upgrades and downgrades, supplementing the solution TripleO currently offers.

Overall, integrating with TripleO works really well because OpenStack provides powerful services to assist in container deployment and management. Specifically, TripleO is advantageous because of services like Ironic (the bare metal provisioning service) and Heat (the orchestration service), which provide a strong management backbone for your cloud. Containers are an integral piece of this system, as they provide a simple and granular way to perform lifecycle management for your cloud. From my work, it is clear that the cohesive relationship between containers and TripleO creates a new and improved avenue to deploy the cloud and get it working the way that you see fit.

TripleO is a fantastic project, and with the integration of containers I’m hoping to energize and continue building the community around the project. Using our integration as proof of the project’s capabilities, we have shown that TripleO provides an excellent management infrastructure underneath your cloud, one that allows projects to be properly managed and to grow.

 

[1]          https://github.com/stackforge/kolla/commit/dcb607d3690f78209afdf5868dc3158f2a5f4722

[2]          https://docs.docker.com/reference/commandline/cli/#restart-policies

[3]          https://github.com/stackforge/kolla/blob/master/docker/nova-compute/nova-compute-data/Dockerfile#L4-L5

[4]          https://www.rdoproject.org/Deploying_RDO_using_Instack

[5]          http://www.projectatomic.io/

[6]          https://github.com/rabi/heat-templates/blob/boot-config-atomic/hot/software-config/heat-docker-agents/Dockerfile

[7]          http://git.openstack.org/cgit/openstack/TripleO-puppet-elements/tree/elements/puppet-modules/source-repository-puppet-modules

[8]          https://blueprints.launchpad.net/ironic/+spec/whole-disk-image-support

[9]          https://github.com/stackforge/kolla/commit/08bd99a50fcc48539e69ff65334f8e22c4d25f6f

Survey: OpenStack users value portability, support, and complementary open source tools

by ghaff — June 8, 2015

75 percent of the respondents in a recent survey [1] conducted for Red Hat said that being able to move OpenStack workloads to different providers or platforms was important (ranked 4 or 5 out of 5), and a mere 5 percent said that this question was of least importance. This was just one of the answers that highlighted a general desire to avoid proprietary solutions and lock-in.

For example, a minority (47 percent) said that differentiated vendor-specific management and other tooling was important, while a full 75 percent said that support for complementary open source cloud management, operating system, and development tools was. With respect to management specifically, only 22 percent plan to use vendor-specific tools to manage their OpenStack environments. By contrast, a majority (51 percent) plan to use the tools built into OpenStack, in many cases complemented by open source configuration management (31 percent) and cloud management platforms (21 percent). It’s worth noting, though, that 42 percent of those asked about OpenStack management tools said that they were unsure or undecided, indicating that there’s still a lot of learning to come with respect to cloud implementations in general.

This last point was reinforced by the fact that 68 percent said that the availability of training and services from the vendor to on-ramp their OpenStack project was important. (Red Hat offers a Certified System Administrator in Red Hat OpenStack certification as well as a variety of solutions to build clouds through eNovance by Red Hat.) 45 percent also cited lack of internal IT skills as a barrier to adopting OpenStack. Other aspects of commercial support were valued as well. For example, 60 percent said that hardware and software certifications are important and a full 82 percent said that production-level technical support was.

Read the full post »

OPNFV Arno hits the streets

by Dave Neary, NFV/SDN Community Strategist, Red Hat — June 5, 2015

The first release of the OPNFV project, Arno, is now available. The release, named after the Italian river which flows through the city of Florence on its way to the Mediterranean Sea, is the result of significant industry collaboration, starting from the creation of the project in October 2014.

This first release establishes a strong foundation for us to work together to create a great platform for NFV. We have multiple hardware labs, running multiple deployments of OpenStack and OpenDaylight, all deployed with one-step, automated deployment tools. A set of automated tests validate that deployments are functional, and provide a framework for the addition of other tests in the future. Finally, we have a good shared understanding of the problem space, and have begun to engage with upstream projects like OpenDaylight and OpenStack to communicate requirements and propose feature additions to satisfy them.

A core value of OPNFV is “upstream first” – the idea that changes required to open source projects for NFV should happen with the communities in those projects. This is a core value for Red Hat too, and we have been happy to take a leadership role in coordinating the engagement of OPNFV members in projects like OpenDaylight and OpenStack. Red Hat engineers Tim Rozet and Dan Radez have taken a leadership role in putting together one of the two deployment options for OPNFV Arno, the Foreman/Quickstack installer, based on CentOS, RDO and OpenDaylight packages created by another Red Hat engineer, Daniel Farrell. We have been proud to play a significant part, with other members of the OPNFV community, in contributing to this important mission.

Read the full post »

Public vs Private, Amazon compared to OpenStack

by Jonathan Gershater — May 13, 2015

Public vs Private, Amazon Web Services EC2 compared to OpenStack®

How to choose a cloud platform and when to use both

The public vs private cloud debate is a path well trodden. While technologies and offerings abound, there is still confusion among organizations as to which platform is suited for their agile needs. One of the key benefits to a cloud platform is the ability to spin up compute, networking and storage quickly when users request these resources and similarly decommission when no longer required. Among public cloud providers, Amazon has a market share ahead of Google, Microsoft and others. Among private cloud providers, OpenStack® presents a viable alternative to Microsoft or VMware.

This article compares Amazon Web Services EC2 and OpenStack® as follows:

  • What technical features do the two platforms provide?
  • How do the business characteristics of the two platforms compare?
  • How do the costs compare?
  • How to decide which platform to use and how to use both

OpenStack® and Amazon Web Services (AWS) EC2 defined

From OpenStack.org: “OpenStack software controls large pools of compute, storage, and networking resources throughout a datacenter, managed through a dashboard or via the OpenStack API. OpenStack works with popular enterprise and open source technologies making it ideal for heterogeneous infrastructure.”

From AWS: “Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides resizable compute capacity in the cloud. It is designed to make web-scale cloud computing easier for developers.”

Technical comparison of OpenStack® and AWS EC2

The tables below name and briefly describe the features in OpenStack® and AWS.

Read the full post »

The Age of Cloud File Services

by Sean Cohen, Principal Technical Product Manager, Red Hat — May 11, 2015

The new OpenStack Kilo upstream release, which became available on April 30, 2015, marks a significant milestone for the Manila project, the shared file system service for OpenStack, with an increase in development capacity and extensive vendor adoption. The project was kicked off three years ago, became incubated during 2014, and now moves to the front of the stage at the upcoming OpenStack Vancouver Conference taking place this month, with customer stories of Manila deployments in enterprise and telco environments.

The project was originally sponsored and accelerated by NetApp and Red Hat and has established a very rich community that includes code contributions from companies such as EMC, Deutsche Telekom, HP, Hitachi, Huawei, IBM, Intel, Mirantis and SUSE.

The momentum of cloud shared file services is not limited to the OpenStack open source world. In fact, last month at the AWS Summit in San Francisco, Amazon announced its new shared file storage for Amazon EC2, the Amazon Elastic File System, also known as EFS. This new storage service is an addition to the existing AWS storage portfolio: Amazon Simple Storage Service (S3) for object storage, Amazon Elastic Block Store (EBS) for block storage, and Amazon Glacier for archival cold storage.

Amazon EFS provides standard file system semantics and is based on NFSv4, which allows many EC2 instances to access a file system at the same time, providing a common data source for a wide variety of workloads and applications shared across thousands of instances. It is designed for a broad range of use cases, such as home directories, content repositories, development environments, and big data applications. Data uploaded to EFS is automatically replicated across availability zones, and because EFS file systems are SSD-based, there should be few latency- and throughput-related problems with the service. The EFS file-system-as-a-service model allows users to create and configure file systems quickly with no minimum fee or setup cost; customers pay only for the storage used by the file system, based on elastic storage capacity that automatically grows and shrinks as files are added and removed on demand.
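Because EFS speaks standard NFSv4, consuming it from an instance is an ordinary mount. A hedged sketch, with a hypothetical file system DNS name and mount point:

# Mount an EFS file system over NFSv4 (the DNS name is hypothetical).
sudo mkdir -p /mnt/efs
sudo mount -t nfs4 fs-12345678.efs.us-west-2.amazonaws.com:/ /mnt/efs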

Read the full post »

What’s Coming in OpenStack Networking for the Kilo Release

by Nir Yechiel

OpenStack Kilo, the 11th release of the open source project, was officially released in April, and now is a good time to review some of the changes we saw in the OpenStack Networking (Neutron) community during this cycle, as well as some of the key new networking features introduced in the project.

Scaling the Neutron development community

The Kilo cycle brought two major efforts meant to expand and scale the Neutron development community: the core plugin decomposition and the advanced services split. These changes should not directly impact OpenStack users, but they are expected to reduce code footprint, improve feature velocity, and ultimately bring faster innovation. Let’s take a look at each individually:

Neutron core plugin decomposition

Neutron, by design, has a pluggable architecture which offers a custom backend implementation of the Networking API. The plugin is a core piece of the deployment and acts as the “glue” between the logical API and the actual implementation. As the project evolved, more and more plugins were introduced, coming from open source projects and communities (such as Open vSwitch and OpenDaylight), as well as from various vendors in the networking industry (like Cisco, Nuage, Midokura and others). At the beginning of the Kilo cycle, Neutron had dozens of plugins and drivers, spanning core plugins, ML2 mechanism drivers, L3 service plugins, and L4-L7 service plugins for FWaaS, LBaaS and VPNaaS; the majority of them were included directly within the Neutron project repository. The amount of code to review across those drivers and plugins was growing to the point where it no longer scaled. The expectation that core Neutron reviewers would review code they had no knowledge of, or could not test due to lack of proper hardware or software setup, was not realistic. This also caused some frustration among the vendors themselves, who sometimes failed to get their plugin code merged on time.

Read the full post »

Driving in the Fast Lane – CPU Pinning and NUMA Topology Awareness in OpenStack Compute

by Steve Gordon, Product Manager, Red Hat — May 5, 2015

The OpenStack Kilo release, extending upon efforts that commenced during the Juno cycle, includes a number of key enhancements aimed at improving guest performance. These enhancements allow OpenStack Compute (Nova) to have greater knowledge of compute host layout and as a result make smarter scheduling and placement decisions when launching instances. Administrators wishing to take advantage of these features can now create customized performance flavors to target specialized workloads including Network Function Virtualization (NFV) and High Performance Computing (HPC).

What is NUMA topology?

Historically, all memory on x86 systems was equally accessible to all CPUs in the system. This resulted in memory access times that were the same regardless of which CPU in the system was performing the operation and was referred to as Uniform Memory Access (UMA).

In modern multi-socket x86 systems, system memory is divided into zones (called cells or nodes) that are associated with particular CPUs. This type of division has been key to the increasing performance of modern systems, as focus has shifted from increasing clock speeds to adding more CPU sockets, cores, and, where available, threads. An interconnect bus provides connections between nodes, so that all CPUs can still access all memory. While the memory bandwidth of the interconnect is typically higher than that of an individual node, it can still be overwhelmed by concurrent cross-node traffic from many nodes. The end result is that while NUMA facilitates faster memory access for CPUs local to the memory being accessed, memory access for remote CPUs is slower.
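These scheduling and placement hints surface in Nova as flavor extra specs. A minimal sketch of the knobs this work exposes, with an illustrative flavor name and sizing:

# Create a 4-vCPU, 8 GB flavor and ask for its memory and vCPUs to be
# spread across two virtual NUMA nodes.
nova flavor-create m1.large.numa auto 8192 80 4
nova flavor-key m1.large.numa set hw:numa_nodes=2

# Additionally pin each guest vCPU to a dedicated host core.
nova flavor-key m1.large.numa set hw:cpu_policy=dedicated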

Read the full post »

Red Hat Enterprise Linux OpenStack Platform 6: SR-IOV Networking – Part II: Walking Through the Implementation

by Itzik Brown, QE Engineer focusing on OpenStack Neutron, Red Hat — April 29, 2015
and Nir Yechiel

In the previous blog post in this series we looked at what single root I/O virtualization (SR-IOV) networking is all about and we discussed why it is an important addition to Red Hat Enterprise Linux OpenStack Platform. In this second post we would like to provide a more detailed overview of the implementation, some thoughts on the current limitations, as well as what enhancements are being worked on in the OpenStack community.

Note: this post does not intend to provide a full end-to-end configuration guide. Customers with an active subscription are welcome to visit the official article covering SR-IOV networking in Red Hat Enterprise Linux OpenStack Platform 6 for a complete procedure.

 

Setting up the Environment

In our small test environment we used two physical nodes: one serves as a Compute node for hosting virtual machine (VM) instances, and the other serves as both the OpenStack Controller and Network node. Both nodes are running Red Hat Enterprise Linux 7.
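At a high level, once the environment is configured, attaching an instance to an SR-IOV virtual function comes down to creating a Neutron port whose vnic_type is direct and booting against that port. A sketch of that flow, with illustrative network, flavor, and image names:

# Create a port backed by an SR-IOV virtual function...
neutron port-create tenant-net --binding:vnic_type direct

# ...and boot an instance attached to it, using the port UUID
# returned by the previous command.
nova boot --flavor m1.small --image rhel7 --nic port-id=<port-uuid> sriov-vm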

Read the full post »

OpenStack Summit Vancouver: Agenda Confirms 40+ Red Hat Sessions

by Jeff Jameson, Sr. Principal Product Marketing Manager, Red Hat — April 2, 2015

As this spring’s OpenStack Summit in Vancouver approaches, the Foundation has now posted the session agenda, outlining the final schedule of events. I am very pleased to report that Red Hat and eNovance have more than 40 approved sessions in the week’s agenda, with a few more approved as joint partner sessions, and even a few more waiting as alternates.

This vote of confidence confirms that Red Hat and eNovance continue to remain in sync with the current topics, projects, and technologies the OpenStack community and customers are most interested in and concerned with.

Red Hat is also a headline sponsor in Vancouver this spring, along with Intel, SolidFire, and HP, and will have a dedicated keynote presentation in addition to the 40+ accepted sessions. To learn more about Red Hat’s accepted sessions, have a look at the details below. Be sure to visit us at the sessions below and at our booth (#H4). We look forward to seeing you in Vancouver in May!

For more details on each session, click on the title below:

Read the full post »

An ecosystem of integrated cloud products

by Jonathan Gershater — March 27, 2015

In my prior post, I described how OpenStack from Red Hat frees you to pursue your business with the peace of mind that your cloud is secure and stable. Red Hat has several products that enhance OpenStack to provide cloud management, virtualization, a developer platform, and scalable cloud storage.

Cloud Management with Red Hat CloudForms            

CloudForms contains three main components:

  • Insight – Inventory, Reporting, Metrics
  • Control – Eventing, Compliance, and State Management
  • Automate – Provisioning, Reconfiguration, Retirement, and Optimization

Read the full post »

An OpenStack Cloud that frees you to pursue your business

by Jonathan Gershater — March 26, 2015

As your IT evolves toward an open, cloud-enabled data center, you can take advantage of OpenStack’s benefits: broad industry support, vendor neutrality, and fast-paced innovation.

As you move into implementation, your requirements for an OpenStack solution share a familiar theme: enterprise-ready, fully supported, and seamlessly integrated products.

Can’t we just install and manage OpenStack ourselves?

OpenStack is an open source project and freely downloadable. To install and maintain OpenStack, you need to recruit and retain engineers trained in Python and other technologies. If you decide to go it alone, consider:

  1. How do you know OpenStack works with your hardware?
  2. Does OpenStack work with your guest instances?
  3. How do you manage and upgrade OpenStack?
  4. When you encounter problems, how would you solve them? Some examples:

Read the full post »

Co-Engineered Together: OpenStack Platform and Red Hat Enterprise Linux

by Arthur Berezin — March 23, 2015

OpenStack is not a software application that just runs on top of any random Linux. OpenStack is tightly coupled to the operating system it runs on, and choosing the right Linux operating system, as well as the right OpenStack platform, is critical to providing a trusted, stable, and fully supported OpenStack environment.

OpenStack is an Infrastructure-as-a-Service cloud management platform: a set of software tools, written mostly in Python, that manage hosts at large scale and deliver an agile, cloud-like infrastructure environment, where multiple virtual machine instances, block volumes, and other infrastructure resources can be created and destroyed rapidly on demand.

Read the full post »

Red Hat Enterprise Linux OpenStack Platform 6: SR-IOV Networking – Part I: Understanding the Basics

by Nir Yechiel — March 5, 2015


Red Hat Enterprise Linux OpenStack Platform 6 introduces support for single root I/O virtualization (SR-IOV) networking. This is done through a new SR-IOV mechanism driver for the OpenStack Networking (Neutron) Modular Layer 2 (ML2) plugin, as well as necessary enhancements for PCI support in the Compute service (Nova).

In this blog post I would like to provide an overview of SR-IOV, and highlight why SR-IOV networking is an important addition to RHEL OpenStack Platform 6. We will also follow up with a second blog post going into the configuration details, describing the current implementation, and discussing some of the current known limitations and expected enhancements going forward.

Read the full post »

A Closer Look at RHEL OpenStack Platform 6

by Steve Gordon, Product Manager, Red Hat — February 24, 2015

Last week we announced the release of Red Hat Enterprise Linux OpenStack Platform 6, the latest version of our cloud solution for building production-ready clouds. Built on Red Hat Enterprise Linux 7, this latest release is intended to provide a foundation for building OpenStack-powered clouds for advanced cloud users. Let’s take a deeper dive into some of the new features on offer!

IPv6 Networking Support

IPv6 is a critical part of the promise of the cloud. If you want to connect everything to the network, you better plan for massive scale and have enough addresses to use. IPv6 is also increasingly important in the network functions virtualization (NFV) and telecommunication service provider space.

This release introduces support for IPv6 address assignment for tenant instances, including those that are connected to provider networks. While IPv4 is more straightforward when it comes to IP address assignment, IPv6 offers more flexibility and options to choose from. Both stateful and stateless DHCPv6 are supported, as well as the ability to use Stateless Address Autoconfiguration (SLAAC).
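As a quick illustration, the address assignment mode is chosen per subnet through two Neutron attributes, ipv6_ra_mode and ipv6_address_mode. A sketch of creating a SLAAC subnet, with an illustrative network name and prefix:

# Create an IPv6 subnet whose instances autoconfigure via SLAAC.
neutron subnet-create --ip-version 6 \
    --ipv6-ra-mode slaac --ipv6-address-mode slaac \
    tenant-net 2001:db8::/64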

Read the full post »

Accelerating OpenStack adoption: Red Hat Enterprise Linux OpenStack Platform 6!

by Jeff Jameson, Sr. Principal Product Marketing Manager, Red Hat — February 19, 2015

On Tuesday February 17th, we announced the general availability of Red Hat Enterprise Linux OpenStack Platform 6, Red Hat’s fourth release of the commercial OpenStack offering to the market.

Based on the community OpenStack “Juno” release and co-engineered with Red Hat Enterprise Linux 7, the enterprise-hardened version 6 is aimed at accelerating the adoption of OpenStack among enterprise businesses, telecommunications companies, Internet service providers (ISPs), and public cloud hosting providers.

Since the first version was released in July 2013, the “design principles” of the Red Hat Enterprise Linux OpenStack Platform product offering have been:

Read the full post »

Red Hat Enterprise Virtualization 3.5 transforms modern data centers that are built on open standards

by Raissa Tona, Principal Product Marketing Manager, Red Hat — February 13, 2015

This week we announced the general availability of Red Hat Enterprise Virtualization 3.5. Red Hat Enterprise Virtualization 3.5 allows organizations to deploy an IT infrastructure that services traditional virtualization workloads while building a solid base for modern IT technologies.

Because of its open standards roots, Red Hat Enterprise Virtualization 3.5 enables IT organizations to more rapidly deliver and deploy transformative and flexible technology services in 3 ways:

  • Deep integration with Red Hat Enterprise Linux
  • Delivery of standardized services for mission critical workloads
  • Foundation for future looking, innovative, and highly flexible cloud enabled workloads built on OpenStack

Deep integration with Red Hat Enterprise Linux

Red Hat Enterprise Virtualization 3.5 is co-engineered with Red Hat Enterprise Linux including the latest version, Red Hat Enterprise Linux 7, which is built to meet modern data center and next-generation IT requirements. Due to this tight integration, Red Hat Enterprise Virtualization 3.5 inherits the innovation capabilities of the world’s leading enterprise Linux platform.

Read the full post »

IBM and Red Hat Join Forces to Power Enterprise Virtualization

by adamjollans — December 16, 2014

Adam Jollans is the Program Director for Cross-IBM Linux and Open Virtualization Strategy, IBM Systems & Technology Group.

IBM and Red Hat have been teaming up for years. Today, Red Hat and IBM are announcing a new collaboration to bring Red Hat Enterprise Virtualization to IBM’s next-generation Power Systems through Red Hat Enterprise Virtualization for Power.

A little more than a year ago, IBM announced a commitment to invest $1 billion in new Linux and open source technologies for Power Systems. IBM has delivered on that commitment with the next-generation Power Systems servers incorporating the POWER8 processor which is available for license and open for development through the OpenPOWER Foundation. Designed for Big Data, the new Power Systems can move data around very efficiently and cost-effectively. POWER8’s symmetric multi-threading provides up to 8 threads per core, enabling workloads to exploit the hardware for the highest level of performance.

Red Hat Enterprise Virtualization combines hypervisor technology with a centralized management platform for enterprise virtualization. Red Hat Enterprise Virtualization Hypervisor, built on the KVM hypervisor, inherits the performance, scalability, and ecosystem of the Red Hat Enterprise Linux kernel for virtualization. As a result, your virtual machines are powered by the same high-performance kernel that supports your most challenging Linux workloads. Read the full post »

Co-Existence of Containers and Virtualization Technologies

by Federico Simoncelli — November 20, 2014

By Federico Simoncelli, Principal Software Engineer, Red Hat

As a software engineer working on Red Hat Enterprise Virtualization (RHEV), my team and I are driven by innovation; we are always looking for cutting-edge technologies to integrate into our product.

Lately there has been growing interest in Linux container solutions such as Docker. Docker provides an open and standardized platform for developers and sysadmins to build, ship, and run distributed applications. Application images can be safely held in your organization’s registry, or they can be shared publicly in the Docker Hub portal (http://registry.hub.docker.com) for everyone to use and contribute to.

Linux containers are a well-known technology that runs isolated Linux systems on the same host, sharing the same kernel and resources such as CPU time and memory. Containers are more lightweight, perform better, and allow a higher density of instances compared to full virtualization, where virtual machines run dedicated full kernels and operating systems on top of virtualized hardware. On the other hand, virtual machines are still the preferred solution when it comes to running highly isolated workloads or operating systems different from the host’s.

Read the full post »

Empowering OpenStack Cloud Storage: OpenStack Juno Release Storage Overview

by Sean Cohen, Principal Technical Product Manager, Red Hat — November 19, 2014

The tenth OpenStack release, Juno, added ten new storage backends and improved testing of third-party storage systems. The Cinder block storage project continues to mature each cycle, exposing more and more enterprise cloud storage infrastructure functionality.

Here is a quick overview of some of these key features.

Simplifying OpenStack Disaster Recovery with Volume Replication

The Icehouse release introduced a new Cinder Backup API to allow export and import of backup service metadata, enabling “electronic tape shipping” style backup-export and backup-import capabilities to recover OpenStack cloud deployments. The next step for disaster recovery enablement in OpenStack is the foundation of volume replication support at the block level.
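That export/import flow is driven entirely from the Cinder CLI. A sketch, with placeholder identifiers standing in for real values:

# At the primary site: export the metadata record for an existing backup.
cinder backup-export <backup-id>

# The command returns a backup_service and backup_url pair; carry those
# to the recovery site and re-register the backup there.
cinder backup-import <backup_service> <backup_url>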

Read the full post »

Simplifying and Accelerating the Deployment of OpenStack Network Infrastructure

by Valentina — November 18, 2014


The energy from the latest OpenStack Summit in Paris is still in the air. Its record attendance and vibrant interactions are a testament to the maturity and adoption of OpenStack across continents, verticals, and use cases.

It’s especially exciting to see its applications growing outside of core datacenter use cases with Network Function Virtualization being top of mind for many customers present at the Summit.

If we look back at the last few years, a fundamental role in fueling OpenStack adoption has been played by the distributions, which have taken the OpenStack project and helped turn it into an easy-to-consume, supported, enterprise-grade product.

At PLUMgrid we have witnessed this transformation summit after summit, customer deployment after customer deployment. Working closely with our customers and our OpenStack partners, we can attest to how much easier, smoother, and simpler an OpenStack deployment is today.

Similarly, PLUMgrid wants to simplify and accelerate the deployment of OpenStack network infrastructure, especially for those customers that are going into production today and building large-scale environments.

If you had the pleasure of being at the summit, you learned about all the new features introduced in Juno for the OpenStack networking component (and if not, check out this blog, which provides a good summary of all of Juno’s networking features).

Read the full post »

Delivering Public Cloud Functionality in OpenStack

by John Meadows, Vice President of Business Development, Talligent — November 14, 2014


When it comes to delivering cloud services, enterprise architects have a common request to create a public cloud-type rate plan for showback, chargeback, or billing. Public cloud packaging is fairly standardized across the big vendors, as innovations are quickly copied by others and basic virtual machines are assessed mainly on price. (I touched on the concept of the ongoing price changes and commoditization of public clouds in an earlier post.) Because of this standardization and relative pervasiveness, public cloud rate plans are well understood by cloud consumers. This makes them a good model for introducing enterprise users to new cloud services built on OpenStack. Enterprise architects are also highly interested in on-demand, self-service functionality from their OpenStack clouds in order to imitate the immediate response of public clouds. We will cover how to deliver on-demand cloud services in a future post.

Pricing and Packaging Cloud Services
Public cloud rate plans are very popular, seeing adoption within enterprises, private hosted clouds, and newer public cloud providers alike. Most public cloud providers use the typical public cloud rate plan as a foundation for layering on services, software, security, and intangibles like reputation to build up differentiated offerings. Enterprise cloud architects use similar rate plans to demonstrate to internal customers that they can provide on-demand, self-service cloud services at a competitive price. To manage internal expectations and encourage good behavior, enterprises usually introduce cloud pricing via a showback model which does not directly impact budgets or require an exchange of money. Users learn cloud cost structures and the impact of their resource usage. Later, full chargeback can be applied, where internal users are expected to pay for services provided.

Read the full post »

OpenStack 2015 – The Year of the Enterprise?

by Nir Yechiel — November 10, 2014

This post is the collective work of all the Red Hat Enterprise Linux OpenStack Platform Product Managers who attended the summit.

The 11th OpenStack design summit, which took place last week for the first time in Europe, brought about 6,000 members of the OpenStack community to Paris to kick off the design of the “Kilo” release.

If 2014 was the year of the “Superuser”, then 2015 is shaping up to be the “Year of the Enterprise”. The big question is: are we ready for enterprise mass adoption?

More than a year ago, at the OpenStack Havana design summit, it was clear that although interest in deploying OpenStack was growing, most enterprises were still holding back, mainly due to the project’s lack of maturity. At this OpenStack summit, the new cool kid in the open cloud infrastructure playground finally started to show real signs of maturity.

An important indicator of this is the increased number of deployments. The Kilo summit showcased about 16 different large organizations running production workloads on OpenStack, including companies such as BBVA Bank, SAP SE (formerly SAP AG) and BMW.

Read the full post »

OpenStack Summit – Why NFV Really Matters

by David H. Deans — November 6, 2014

I’ve been following the news releases and other storylines that have emerged from the ongoing proceedings at the OpenStack Summit in Paris, France. Some key themes have surfaced. In my first editorial, I shared reasons why the market has matured. In my second story, I observed how simplification via automation would broaden the addressable market for hybrid cloud services.

The other key theme that has emerged is the increased focus on telecom network operators’ needs and wants – specifically, the primary telco strategies that are evolving as they continue to build out their hyperscale cloud infrastructures.

This is my domain. I’ve invested most of my professional life working for, or consulting with, domestic and international communication service providers. I’ve been actively involved in the business development of numerous wireline and wireless services, within both the consumer and commercial side of the marketplace. During more than two decades of experience, it’s been an amazing journey.

The closely related Technology, Media and Telecommunications (TMT) industries are already undergoing a transformation, as innovative products or services are developed by collaborative teams of creative contributors and brought to market at an accelerated rate.

Read the full post »
