In the previous two blogposts (Part 1 and Part 2) we demonstrated how to create a dynamic Ansible inventory file for a running OpenStack cloud. We then used that inventory to run Ansible-based validations with the ansible-playbook command from the CLI.
In the final part of our series, we demonstrate how to run those same validations using two new methods: the OpenStack workflow service, Mistral, and the Red Hat OpenStack director UI.
Continue reading “Using Ansible Validations With Red Hat OpenStack Platform – Part 3”
In Part 1 we demonstrated how to set up a Red Hat OpenStack Ansible environment by creating a dynamic Ansible inventory file (check it out if you’ve not read it yet!).
Next, in Part 2 we demonstrated how to use that dynamic inventory with the included, pre-written Ansible validation playbooks from the command line.
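For readers new to the series, the "dynamic inventory" Ansible consumes is simply an executable that prints JSON describing groups and hosts. A minimal sketch of that contract looks like this; the group names and addresses below are invented for illustration, whereas the real TripleO inventory script discovers them from the running cloud:

```python
#!/usr/bin/env python3
"""Minimal sketch of the Ansible dynamic inventory contract.

Illustrative only: hosts and addresses are made up. A real OpenStack
inventory script queries the cloud instead of hard-coding values.
"""
import json
import sys


def build_inventory():
    # Ansible invokes the script with --list and expects a JSON document
    # mapping group names to hosts, plus per-host vars under _meta.
    return {
        "undercloud": {"hosts": ["localhost"]},
        "controller": {"hosts": ["controller-0"]},
        "compute": {"hosts": ["compute-0", "compute-1"]},
        "_meta": {
            "hostvars": {
                "controller-0": {"ansible_host": "192.0.2.10"},
                "compute-0": {"ansible_host": "192.0.2.20"},
                "compute-1": {"ansible_host": "192.0.2.21"},
            }
        },
    }


if __name__ == "__main__":
    if len(sys.argv) > 1 and sys.argv[1] == "--list":
        print(json.dumps(build_inventory(), indent=2))
    else:
        # --host <name> is also part of the contract; providing _meta
        # above lets Ansible skip per-host calls.
        print(json.dumps({}))
```

With a script like this saved as `inventory.py` and made executable, a validation playbook can be run against it with `ansible-playbook -i inventory.py playbook.yaml`.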
Continue reading “Using Ansible Validations With Red Hat OpenStack Platform – Part 2”
Ansible is helping to change the way admins look after their infrastructure. It is flexible, simple to use, and powerful. Ansible uses a modular structure to deploy controlled pieces of code against infrastructure, with thousands of available modules providing everything from server management to network switch configuration.
With recent releases of Red Hat OpenStack Platform, Ansible is included directly within the Red Hat OpenStack Platform subscription and installed by default with Red Hat OpenStack Platform director.
In this three-part series you’ll learn ways to use Ansible to perform powerful pre- and post-deployment validations against your Red Hat OpenStack environment, utilizing the special validation scripts that ship with recent Red Hat OpenStack Platform releases.
Continue reading “Using Ansible Validations With Red Hat OpenStack Platform – Part 1”
During the OpenStack Summit in Vancouver in May 2015, the OpenStack Telemetry community team ran a session for operators to provide feedback. One of the main issues operators relayed was the polling that Ceilometer was running against Nova to gather instance information: because it retrieves all the information about instances at regular intervals, it had a highly negative impact on Nova API CPU usage.
Indeed, it turns out that Nova does not optimize the retrieval of this information (a few rows in a database) and does not use a cache. Fortunately, Nova does provide a way to poll more efficiently with the Changes-Since request parameter.
As a result of this discovery, the Telemetry team created a blueprint named “resource-metadata-caching”, targeting the implementation of a local in-memory cache in Ceilometer and the use of the Changes-Since parameter. The blueprint was completed by Jason Myers during the Liberty development cycle and is therefore part of the final version of Ceilometer released for Liberty.
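The optimization described above can be sketched in a few lines: remember the timestamp of the last poll, ask the API only for instances changed since then, and merge the delta into a local in-memory cache. This is a simplified illustration, not Ceilometer's actual implementation; the `list_servers` callable stands in for a real Nova API client:

```python
"""Sketch of changes-since polling with a local cache (illustrative)."""
from datetime import datetime, timezone


class InstancePoller:
    def __init__(self, list_servers):
        self._list_servers = list_servers  # callable(changes_since=...)
        self._cache = {}                   # instance id -> instance dict
        self._last_poll = None

    def poll(self):
        # The first poll fetches everything; later polls pass the previous
        # timestamp so the API only returns instances modified since then.
        now = datetime.now(timezone.utc)
        for server in self._list_servers(changes_since=self._last_poll):
            self._cache[server["id"]] = server
        self._last_poll = now
        return list(self._cache.values())


# Fake backend standing in for the Nova API, for demonstration only.
calls = []

def fake_list_servers(changes_since=None):
    calls.append(changes_since)
    if changes_since is None:
        return [{"id": "a", "name": "vm-a"}, {"id": "b", "name": "vm-b"}]
    return [{"id": "b", "name": "vm-b-renamed"}]  # only the changed instance

poller = InstancePoller(fake_list_servers)
poller.poll()            # full fetch
servers = poller.poll()  # delta fetch, merged into the cache
```

After the second poll the cache still holds both instances, but only the changed one crossed the wire, which is the source of the CPU savings on the Nova API side.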
Continue reading “Ceilometer Polling Performance Improvement”
This is the fifth and final post in a series that delves deeper into the questions that IDC’s Mary Johnston Turner and Gary Chen considered in a recent IDC Analyst Connection. The fifth question asked:
What types of technologies are available to facilitate the integration of multiple generations of infrastructure and applications as hybrid cloud-native and conventional architectures evolve?
Mary and Gary write that “We expect that as these next-generation environments evolve, conventional and cloud-native infrastructure and development platforms will extend support for each other. As an example, OpenStack was built as a next-generation cloud-native solution, but it is now adding support for some enterprise features.”
This is one aspect of integration. Today it’s useful to draw a distinction between conventional and cloud-native infrastructures, in part because they often use different technologies and those technologies are changing at different rates. However, as projects and products that are important for many enterprise cloud-native deployments—such as OpenStack—mature, they’re starting to adopt features associated with enterprise virtualization and enterprise management.
Continue reading “Integrating classic IT with cloud-native”
OpenStack provides scale and redundancy at the infrastructure layer, delivering high availability for applications built to operate in a horizontally scaling cloud computing environment. It was designed for applications that are “designed for failure” and deliberately excluded features that would enable traditional enterprise applications, for fear of limiting its scalability and corrupting its initial goals. These traditional enterprise applications demand continuous operation and fast, automatic recovery in the event of an infrastructure-level failure. While an increasing number of enterprises look to OpenStack to provide the infrastructure platform for their forward-looking applications, they are also looking to simplify operations by consolidating their legacy application workloads on it as well.
As part of the On-Ramp to Enterprise OpenStack program, Red Hat, in collaboration with Intel, Cisco, and Dell, has been working on delivering a high availability solution for such enterprise workloads running on top of OpenStack. This work provides an initial implementation of the instance high availability proposal that we put forward in the past and is included in the recently released Red Hat Enterprise Linux OpenStack Platform 7.
Continue reading “Highly available virtual machines in RHEL OpenStack Platform 7”
In a recent blog post, we discussed the steps involved in determining the performance and scalability of a Red Hat Enterprise Linux OpenStack Platform environment. To recap, we recommended the following:
- Validate the underlying hardware performance using AHC
- Deploy Red Hat Enterprise Linux OpenStack Platform
- Validate the newly deployed infrastructure using Tempest
- Run Rally with specific scenarios that stress the control plane of the OpenStack environment
- Run CloudBench (cbtool) experiments that stress applications running in virtual machines within the OpenStack environment
In this post, we would like to focus on step 4: running Rally with specific scenarios to stress the control plane of the OpenStack environment. The main objectives are:
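As a preview of what such a scenario looks like, here is an illustrative Rally task definition for the common `NovaServers.boot_and_delete_server` scenario, built as a Python dict and serialized to the JSON task format Rally accepts. The flavor and image names are placeholders for whatever exists in your cloud, and the iteration counts are arbitrary:

```python
import json

# Illustrative Rally task: boot a server and delete it, 100 times total,
# 10 iterations running in parallel. Flavor/image names are placeholders.
task = {
    "NovaServers.boot_and_delete_server": [
        {
            "args": {
                "flavor": {"name": "m1.tiny"},
                "image": {"name": "cirros"},
            },
            "runner": {
                "type": "constant",
                "times": 100,       # total iterations
                "concurrency": 10,  # parallel iterations
            },
        }
    ]
}

# Rally consumes this as a task file, e.g.:
#   rally task start boot-and-delete.json
with open("boot-and-delete.json", "w") as f:
    json.dump(task, f, indent=2)
```

Raising `times` and `concurrency` is how a scenario like this turns into a control-plane stress test: each iteration exercises the scheduler, the API, and the messaging layer.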
Continue reading “Analyzing the performance of Red Hat Enterprise Linux OpenStack Platform using Rally”
In a previous “Driving in the Fast Lane” blog post we focused on optimization of instance CPU resources. This time around let’s take a dive into the handling of system memory, and more specifically configurable page sizes. We will reuse the environment from the previous post, but add huge page support to our performance flavor.
What are Pages?
Physical memory is segmented into a series of contiguous regions called pages. For efficiency, instead of accessing individual bytes of memory one by one, the system retrieves memory by accessing entire pages. Each page contains a number of bytes, referred to as the page size. To do this, though, the system must first translate virtual addresses into physical addresses to determine which page contains the requested memory.
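The translation step can be sketched in a few lines: a virtual address splits into a page number (which the page table maps to a physical frame) and an offset within that page. The arithmetic below also shows why larger pages help, using illustrative x86 sizes:

```python
PAGE_SIZE = 4096  # 4 KiB, the default x86 page size


def split_address(vaddr, page_size=PAGE_SIZE):
    """Split a virtual address into (page number, offset within page).

    The page number is what the page table maps to a physical frame;
    the offset is carried over unchanged into the physical address.
    """
    return vaddr // page_size, vaddr % page_size


# A 1 GiB guest needs far fewer pages (and thus far fewer translation
# entries to manage) with 2 MiB huge pages than with 4 KiB pages --
# the motivation for huge page support in OpenStack Compute.
GIB = 1024 ** 3
pages_4k = GIB // 4096              # 262144 pages at 4 KiB
pages_2m = GIB // (2 * 1024 ** 2)   # 512 pages at 2 MiB
```

Fewer pages mean fewer TLB entries competing for a limited cache, which is where the performance gain from huge pages comes from.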
Continue reading “Driving in the Fast Lane: Huge Page support in OpenStack Compute”
Organizations that take advantage of comprehensive insights from their data can gain a competitive edge. However, the ever-increasing amount of data coming in can make it hard to see trends. Adding to this challenge, many companies have data locked in silos, making it difficult—if not impossible—to gain critical insights. Big data technologies like Hadoop can help unify and organize data, but getting fast, meaningful insight still isn’t easy.
Organizations consistently face four main challenges when trying to implement big data initiatives:
- Setting up and operating a big data and analytics platform
- Attracting, managing, and applying big data and analytics skills
- Integrating insights into business processes
- Iterating quickly
Continue reading “Big data in the open, private cloud”
One of the benefits of OpenStack is the ability to deploy the software on standard x86 hardware, and thus not be locked-in to custom architectures and high prices from specialized vendors.
Before you select your x86 hardware, you might want to consider how you will resolve hardware- and software-related issues:
- Is my distribution of OpenStack, and the underlying Linux, certified to run on the hardware I use?
- Will the vendor of my OpenStack distribution work with my hardware vendor to resolve issues?
There was a panel session (Cisco, Ooyala, Sprint, and Shutterfly) on OpenStack use cases at the OpenStack Summit in Vancouver in May 2015. At the end, an audience member asked, “How important is it that the OpenStack distribution is certified to run on the hardware you use?”
Continue reading “How to choose the best-fit hardware for your OpenStack deployment”