G’Day OpenStack!

In less than one week the OpenStack Summit is coming to Sydney! For those of us in the Australia/New Zealand (ANZ) region, this is a very exciting time as we get to showcase our local OpenStack talent and successes. This summit will feature Australia’s largest banks, telcos, and enterprises and show the world how they have adopted, adapted, and succeeded with open source software and OpenStack.

Photo by Frances Gunn on Unsplash

Continue reading “G’Day OpenStack!”

Using Red Hat OpenStack Platform director to deploy co-located Ceph storage – Part One

An exciting new feature in Red Hat OpenStack Platform 11 is full Red Hat OpenStack Platform director support for deploying Red Hat Ceph Storage directly on your overcloud compute nodes. Often called hyperconverged, or HCI (Hyperconverged Infrastructure), this deployment model places the Red Hat Ceph Storage Object Storage Daemons (OSDs) and storage pools directly on the compute nodes.

Co-locating Red Hat Ceph Storage in this way can significantly reduce both the physical and financial footprint of your deployment without requiring any compromise on storage.
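As a rough sketch of what enabling this model can involve (the environment file names below are assumptions and depend on the tripleo-heat-templates version you have installed, and my-ceph-settings.yaml is a hypothetical file holding OSD disk and pool settings), the deployment largely comes down to passing the appropriate storage and hyperconverged environment files to the deploy command:

    $ openstack overcloud deploy --templates \
        -e /usr/share/openstack-tripleo-heat-templates/environments/storage-environment.yaml \
        -e /usr/share/openstack-tripleo-heat-templates/environments/hyperconverged-ceph.yaml \
        -e ~/my-ceph-settings.yaml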


Continue reading “Using Red Hat OpenStack Platform director to deploy co-located Ceph storage – Part One”

Using Ansible Validations With Red Hat OpenStack Platform – Part 3

In the previous two blog posts (Part 1 and Part 2) we demonstrated how to create a dynamic Ansible inventory file for a running OpenStack cloud. We then used that inventory to run Ansible-based validations with the ansible-playbook command from the CLI.

In the final part of our series, we demonstrate how to run those same validations using two new methods: the OpenStack workflow service, Mistral, and the Red Hat OpenStack Platform director UI.
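For the Mistral route, the general shape is to find the validation workflows registered on the undercloud and then trigger an execution. The sketch below is illustrative only; the workflow name and input shown are assumptions, not the exact invocation covered in the post:

    $ openstack workflow list | grep validations
    $ openstack workflow execution create tripleo.validations.v1.run_validation \
        '{"validation_name": "undercloud-ram", "plan": "overcloud"}'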


Continue reading “Using Ansible Validations With Red Hat OpenStack Platform – Part 3”

Using Ansible Validations With Red Hat OpenStack Platform – Part 2

In Part 1 we demonstrated how to set up a Red Hat OpenStack Ansible environment by creating a dynamic Ansible inventory file (check it out if you’ve not read it yet!).

Next, in Part 2, we demonstrate how to use that dynamic inventory from the command line with the pre-written Ansible validation playbooks included with the product.
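As a minimal sketch of that workflow, assuming the tripleo-ansible-inventory script introduced in Part 1 and the validation playbooks that are typically installed under /usr/share/openstack-tripleo-validations/validations/ on the undercloud (the undercloud-ram.yaml playbook is just one example), a run looks roughly like:

    $ source ~/stackrc
    $ ansible-playbook -i /usr/bin/tripleo-ansible-inventory \
        /usr/share/openstack-tripleo-validations/validations/undercloud-ram.yaml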


Continue reading “Using Ansible Validations With Red Hat OpenStack Platform – Part 2”

Using Ansible Validations With Red Hat OpenStack Platform – Part 1

Ansible is helping to change the way admins look after their infrastructure. It is flexible, simple to use, and powerful. Ansible uses a modular structure to deploy controlled pieces of code against infrastructure, drawing on thousands of available modules that provide everything from server management to network switch configuration.

With recent releases of Red Hat OpenStack Platform, Ansible is included directly within the Red Hat OpenStack Platform subscription and installed by default with Red Hat OpenStack Platform director.

In this three-part series you’ll learn ways to use Ansible to perform powerful pre- and post-deployment validations against your Red Hat OpenStack Platform environment, utilizing the special validation playbooks that ship with recent Red Hat OpenStack Platform releases.
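To give a flavor of what a validation-style playbook looks like, here is a minimal, hypothetical example (the host group, file name, and service name are assumptions, not one of the shipped validations):

    # check-ntp.yaml - hypothetical example of a small validation-style playbook
    - hosts: undercloud
      gather_facts: false
      tasks:
        - name: Verify that the time synchronization service is running
          command: systemctl is-active chronyd
          changed_when: false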


Continue reading “Using Ansible Validations With Red Hat OpenStack Platform – Part 1”

Ceilometer Polling Performance Improvement

During the May 2015 OpenStack Summit in Vancouver, the OpenStack Telemetry team ran a session for operators to provide feedback. One of the main issues operators raised was the polling that Ceilometer was running against Nova to gather instance information. It had a highly negative impact on Nova API CPU usage, as it retrieved all the information about every instance at regular intervals.

Indeed, it turns out that Nova does not optimize the retrieval of this information (a few rows in a database) and does not use a cache. Fortunately, Nova does provide a way to poll more efficiently: the changes-since request parameter.
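To illustrate the difference, a poller can ask Nova only for the instances modified since its last run. The snippet below is a sketch using python-novaclient, with placeholder credentials and timestamp:

    from novaclient import client

    # Placeholder credentials; in Ceilometer these come from the service configuration.
    nova = client.Client('2', 'admin', 'secret', 'admin',
                         'http://keystone.example.com:5000/v2.0')

    # Only instances changed since the given timestamp are returned,
    # instead of the full instance list on every polling interval.
    changed = nova.servers.list(
        search_opts={'changes-since': '2015-10-26T00:00:00Z'})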

As a result of this discovery, the Telemetry team created a blueprint named “resource-metadata-caching”, targeting the implementation of a local in-memory cache in Ceilometer and the use of the changes-since parameter. The blueprint was completed by Jason Myers during the Liberty development cycle and is therefore part of the final Ceilometer release for Liberty.

Continue reading “Ceilometer Polling Performance Improvement”

Integrating classic IT with cloud-native

This is the fifth and final post in a series that delves deeper into the questions that IDC’s Mary Johnston Turner and Gary Chen considered in a recent IDC Analyst Connection. The fifth question asked:

What types of technologies are available to facilitate the integration of multiple generations of infrastructure and applications as hybrid cloud-native and conventional architectures evolve?

Mary and Gary write that “We expect that as these next-generation environments evolve, conventional and cloud-native infrastructure and development platforms will extend support for each other. As an example, OpenStack was built as a next-generation cloud-native solution, but it is now adding support for some enterprise features.”

This is one aspect of the integration. Today, it’s useful to draw a distinction between conventional and cloud-native infrastructures, in part because they often use different technologies and those technologies are changing at different rates. However, as projects and products that are important for many enterprise cloud-native deployments, such as OpenStack, mature, they’re starting to adopt features associated with enterprise virtualization and enterprise management.

Continue reading “Integrating classic IT with cloud-native”

Highly available virtual machines in RHEL OpenStack Platform 7

OpenStack provides scale and redundancy at the infrastructure layer to deliver high availability for applications built to operate in a horizontally scaling cloud computing environment. It was designed for applications that are themselves “designed for failure” and deliberately excluded features that would enable traditional enterprise applications, for fear of limiting its scalability and compromising its initial goals. These traditional enterprise applications demand continuous operation and fast, automatic recovery in the event of an infrastructure-level failure. While an increasing number of enterprises look to OpenStack to provide the infrastructure platform for their forward-looking applications, they are also looking to simplify operations by consolidating their legacy application workloads on it as well.

As part of the On-Ramp to Enterprise OpenStack program, Red Hat, in collaboration with Intel, Cisco, and Dell, has been working on delivering a high availability solution for such enterprise workloads running on top of OpenStack. This work provides an initial implementation of the instance high availability proposal that we put forward in the past and is included in the recently released Red Hat Enterprise Linux OpenStack Platform 7.

Continue reading “Highly available virtual machines in RHEL OpenStack Platform 7”

Analyzing the performance of Red Hat Enterprise Linux OpenStack Platform using Rally

In our recent blog post, we discussed the steps involved in determining the performance and scalability of a Red Hat Enterprise Linux OpenStack Platform environment. To recap, we recommended the following:

  1. Validate the underlying hardware performance using AHC
  2. Deploy Red Hat Enterprise Linux OpenStack Platform
  3. Validate the newly deployed infrastructure using Tempest
  4. Run Rally with specific scenarios that stress the control plane of the OpenStack environment (a minimal scenario sketch follows this list)
  5. Run CloudBench (cbtool) experiments that stress applications running in virtual machines within the OpenStack environment
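For reference, a minimal Rally task for step 4 could look like the following sketch (the scenario arguments, such as the image and flavor names and the run counts, are assumptions):

    {
      "NovaServers.boot_and_delete_server": [
        {
          "args": {"flavor": {"name": "m1.small"}, "image": {"name": "cirros"}},
          "runner": {"type": "constant", "times": 100, "concurrency": 10},
          "context": {"users": {"tenants": 2, "users_per_tenant": 2}}
        }
      ]
    }

Saved as, say, boot-and-delete.json, it would be launched with "rally task start boot-and-delete.json" and summarized with "rally task report".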

In this post, we would like to focus on step 4: running Rally with a specific scenario to stress the control plane of the OpenStack environment. The main objectives are:

Continue reading “Analyzing the performance of Red Hat Enterprise Linux OpenStack Platform using Rally”

Driving in the Fast Lane: Huge Page support in OpenStack Compute

In a previous “Driving in the Fast Lane” blog post we focused on the optimization of instance CPU resources. This time around, let’s dive into the handling of system memory, and more specifically configurable page sizes. We will reuse the environment from the previous post, but add huge page support to our performance flavor.
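As a sketch of what that flavor change involves (the flavor name below is an assumption, not necessarily the one used in the post), huge pages are requested for a guest through the hw:mem_page_size flavor extra spec:

    $ openstack flavor set m1.small.performance --property hw:mem_page_size=large
    # or request an explicit page size, for example 2 MB pages:
    $ openstack flavor set m1.small.performance --property hw:mem_page_size=2MB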

What are Pages?

Physical memory is segmented into a series of contiguous regions called pages. For efficiency, instead of accessing individual bytes of memory one by one, the system retrieves memory in entire pages. Each page contains a number of bytes, referred to as the page size. To do this, though, the system must first translate virtual addresses into physical addresses to determine which page contains the requested memory.
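A quick way to see this in practice on a Linux host is to compare the default page size with the huge page size the kernel exposes:

    $ getconf PAGESIZE                   # default page size in bytes, typically 4096
    $ grep Hugepagesize /proc/meminfo    # huge page size available on the host, e.g. 2048 kB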

Continue reading “Driving in the Fast Lane: Huge Page support in OpenStack Compute”
