OpenStack provides scale and redundancy at the infrastructure layer to deliver high availability for applications built to operate in a horizontally scaling cloud computing environment. It was designed for applications that are “designed for failure,” and deliberately excluded features that would support traditional enterprise applications, for fear of limiting scalability and corrupting its original goals. These traditional enterprise applications demand continuous operation and fast, automatic recovery in the event of an infrastructure-level failure. While a growing number of enterprises look to OpenStack as the infrastructure platform for their forward-looking applications, they are also looking to simplify operations by consolidating their legacy application workloads on it as well.
As part of the On-Ramp to Enterprise OpenStack program, Red Hat, in collaboration with Intel, Cisco, and Dell, has been working to deliver a high availability solution for such enterprise workloads running on top of OpenStack. This work provides an initial implementation of the instance high availability proposal that we put forward in the past, and is included in the recently released Red Hat Enterprise Linux OpenStack Platform 7.
Continue reading “Highly available virtual machines in RHEL OpenStack Platform 7”
In a recent blog post, we discussed the steps involved in determining the performance and scalability of a Red Hat Enterprise Linux OpenStack Platform environment. To recap, we recommended the following:
- Validate the underlying hardware performance using AHC
- Deploy Red Hat Enterprise Linux OpenStack Platform
- Validate the newly deployed infrastructure using Tempest
- Run Rally with specific scenarios that stress the control plane of the OpenStack environment
- Run CloudBench (cbtool) experiments that stress applications running in virtual machines within the OpenStack environment
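As a preview of step 4, a Rally run is driven by a task file describing the scenario and the load to apply. A minimal hypothetical sketch using the stock `NovaServers.boot_and_delete_server` scenario follows; the image and flavor names, and the load figures, are placeholders you would adapt to your own environment:

```shell
# Hypothetical Rally task: boot and delete 100 servers, 10 at a time.
# The "cirros" image and "m1.small" flavor names are assumptions.
cat > boot-and-delete.json <<'EOF'
{
    "NovaServers.boot_and_delete_server": [
        {
            "args": {
                "flavor": {"name": "m1.small"},
                "image": {"name": "cirros"}
            },
            "runner": {"type": "constant", "times": 100, "concurrency": 10}
        }
    ]
}
EOF

# Run the task against the deployment registered with Rally.
rally task start boot-and-delete.json
```

Varying the `concurrency` value across runs is a simple way to find the point at which the control plane's response times begin to degrade.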
In this post, we would like to focus on step 4: Running Rally with a specific scenario to stress the control plane of the OpenStack environment. The main objectives are:
Continue reading “Analyzing the performance of Red Hat Enterprise Linux OpenStack Platform using Rally”
In a previous “Driving in the Fast Lane” blog post we focused on optimization of instance CPU resources. This time around let’s take a dive into the handling of system memory, and more specifically configurable page sizes. We will reuse the environment from the previous post, but add huge page support to our performance flavor.
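As a sketch of where we are headed, huge pages are requested per flavor through the `hw:mem_page_size` extra spec; the `m1.performance` flavor name here is an assumption carried over from the previous post, and the exact client syntax may vary with your version:

```shell
# Request large (huge) pages for guests booted from this flavor.
# "m1.performance" is an assumed flavor name from the earlier post.
nova flavor-key m1.performance set hw:mem_page_size=large

# Confirm the extra spec was applied.
nova flavor-show m1.performance
```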
What are Pages?
Physical memory is segmented into a series of contiguous regions called pages. For efficiency, instead of accessing individual bytes of memory one by one, the system retrieves memory in entire pages. Each page contains a number of bytes, referred to as the page size. To do this, though, the system must first translate virtual addresses into physical addresses to determine which page contains the requested memory.
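To make the translation concrete, a virtual address splits into a page number and an offset within that page. A minimal Python sketch, assuming the 4 KiB page size typical on x86 (in practice, query the running system as shown):

```python
import os

# Query the system's base page size (typically 4096 bytes on x86).
page_size = os.sysconf("SC_PAGE_SIZE")

def split_address(virtual_addr, page_size):
    """Split a virtual address into its page number and in-page offset."""
    page_number = virtual_addr // page_size
    offset = virtual_addr % page_size
    return page_number, offset

# Example: with 4 KiB pages, address 0x1234 falls in page 1 at offset 0x234.
page_number, offset = split_address(0x1234, 4096)
print(page_number, hex(offset))  # 1 0x234
```

Larger pages mean fewer translations (and fewer translation-cache entries) for the same amount of memory, which is the motivation for huge pages.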
Continue reading “Driving in the Fast Lane: Huge Page support in OpenStack Compute”
As this Fall’s OpenStack Summit in Tokyo approaches, the Foundation has posted the session agenda, outlining the final schedule of events. I am happy to report that Red Hat has nearly 20 sessions included in the week’s agenda, along with a few more waiting as alternates. With the limited space and shortened event this time around, I am pleased to see that Red Hat remains in sync with the topics, projects, and technologies the OpenStack community and customers are most interested in.
Red Hat is a Premier sponsor in Tokyo this Fall and will have a dedicated sponsor presentation, along with our accepted general sessions. To learn more about Red Hat’s accepted sessions, have a look at the details below. Be sure to visit us at the sessions below and at our booth (P7). We look forward to seeing you in Tokyo in October!
For more details on each session, click on the title below:
Continue reading “Red Hat Confirms Speaking Sessions at OpenStack Summit Tokyo”
Organizations that take advantage of comprehensive insights from their data can gain a competitive edge. However, the ever-increasing amount of data coming in can make it hard to see trends. Adding to this challenge, many companies have data locked in silos, making it difficult—if not impossible—to gain critical insights. Big data technologies like Hadoop can help unify and organize data, but getting fast, meaningful insight still isn’t easy.
Organizations consistently face four main challenges when trying to implement big data initiatives:
- Setting up and operating a big data and analytics platform
- Attracting, managing, and applying big data and analytics skills
- Integrating insights into business processes
- Iterating quickly
Continue reading “Big data in the open, private cloud”
With so many advantages an Infrastructure-as-a-Service (IaaS) cloud provides businesses, it’s great to see a transformation of IT happening across nearly all industries and markets. Nearly every enterprise is taking advantage of an “as-a-service” cloud in some form or another. And with this new infrastructure, it’s now more important than ever to remember the critical role that management plays within this mix. Oddly enough, it is sometimes considered a second priority when customers begin investigating the benefits of an IaaS cloud, but quickly becomes your first priority when running one.
At Red Hat, we believe that management plays a critical role in this next-generation datacenter. And we believe cloud management should be open, agile, and integrated. Let us explain how we are integrating several critical management capabilities to let you take OpenStack to its fullest potential.
Continue reading “Managing OpenStack: Integration Matters!”
Successfully implementing an OpenStack cloud is more than just choosing an OpenStack distribution. With its community approach and rich ecosystem of vendors, OpenStack represents a viable option for cloud administrators who want to offer public-cloud-like infrastructure services in their own datacenter. Red Hat Enterprise Linux OpenStack Platform offers pluggable storage and networking options. This open approach is contrary to closed solutions such as VMware Integrated OpenStack (VIO), which only supports VMware NSX for L4-L7 networking or VMware Distributed Switch for basic L2 networking.
Below are some of the networking partners who have certified their OpenStack Networking plugins with Red Hat Enterprise Linux OpenStack Platform and will be on display at VMworld 2015 in San Francisco at the Red Hat booth (528); Cisco is at booth 1721. See the exhibitor map.
Continue reading “How Red Hat’s OpenStack partner Networking Solutions Offer Choice and Performance”
As OpenStack continues to grow into a mainstream Infrastructure-as-a-Service (IaaS) platform, the industry seeks to learn more about its performance and scalability for use in production environments. As recently captured in this blog, common questions that typically arise are: “Is my hardware vendor working with my software vendor?”, “How much hardware would I actually need?”, and “What are the best practices for scaling out my OpenStack environment?”
These common questions are often difficult to answer because they rely on environment specifics. With every environment being different, often composed of products from multiple vendors, how does one go about finding answers to these generic questions?
Continue reading “Performance and Scaling your Red Hat Enterprise Linux OpenStack Platform Cloud”
We live in a world that has changed the way it consumes applications. The last few years have seen a rapid rise in the adoption of Software-as-a-Service (SaaS) and Platform-as-a-Service (PaaS). Much of this can be attributed to the broad success of Amazon Web Services (AWS), which is said to have grown revenue from $3.1B to $5B last year (Forbes). More and more people, enterprise customers included, are consuming applications and resources that require little to no maintenance. And any maintenance that does happen now goes unnoticed by users. This leaves traditional software vendors scrambling to find a way to adapt their distribution models to make their software easier to consume. Lengthy, painful upgrades are no longer acceptable to users, forcing vendors to create a solution to this problem.
Let’s face it, the impact of this on traditional software companies is starting to be felt. Their services and methods of doing business are now being compared to a newer, more efficient model, one that is not bogged down by the inefficiencies of the traditional approach. SaaS vendors have the advantage that the software runs in their own datacenters, where they have easy access to it and control the hardware, the architecture, the configurations, and so on.
Continue reading “Upgrades are dying, don’t die with them”
One of the benefits of OpenStack is the ability to deploy the software on standard x86 hardware, and thus not be locked in to custom architectures and high prices from specialized vendors.
Before you select your x86 hardware, you might want to consider how you will resolve hardware/software related issues:
- Is my OpenStack distribution, and the underlying Linux, certified to run on the hardware I use?
- Will the vendor of my OpenStack distribution work with my hardware vendor to resolve issues?
There was a panel session (Cisco, Ooyala, Sprint, and Shutterfly) on OpenStack use cases at the OpenStack Summit in Vancouver, May 2015. At the end, an audience member asked “How important is it that the OpenStack distribution is certified to run on the hardware you use?”
Continue reading “How to choose the best-fit hardware for your OpenStack deployment”