It seems like only yesterday that the OpenStack community found itself gathering in Hong Kong to set the design goals for the Icehouse release. As we entered March, development was still progressing at a fever pitch in the lead-up to the feature freeze for the release, but now the dust has started to settle and we can start to get a real feel for what OpenStack users and operators have to look forward to in the Icehouse release.
Today I’ll be giving a sneak peek at just some of the changes made in OpenStack Compute (Nova), one of the two projects that made up the original OpenStack release and still today one of the largest, showing no signs of the pace of innovation slowing. OpenStack Compute is a cloud computing fabric controller, a central component of an Infrastructure as a Service (IaaS) system. It is responsible for managing the hypervisors on which virtual machine instances ultimately run and for managing the lifecycle of those instances. This list is by no means exhaustive, but it highlights some key features and the rapid advances made by the contributors who make up the OpenStack community in a six-month release cycle.
Libvirt/Kernel-based Virtual Machine (KVM) Driver Enhancements
The OpenStack User Survey results presented in Hong Kong indicated that a whopping 62% of respondents use the Libvirt/KVM hypervisor to power the compute services offered by their OpenStack clouds. The combination of the Libvirt virtualization abstraction layer with the performance offered by the KVM hypervisor has long been cemented in the datacenter and now extends to the elastic cloud. In the Icehouse release, OpenStack contributors have continued to find new and innovative ways to expose the functionality provided by this combination of technologies to operators and users of elastic OpenStack clouds, delivering a number of tangible features in the abstraction layer provided by this compute driver:
- It is now possible to add a Virtio RNG device to compute instances to provide increased entropy. Virtio RNG is a paravirtualized random number generator device that allows the compute node to provide entropy to compute instances in order to fill their entropy pools. The default entropy source used on the host is /dev/random; however, a hardware RNG device physically attached to the host can also be used. The Virtio RNG device is enabled using the hw_rng property in the metadata of the image used to build the instance (see the combined example after this list).
- Watchdog support has been added, allowing instance lifecycle events to be triggered when a crash or kernel panic is detected within the instance. The watchdog device used is an i6300esb. It is enabled by setting the hw_watchdog_action property in the image properties or flavor extra specifications to a value other than disabled. The supported hw_watchdog_action values, which denote the action for the watchdog device to take when it detects an instance failure, are poweroff, reset, pause, and none.
- It is now possible to configure instances to use a video driver other than the default (cirrus). This allows the specification of different video driver models, amounts of video RAM, and numbers of video heads. These values are configured by setting the hw_video_model, hw_video_vram, and hw_video_head properties respectively in the image metadata. Currently supported video driver models are vga, cirrus, vmvga, xen, and qxl.
- Modified kernel arguments can now be provided to booting compute instances. The kernel arguments are retrieved from the os_command_line key in the image metadata as stored in the OpenStack Image Service (Glance), if a value for the key was provided. If no value is provided, the default kernel arguments continue to be used.
- VirtIO SCSI (virtio-scsi) can now be used instead of VirtIO Block (virtio-blk) to provide block device access for instances. VirtIO SCSI is a paravirtualized SCSI controller device designed as a future successor to VirtIO Block, aiming to provide improved scalability and performance.
- Changes have been made to the expected format of the /etc/nova/nova.conf configuration file with a view to ensuring that all configuration groups in the file use descriptive names. A number of driver specific flags, including those for the Libvirt driver, have also been moved to their own option groups.
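To give a feel for how these driver features are consumed, here is a minimal sketch using the glance and nova command line clients to set the relevant image properties and flavor extra specifications. The image name (fedora-20), flavor name (m1.small), and the specific property values are illustrative assumptions; the property names are those described above, and the exact accepted values should be confirmed against the documentation for your release.

```
# Illustrative only: "fedora-20" and "m1.small" are placeholder names.

# Expose a Virtio RNG device to instances booted from this image
# (property name as described above; the value shown is an assumption).
glance image-update fedora-20 --property hw_rng=true

# Reset the instance if the i6300esb watchdog detects a failure; the same
# key can alternatively be set as a flavor extra specification.
glance image-update fedora-20 --property hw_watchdog_action=reset
nova flavor-key m1.small set hw_watchdog_action=reset

# Select the qxl video driver and adjust video RAM and head count
# (values are illustrative).
glance image-update fedora-20 \
    --property hw_video_model=qxl \
    --property hw_video_vram=64 \
    --property hw_video_head=2

# Provide modified kernel arguments to instances booted from this image.
glance image-update fedora-20 --property os_command_line='console=ttyS0'
```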
Compute API Enhancements
A number of contributors have been hard at work extending the Compute v2 API while work also continued on an updated API that may one day displace it, Compute v3. This has been a hot topic in recent weeks as contributors discussed how to carefully balance the desire for API innovation with the needs of operators and users.
In the meantime, development of the Compute v2 API continues. Some key extensions and changes made in the Icehouse release are:
- API facilities have been added for defining, listing, and retrieving the details of instance groups. Instance groups provide a facility for grouping related virtual machine instances at boot time and applying policies that determine how they must be scheduled in relation to other members of the group. Currently supported policies are affinity, which indicates that all instances in the group should be scheduled to the same host, and anti-affinity, which indicates that all instances in the group should be scheduled to separate hosts. Retrieving the details of an instance group using the updated API also returns the list of group members (see the sketch after this list).
- The Compute API now exposes a mechanism for permanently removing decommissioned compute nodes. Previously these would continue to be listed even where the compute service had been disabled and the system re-provisioned. This functionality is provided by the ExtendedServicesDelete API extension.
- The Compute API now exposes the hypervisor IP address, allowing it to be retrieved by administrators using the “nova hypervisor-show” command.
- The Compute API currently supports both XML and JSON formats. Support for the XML format has now been marked as deprecated and will be retired in a future release.
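As a rough sketch of how the new instance group facility might be driven from the command line, the following assumes the python-novaclient server-group commands associated with this release; the group, image, flavor, and instance names are placeholders, and the exact client syntax may differ slightly between client versions.

```
# Create an instance group whose members must be scheduled to separate hosts.
nova server-group-create db-group anti-affinity

# List instance groups and note the UUID of the group just created.
nova server-group-list

# Boot an instance into the group by passing a scheduler hint, where
# $GROUP_ID is the UUID reported by server-group-list.
nova boot --image fedora-20 --flavor m1.small --hint group=$GROUP_ID db01

# The hypervisor host IP address is now included in the details returned
# by the hypervisor-show command (administrators only).
nova hypervisor-show 1
```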
Compute support for notifications has been growing with every release, with more and more actions gradually being modified to generate a notification that operators, users, and orchestration systems can capture to track events. Notable notifications added in the Icehouse release are:
- Notifications are now generated when a Compute host is enabled, disabled, powered on, shut down, rebooted, put into maintenance mode and taken out of maintenance mode.
- Notifications are now generated upon the creation and deletion of keypairs.
The compute scheduler is responsible for determining which compute node a launched instance will be placed on, based on a series of configurable filters and weights. While efforts remain under way to further decouple the scheduler from Nova, this remains an area of rich innovation within the project. In the Icehouse release:
- Modifications have been made to the scheduler to add an extensible framework that allows it to make decisions based on resource utilization. Expect to see more development in this space in coming releases, particularly as the framework is extended to handle specific resource classes.
- An initial experimental implementation of a caching scheduler driver was added. The caching scheduler uses the existing facilities for applying scheduler filters and weights but caches the list of available hosts. When a user request is passed to the caching scheduler it attempts to perform scheduling based on the list of cached hosts, with a view to improving scheduler performance.
- A new scheduler filter, AggregateImagePropertiesIsolation, has been introduced. The new filter schedules instances to hosts based on matching namespaced image properties with host aggregate properties. Hosts that do not belong to any host aggregate remain valid scheduling targets for instances based on all images. The new Compute service configuration keys aggregate_image_properties_isolation_namespace and aggregate_image_properties_isolation_separator determine which image properties are examined by the filter (a configuration sketch follows this list).
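The configuration sketch below shows one way an operator on an RDO-style installation might enable the new filter and, optionally, the experimental caching scheduler driver. The filter list, namespace, separator value, and the caching scheduler driver path are assumptions for illustration; check the release notes and your existing scheduler_default_filters value before applying anything like this.

```
# Illustrative only: adjust the filter list to match your deployment.
openstack-config --set /etc/nova/nova.conf DEFAULT scheduler_default_filters \
    RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,AggregateImagePropertiesIsolation
openstack-config --set /etc/nova/nova.conf DEFAULT \
    aggregate_image_properties_isolation_namespace isolation
openstack-config --set /etc/nova/nova.conf DEFAULT \
    aggregate_image_properties_isolation_separator .

# Optionally try the experimental caching scheduler driver
# (driver path is an assumption; verify against your release).
openstack-config --set /etc/nova/nova.conf DEFAULT scheduler_driver \
    nova.scheduler.caching_scheduler.CachingScheduler

# Restart the scheduler service for the changes to take effect.
service openstack-nova-scheduler restart
```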
During the Icehouse cycle, work has continued to facilitate third-party testing of the hypervisor drivers that live in the OpenStack Compute source tree. This allows third parties to provide continuous integration (CI) infrastructure that runs regression tests against each proposed OpenStack Compute patch and records the results so that they can be referred to as the code is reviewed. This ensures not only test coverage for these drivers but also valuable additional test coverage of the shared componentry provided by the OpenStack Compute project itself.
The Compute services now allow for a level of rolling upgrade, whereby control services can be upgraded to Icehouse while they continue to interact with compute services running code from the Havana release. This allows for a more gradual approach to upgrading an OpenStack cloud, or logical designated subset thereof, than has typically been possible in the past.
Work on stabilizing Icehouse will go on for some time before the community gathers again in May for OpenStack Summit 2014 in Atlanta to define the design vision for the next six-month release cycle. If you want to help test some of the features above, why not get involved in the upcoming RDO test day using freshly baked packages based on the third Icehouse release milestone?
Posted by stephenagordon on March 11, 2014
By Keith Basil, Principal Product Manager, Red Hat
As a product manager and OpenStack evangelist, you may think that my standard response to the question “Is OpenStack for you?” would be an unequivocal “Yes!”
Well, that’s not necessarily the case here.
To help bring clarity to the question, we’ve developed a webinar that tackles when (and when not) to use OpenStack. In the webinar, we point out the characteristics of applications likely to flourish when used with OpenStack. We also explore various approaches for getting started with OpenStack.
Read the full post »
Posted by Maria Gallegos on February 28, 2014
By Jeff Jameson, Sr. Principal Product Marketing Manager, Red Hat
With the voting polls open for the past week, the OpenStack Foundation is collecting votes for all sessions proposed for this spring’s OpenStack Summit in Atlanta. Red Hat is doing its part to contribute as many innovative and useful sessions to the agenda as possible. With a variety of sessions submitted, from low-level discussions on network routing and storage all the way through real-world success stories sharing experiences and lessons learned from deploying an OpenStack cloud, we’ve got a great lineup to offer you.
Each and every vote counts, so if you haven’t already voted, have a look through all the Red Hat submitted sessions and vote for your favorites! Just click on the title to cast your vote. Remember, voting closes on Monday, March 3rd.
Read the full post »
Posted by Maria Gallegos on February 27, 2014
Since the announcement of RDO and Red Hat OpenStack at the Spring 2013 OpenStack Summit, these have arguably become two of the most popular ways to install OpenStack. Both use the puppet-openstack modules to install OpenStack, and are just a sampling of the OpenStack installers that are based on Puppet.
Read the full post here: http://developerblog.redhat.com/2014/01/28/deploy-to-upgrade-puppet-openstack-modules/
Originally posted January 28, 2014, by Christian Hoge.
Posted by Maria Gallegos on February 27, 2014
Originally posted on August 12, 2013, by Tim Burke, vice president, Cloud and Virtualization Development, Red Hat – Part 4 of a 4 part series
Tim’s earlier posts include:
As described in my earlier posts, it is plain to see that Red Hat is not treating OpenStack as “just” a layered product.
Rather, Red Hat Enterprise Linux OpenStack Platform is the next major evolution of the Red Hat Enterprise Linux family. The tight levels of integration and responsible, enterprise-grade feature enhancement necessitate this combination. We believe that doing OpenStack right – making it secure, performant, easy to use, and able to evolve over time – is only possible by taking a holistic approach.
Read the full post »
Posted by Maria Gallegos on February 25, 2014
Originally posted on August 5, 2013, by Tim Burke, vice president, Cloud and Virtualization Development, Red Hat – Part 3 of a 4 part series 
In my last post, I discussed a small subset of the security, storage, networking, virtualization, and performance optimizations that make the Red Hat Enterprise Linux OpenStack Platform offering technically superior. Yet, as innovation continues in the vibrant upstream OpenStack and Linux communities, Red Hat’s integration work is ongoing. Our subscription model assures that customers will continue to have access to this ongoing stream of innovation – innovation that is made possible through the tight coordination of the Red Hat Enterprise Linux development team, which now includes OpenStack components. The goals of that coordination include:
- Component Integration – There are several parts of OpenStack that have dependencies on specific versions of run-times or system utilities. For example, there are specific networking modules required for software-defined networks (SDNs), specific versions of python run-times, custom Security-Enhanced Linux (SELinux) security policies, and even system tunings for virtualized guest environments. Piecing together the specific versions and making the completed whole function optimally can be a daunting challenge.
Read the full post »
Posted by Maria Gallegos on February 20, 2014
Originally posted on July 24, 2013, by Tim Burke, vice president, Cloud and Virtualization Development, Red Hat – Part 2 of a 4 part series
OpenStack delivers a highly scalable cloud environment for a variety of applications. But, cloud workloads present new challenges for underlying operating system platforms. The nature of the cloud is to be agile, not static. Virtual machines are quickly created and destroyed in large numbers. Storage and networking need to be flexible and highly performant. Red Hat Enterprise Linux has evolved to match the pace and unique characteristics of cloud deployments and is optimized for OpenStack in several ways, including:
- Security – Cloud environments don’t deploy applications on dedicated hardware. Rather, they deploy multiple virtual machines on top of a pool of generic hardware resources, with virtual machines often sharing the same hardware. In this deployment model, virtual machine isolation is a key security concern. Enter Red Hat Enterprise Linux and the fine-grained permission enforcement afforded by Security-Enhanced Linux (SELinux) at the file, network, and user levels. In Red Hat Enterprise Linux OpenStack Platform, SELinux enforces specific policies that are unique to the needs of OpenStack, such as enabling OpenStack to configure network namespaces which utilize OpenStack’s network services. The benefit of SELinux is that it prevents different virtual guests from maliciously accessing network ports and connections. In this way, the security inherent in Red Hat Enterprise Linux enhances the security of the OpenStack cloud environment.
Read the full post »
Posted by Maria Gallegos on February 18, 2014
Originally posted on July 18, 2013 by Tim Burke, vice president, Cloud and Virtualization Development, Red Hat – Part 1 of a 4 part series 
Throughout its history, Red Hat Enterprise Linux has been transformative in the information technology infrastructure platform arena. It was founded on the principles of bringing the stability and longer lifecycle required by commercial IT organizations to the rapidly changing, community-developed Linux operating system. This unleashed a wave of commoditized computing as Red Hat Enterprise Linux displaced expensive proprietary UNIX offerings, delivering customers lower costs and freedom from vendor lock-in.
The next wave of Red Hat Enterprise Linux focused on being first in the industry to offer the highest levels of security built into the mainstream product rather than being an obscure offshoot. This focus on security – including collaboration with the U.S. government’s National Security Agency (NSA) on Security-Enhanced Linux (SELinux) – paved the way for security-conscious governments and businesses around the globe to adopt Red Hat Enterprise Linux.
Read the full post »
Posted by Maria Gallegos on February 13, 2014
By Randy Russell, Director of Certification, Red Hat
We are pleased to announce the continued evolution of Red Hat’s training and certification programs in support of Red Hat Enterprise Linux OpenStack Platform, which delivers Red Hat OpenStack technology optimized for and integrated with Red Hat Enterprise Linux. This week we are announcing the expansion of our core system administration course, Red Hat OpenStack Administration, from three days to four so we can drill deeper into this emerging technology. We have also re-titled the Red Hat Certificate of Expertise in Infrastructure-as-a-Service to Red Hat Certified System Administrator in Red Hat OpenStack; we want to make sure IT professionals worldwide understand what we are certifying with this important new credential. In coming months we plan to add to our OpenStack course and exam offerings. If you are attending Red Hat Summit, please consider one of the training events we will be offering there.
Read the full post »
Posted by Maria Gallegos on February 11, 2014
By Flavio Percoco, Software Engineer, Red Hat
As many of you know, OpenStack is a fully distributed system. As such, it keeps its services (nova, glance, cinder, keystone, etc.) as decoupled as possible and tries to stick to established distributed-systems paradigms, deployment strategies, and architectures. For example, one of the main tenets throughout OpenStack is that every module should use a Shared Nothing Architecture (SNA), which states that each node should be independent and self-sufficient. In other words, all nodes in an SNA are completely isolated from each other in terms of space and memory.
There are other distribution principles that are part of OpenStack’s tenets; however, this post is not about what principles OpenStack as a whole tries to follow, but rather about how OpenStack sticks together such a heavily distributed architecture and makes it work as one. The first thing we need to do is evaluate some of the integration methods that exist out there and how they’re being used within OpenStack. Before we get there, let me explain what an integration method is.
Read the full post »
Posted by Maria Gallegos on February 6, 2014