Sean is a seasoned product manager with over 15 years of experience in
senior engineering, global operations, and services management roles at
virtualization and cloud companies. He has international experience
delivering storage virtualization products and designing private clouds
for enterprise customers across various market segments in the US, Europe,
and APAC. Sean focuses on cloud storage product management and strategy
for the Red Hat OpenStack Platform cloud offering. He is a member of the
OpenStack Foundation, a frequent speaker at OpenStack summits, and a
regular contributor to the Red Hat Stack blog - http://www.redhatstack.com/.
Last week we marked the general availability of our Red Hat OpenStack Platform 8 release, the latest version of Red Hat’s highly scalable IaaS platform, based on the OpenStack community “Liberty” release. A co-engineered solution that integrates the proven foundation of Red Hat Enterprise Linux with Red Hat’s OpenStack technology to form a production-ready cloud platform, Red Hat OpenStack Platform is becoming a gold standard for large production OpenStack deployments. Hundreds of global production deployments and even more proofs of concept are underway in the information, telecommunications, and financial sectors, and in large enterprises in general. Red Hat OpenStack Platform also benefits from a strong ecosystem of industry leaders for transformative network functions virtualization (NFV), software-defined networking (SDN), and more.
From Community Innovation to Enterprise Production
The path to delivering a production-ready cloud platform starts in the open source communities, which can typically innovate far more effectively than traditional R&D labs. At Red Hat we bring customers, partners, and developers into communities of purpose to solve shared problems together. Red Hat also contributes a significant amount of code to the OpenStack project, helping drive the community development that delivers the feature velocity enterprise customers need, with a faster time to market than proprietary software. When useful OpenStack technology emerges, we test it, harden it, and make it more secure and reliable.
Continue reading “Meet Red Hat OpenStack Platform 8”
The OpenStack Cinder Backup service was introduced in the Grizzly release to allow users to create backups of their volumes and store them in their Swift object storage system (still a very common use case in OpenStack private clouds to date). Since then, the Backup API has continued to mature with every release.
The catalog of OpenStack backup drivers has also become richer, recently adding target options for NFS and POSIX file systems as well as block backends such as the Ceph RBD store. Notably, one of the coolest evolution points was introduced in the new OpenStack “Mitaka” release: the first integration of the OpenStack Cinder Backup API with a non-OpenStack public cloud provider, Google Cloud Platform. This allows backing up OpenStack private cloud volumes to Google Cloud Platform.
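For readers curious what this looks like in practice, below is a minimal, illustrative cinder.conf fragment pointing the backup service at Google Cloud Storage; the option names follow the Mitaka-era driver documentation, and all values are placeholders:

```
[DEFAULT]
# Select the Google Cloud Storage backup driver (Mitaka-era module path)
backup_driver = cinder.backup.drivers.google

# Placeholder bucket, project, and credential values - replace with your own
backup_gcs_bucket = my-openstack-backups
backup_gcs_project_id = my-gcp-project
backup_gcs_credential_file = /etc/cinder/gcs-service-account.json
```

After restarting the cinder-backup service, backups created through the normal Backup API land in the configured bucket.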
Continue reading “Extending OpenStack Disaster Recovery to Google Cloud Storage”
OpenStack “Liberty,” due for imminent release, represents the 12th release of the open source computing platform for public and private clouds. Recent OpenStack releases have focused on improving stability and enhancing the operator experience. This is still the case with Liberty, but there are also new features to consider.
On October 1st we provided a sneak peek into the highlights of OpenStack Liberty; if you missed out, you can now view the recording of the event on demand. As well as providing an overview of the highlights of the Liberty release, we also discussed the recent restructuring of OpenStack project governance, colloquially referred to as the “big tent,” and what it means for you as a consumer of OpenStack.
Continue reading “What’s new in OpenStack Liberty: webinar recap”
The OpenStack Summit, taking place October 27-30 in Tokyo, will be a four-day conference for OpenStack contributors, enterprise users, service providers, application developers, and ecosystem members. Attendees can expect visionary keynote speakers, 200+ breakout sessions, hands-on workshops, collaborative design sessions, and lots of networking. In keeping with the open source spirit, you are in the front seat to cast your vote for the sessions that are important to you!
Today we will take a peek at some recommended storage-related session proposals for the Tokyo summit; be sure to vote for your favorites! To vote, click on a session title below and you will be directed to the voting page. If you are a member of the OpenStack Foundation, just log in. If you are not, you are welcome to join now – it is simple and free.
Please make sure to vote before the deadline on Thursday, July 30, 2015, at 11:59pm PDT.
The new OpenStack Kilo upstream release, which became available on April 30, 2015, marks a significant milestone for Manila, the shared file system service project for OpenStack, with an increase in development capacity and extensive vendor adoption. The project was kicked off three years ago, became incubated during 2014, and now moves to the front of the stage at the upcoming OpenStack Vancouver Summit taking place this month, with customer stories of Manila deployments in enterprise and telco environments.
The project was originally sponsored and accelerated by NetApp and Red Hat and has established a very rich community that includes code contributions from companies such as EMC, Deutsche Telekom, HP, Hitachi, Huawei, IBM, Intel, Mirantis, and SUSE.
The momentum of cloud shared file services is not limited to the OpenStack open source world. In fact, last month at the AWS Summit in San Francisco, Amazon announced its new shared file storage for Amazon EC2, the Amazon Elastic File System, also known as EFS. This new storage service is an addition to the existing AWS storage portfolio: Amazon Simple Storage Service (S3) for object storage, Amazon Elastic Block Store (EBS) for block storage, and Amazon Glacier for archival cold storage.
Amazon EFS provides standard file system semantics and is based on NFS v4, which allows multiple EC2 instances to access the file system at the same time, providing a common data source for a wide variety of workloads and applications shared across thousands of instances. It is designed for a broad range of use cases, such as home directories, content repositories, development environments, and big data applications. Data uploaded to EFS is automatically replicated across different availability zones, and because EFS file systems are SSD-based, there should be few latency- and throughput-related problems with the service. As a file system as a service, Amazon EFS allows users to create and configure file systems quickly with no minimum fee or setup cost; customers pay only for the storage used by the file system, based on elastic storage capacity that automatically grows and shrinks as files are added and removed on demand.
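Because EFS exposes standard NFS v4 semantics, an instance mounts it like any other NFS share. A sketch of an /etc/fstab entry, with a hypothetical file system endpoint and mount point:

```
# Hypothetical EFS endpoint; mounts the shared file system at /mnt/efs over NFS v4.1
fs-12345678.efs.us-east-1.amazonaws.com:/  /mnt/efs  nfs4  nfsvers=4.1,_netdev  0  0
```

Any number of instances in the availability zone can carry the same entry and see a single, shared namespace.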
The 10th OpenStack release, Juno, added ten new storage backends and improved testing of third-party storage systems. The Cinder block storage project continues to mature with each cycle, exposing more and more enterprise cloud storage infrastructure functionality.
Here is a quick overview of some of these key features.
Simplifying OpenStack Disaster Recovery with Volume Replication
After the Icehouse release introduced a new Cinder Backup API that allows exporting and importing backup service metadata, enabling “electronic tape shipping” style backup-export and backup-import capabilities to recover OpenStack cloud deployments, the next step for disaster recovery enablement in OpenStack is the foundation of volume replication support at the block level.
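Conceptually, the export call packages a backup’s service metadata into a small, portable record that can be carried to a recovery site and re-imported there. The simplified Python sketch below illustrates the idea only; it is not Cinder’s actual implementation, and the field names are borrowed loosely from the backup-export record format:

```python
import base64
import json

def export_backup_record(backup_metadata):
    # Serialize the backup's metadata into a portable, opaque record,
    # similar in spirit to Cinder's backup-export call.
    url = base64.b64encode(json.dumps(backup_metadata).encode()).decode()
    return {"backup_service": backup_metadata["service"], "backup_url": url}

def import_backup_record(record):
    # Reconstruct the backup metadata at the recovery site,
    # as backup-import would before restoring the volume.
    return json.loads(base64.b64decode(record["backup_url"]).decode())

# Hypothetical backup metadata, for illustration only
meta = {"id": "backup-1", "volume_id": "vol-1",
        "service": "cinder.backup.drivers.swift"}
record = export_backup_record(meta)
restored = import_backup_record(record)
assert restored == meta
```

The point of the round trip is that the record is self-describing: a second OpenStack deployment with access to the same backup store can re-register the backup from the record and restore the volume from it.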
Anyone who is serious about big data, scale-out applications, and cloud infrastructure should want to intimately understand the benefits of scale-out architecture and the resource elasticity of cloud services. As we continue our evolution into a deeper understanding of data, we see a need for agile access to an elastic big data platform. Such a platform can allow us to capture, synthesize, and quantify data into business value.
Enter OpenStack Sahara – the intersection of Hadoop and OpenStack.
An OpenStack project started by Red Hat, Mirantis, and Hortonworks during the OpenStack Havana summit in Portland, Sahara was incubated for the OpenStack Icehouse release and is expected to be integrated in OpenStack Juno by the end of 2014.
Sahara’s mission is to provide a scalable data processing stack and associated management interfaces. Sahara delivers on that mission by providing the ability to rapidly create and manage Apache Hadoop™ clusters and easily run workloads across them, all on OpenStack-managed infrastructure, without having to deal with the details of cluster management.
With full cluster lifecycle management (provisioning, scaling, and termination), Sahara allows the user to select different Hadoop versions, cluster topologies, and node hardware details.
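To make the topology point concrete, Sahara describes clusters through templates. Below is a hypothetical cluster template sketch; the plugin name and Hadoop version follow the vanilla plugin of that era, but all names, counts, and flavors are illustrative only:

```
{
  "name": "demo-hadoop-cluster",
  "plugin_name": "vanilla",
  "hadoop_version": "2.3.0",
  "node_groups": [
    {"name": "master", "flavor_id": "m1.large", "count": 1,
     "node_processes": ["namenode", "resourcemanager"]},
    {"name": "worker", "flavor_id": "m1.medium", "count": 3,
     "node_processes": ["datanode", "nodemanager"]}
  ]
}
```

Scaling the cluster is then just a matter of changing a node group count and letting Sahara reconcile the running instances.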
Continue reading “Sahara: OpenStack Elastic Hadoop on Demand”
The latest OpenStack 2014.1 release introduces many important new features across the OpenStack storage services, including advanced block storage quality of service, a new API to support disaster recovery between OpenStack deployments, a new advanced multi-location strategy for the OpenStack Image service, and many improvements to authentication, replication, and metadata in OpenStack Object Storage.
Here is a sneak peek at the upcoming Icehouse release:
Block Storage (Cinder)
The Icehouse release includes many quality and compatibility improvements, such as better block storage load distribution in the Cinder scheduler (replacing the Simple/Chance schedulers with the FilterScheduler), advancing to the latest TaskFlow support in volume creation, and added support for quota delete in Cinder. It also adds automated FC SAN zone/access control management for Fibre Channel volumes, reducing pre-zoning complexity in cloud orchestration and preventing unrestricted fabric access.
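As a rough illustration of how these pieces surface to an operator, here is a hedged cinder.conf sketch; option names follow the Icehouse-era documentation, and the Brocade zone driver path is one example among several vendor drivers:

```
[DEFAULT]
# The FilterScheduler replaces the older Simple/Chance schedulers
scheduler_driver = cinder.scheduler.filter_scheduler.FilterScheduler
# Enable automated FC SAN zoning for Fibre Channel volumes
zoning_mode = fabric

[fc-zone-manager]
# Example vendor zone driver (illustrative)
zone_driver = cinder.zonemanager.drivers.brocade.brcd_fc_zone_driver.BrcdFCZoneDriver
```

With zoning enabled, Cinder adds and removes fabric zones automatically as volumes are attached and detached, rather than requiring the whole fabric to be pre-zoned.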
In my previous blog post, I shared the vision of Disaster Recovery as a Service (DRaaS) for OpenStack, an umbrella topic that describes what needs to be done to protect workloads running in an OpenStack cloud from a large-scale disaster.
Last week we shared this vision in several sessions at the OpenStack Summit. While OpenStack attendees were discussing infrastructure disaster recovery topics in Hong Kong, one of the strongest tropical cyclones in recorded history, Typhoon Haiyan (also known as Typhoon Yolanda), devastated multiple coastal cities in the Philippines, took the lives of thousands of people, and forced millions to evacuate. The storm destroyed entire cities, villages, airports, roads, and power and communications infrastructure.
If there’s one thing that history has not only taught us but keeps on teaching us every year, it is that catastrophic events do happen, and that if we don’t invest in preventative measures now, we will pay a hefty price later.
At a time when the rules of enterprise IT are constantly changing and every day there seems to be a new app born in the cloud, we must not forget to ask ourselves what challenges we face with these changes and rapid app development. What do we need to do to secure the horizon? What technology bridges are still waiting to be built to get us where we want to be in terms of service levels and securing cloud workload availability?