As the OpenStack market continues to mature, some organizations have made the move and put OpenStack projects into production. They have done this in a variety of ways and for a variety of reasons. Other organizations, however, have waited to see what these first movers are doing with it, and whether they are successful, before exploring it for themselves.
As such, we’re pleased to announce the availability of four new analyst white papers from 451 Research on how organizations are using OpenStack in production. The information in these papers is based on 451 Research’s own insights as well as interviews with customers who have put OpenStack into production.
Continue reading “OpenStack Use Cases – New Analyst Papers and Webinar Now Available”
In OpenStack jargon, an Instance is a Virtual Machine: the guest workload. It boots from an operating system image and is configured with a certain amount of CPU, RAM, and disk space, among other parameters such as networking or security settings.
In this blog post, kindly contributed by Marko Myllynen, we’ll explore nine configuration and optimization options that will help you achieve the performance, reliability, and security your workloads require.
Some of the optimizations can be done inside a guest regardless of what the OpenStack Cloud Administrator has enabled in your cloud. However, more advanced options require prior enablement and, possibly, special host capabilities. This means many of the options described here will depend on how the Administrator configured the cloud, or may not be available to some tenants because they are reserved for certain groups. More information about this subject can be found on the Red Hat Documentation Portal and its comprehensive guide on the OpenStack Image Service. Similarly, the upstream OpenStack documentation has some extra guidelines available.
The following configurations should be evaluated for any VM running in any OpenStack environment. These changes have no side effects and are typically safe to enable even when unused.
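As a concrete starting point, a guest-side baseline such as the QEMU guest agent and the tuned virtual-guest profile can be applied at boot via cloud-init. This is a sketch that assumes a RHEL/CentOS guest image with cloud-init enabled; package and profile names may differ on other distributions:

```yaml
#cloud-config
# Guest-side baseline for a KVM-backed OpenStack instance (sketch).
packages:
  - qemu-guest-agent   # lets the hypervisor quiesce and query the guest
  - tuned              # dynamic tuning daemon
runcmd:
  - systemctl enable --now qemu-guest-agent
  - tuned-adm profile virtual-guest   # tuned profile optimized for VMs
```

Note that the guest agent channel also has to be enabled on the image side (the `hw_qemu_guest_agent=yes` Glance image property), which is one of those options that depends on what the Cloud Administrator has configured.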
Continue reading “9 tips to properly configure your OpenStack Instance”
Our excellent Training &amp; Certification team has posted some videos on our RedHatCloud YouTube channel that quickly go over the installation procedure for Red Hat OpenStack Platform 8, and how to boot a CloudForms instance to perform basic management functions. Kudos to our awesome video team (Jim Meegan and Ben Oliver) and to our curriculum architect (Forrest Taylor).
These videos were first developed as guided demonstrations for use in our Red Hat OpenStack Administration II (CL210) and Red Hat CloudForms Hybrid Cloud Management (CL220) courses. Now they are available for you to view for free. Remember that we also offer a free introductory course, the CL010 Red Hat OpenStack Technical Overview, to get a taste of our courses.
Continue reading “6 videos on how to install Red Hat OpenStack Platform and CloudForms”
More than 5,200 OpenStack professionals and enthusiasts gathered in Barcelona, Spain to attend the 2016 OpenStack Summit. From the keynotes to the break-out sessions to the marketplace to the evening events and the project work sessions on Friday, there was plenty to keep attendees busy throughout the week. In fact, if you were one of the lucky ones who attended OpenStack Summit, there were probably many sessions and activities you wanted to make it to but couldn’t.
Red Hat was very busy throughout the week as well, as we participated in 49 sessions, staffed a booth in the marketplace with five demo stations, announced several new and exciting customers, hosted and co-hosted evening events throughout the week, and held hands-on, intensive training through OpenStack Academy. So if you weren’t able to make it to every Red Hat session, or couldn’t go to the Summit at all, here is a recap of everything we did.
Continue reading “Recapping OpenStack Summit Barcelona”
Ansible offers great flexibility. Because of this, the community has figured out many useful ways to leverage Ansible modules and playbook structures to automate frequent operations on multiple layers, including using it with OpenStack.
In this blog we’ll cover the many use cases for Ansible, the most popular automation software, with OpenStack, the most popular cloud infrastructure software. We’ll help you understand how and why you should use Ansible to make your life easier, in what we like to call Full-Stack Automation.
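As a small taste of what that looks like, a playbook can boot an instance declaratively with Ansible’s `os_server` OpenStack module. This is a minimal sketch: the cloud, image, flavor, network, and key names below are placeholders you would replace with values from your own environment:

```yaml
# boot-instance.yml -- minimal sketch; resource names are placeholders
- hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - name: Launch a test instance on an OpenStack cloud
      os_server:
        cloud: mycloud          # named entry in your clouds.yaml
        name: demo-vm
        image: rhel-7.3         # placeholder image name
        flavor: m1.small
        network: private        # placeholder tenant network
        key_name: mykey
        state: present
```

Running `ansible-playbook boot-instance.yml` is then idempotent: a second run finds the instance already present and changes nothing, which is exactly the property that makes Ansible attractive for full-stack automation.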
Continue reading “Full Stack Automation with Ansible and OpenStack”
This Fall’s 2016 OpenStack Summit in Barcelona, Spain is gearing up to be a fulfilling event. After some challenging issues with the voting system (which prevented direct URLs to each session), the Foundation has posted the final session agenda detailing the entire week’s schedule of events. Once again, I am thrilled to see the voting results of the greater community, with Red Hat sharing over 40 sessions of technology overviews and deep dives into OpenStack services for containers, storage, networking, compute, network functions virtualization (NFV), and much more.
As a Premier sponsor this Fall, Red Hat also has a full-day breakout room, where we plan to share additional product and strategy sessions. To learn more about Red Hat’s accepted sessions, have a look at the details below. We’ll add the agenda details of our breakout soon! Also, be sure to visit us at our Marketplace booth to meet the team and check out one of our live demonstrations. The Marketplace kicks off on Monday evening during the booth crawl, 5:00 – 7:00pm. Finally, we’ll have several Red Hat engineers, product managers, consultants, and executives in attendance, so be sure to talk to your Red Hat representative to schedule an in-person meeting while there.
And in case you haven’t registered yet, visit our landing page for a discounted registration code to help get you to the event. We look forward to seeing you all again in Spain this October!
For more details on each session, click on the title below:
Continue reading “Red Hat Confirms Over 40+ Accepted Sessions at OpenStack Summit Barcelona”
When I think about open source software, Red Hat is the first name that comes to mind. At Tesora, we’ve been working to make our Database as a Service Platform available to Red Hat OpenStack Platform users, and now it is a Red Hat certified solution. Officially collaborating with Red Hat in the context of OpenStack, one of the fastest growing open source projects ever, is a tremendous opportunity.
This week, we announced that Red Hat has certified the Tesora Database as a Service (DBaaS) Platform on Red Hat OpenStack Platform. Mutual customers can operate database as a service with 15 different database types, knowing that they have been extensively tested in the Red Hat environment. They also have the confidence of knowing that their database software is running on Red Hat Enterprise Linux (RHEL) in an environment that is supported by Red Hat.
Continue reading “Thoughts on Red Hat OpenStack Platform and certification of Tesora Database as a Service Platform”
In our previous post we introduced Red Hat OpenStack Platform Director. We showed how at the heart of Director is TripleO, short for “OpenStack on OpenStack”. TripleO is an OpenStack project that aims to utilise OpenStack itself as the foundation for deploying OpenStack. To clarify, TripleO advocates the use of native OpenStack components, and their respective APIs, to configure, deploy, and manage OpenStack environments.
The major benefit of utilising these existing APIs with Director is that they’re well documented, they go through extensive integration testing upstream, and they are among the most mature components in OpenStack. For those who are already familiar with the way OpenStack works, it’s a lot easier to understand how TripleO (and therefore Director) works. Feature enhancements, security patches, and bug fixes are automatically inherited by Director, without us having to play catch-up with the community.
With TripleO, we refer to two clouds. The first to consider is the undercloud: the command-and-control cloud, a smaller OpenStack environment whose sole purpose is to bootstrap a larger production cloud. That larger cloud is known as the overcloud, where tenants and their respective workloads reside. Director is sometimes treated as synonymous with the undercloud; Director bootstraps the undercloud OpenStack deployment and provides the necessary tooling to deploy an overcloud.
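That division of labor surfaces directly on the TripleO command line: the undercloud is installed first, and the overcloud is then deployed from it. The following is a sketch of the director workflow as it looked around Red Hat OpenStack Platform 8; a real deployment passes additional environment files and flags beyond `--templates`, and `instackenv.json` is the conventional name for the node-inventory file:

```
# On the undercloud node, as the ‘stack’ user:
openstack undercloud install          # bootstrap the command-and-control cloud

# Register and introspect the bare-metal nodes the overcloud will use
openstack baremetal import --json instackenv.json
openstack baremetal introspection bulk start

# Deploy the production (overcloud) environment from Heat templates
openstack overcloud deploy --templates
```

Every step here is an ordinary OpenStack API call under the hood — Ironic for the bare-metal nodes, Heat for the deployment — which is exactly the “OpenStack on OpenStack” idea.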
Continue reading “TripleO (Director) Components in Detail”
Those familiar with OpenStack already know that deployment has historically been a bit challenging. That’s mainly because deployment includes a lot more than just getting the software installed – it’s about architecting your platform to use existing infrastructure as well as planning for future scalability and flexibility. OpenStack is designed to be a massively scalable platform, with distributed components on a shared message bus and database backend. For most deployments, this distributed architecture consists of Controller nodes for cluster management, resource orchestration, and networking services, Compute nodes where the virtual machines (the workloads) are executed, and Storage nodes where persistent storage is managed.
The Red Hat recommended architecture for fully operational OpenStack clouds includes predefined and configurable roles that are robust, resilient, ready to scale, and capable of integrating with a wide variety of existing third-party technologies. We do this by leveraging the logic embedded in Red Hat OpenStack Platform Director (based on the upstream TripleO project).
With Director, you’ll use OpenStack language to create a truly Software Defined Data Center. You’ll use Ironic drivers for your initial bootstrapping of servers, and Neutron networking to define management IPs and provisioning networks. You will use Heat to document the setup of your server room, and Nova to monitor the status of your control nodes. Because Director comes with pre-defined scenarios optimized from our 20 years of Linux know-how and best practices, you will also learn how OpenStack is configured out of the box for scalability, performance, and resilience.
Why do kids in primary school learn multiplication tables when we all have calculators? Why should you learn how to use OpenStack in order to install OpenStack? Mastering these pieces is a good thing for your IT department and your own career, because they provide a solid foundation for your organization’s path to a Software Defined Data Center. Eventually, you’ll have all your Data Center configuration in text files stored on a Git repository or on a USB drive that you can easily replicate within another data center.
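That “data center as text files” idea is quite literal: node counts and other deployment parameters live in small Heat environment files that can be committed to Git and replayed elsewhere. A minimal sketch — the file name and the counts below are hypothetical values, not a recommended layout:

```shell
# Keep deployment parameters as plain, versionable text (sketch).
mkdir -p templates
cat > templates/node-counts.yaml <<'EOF'
parameter_defaults:
  ControllerCount: 3    # hypothetical HA control plane
  ComputeCount: 2       # hypothetical starting compute capacity
EOF

# The deployment definition is now diffable, reviewable, and replicable;
# count the parameters we just recorded:
grep -c 'Count' templates/node-counts.yaml
```

Scaling the cloud then becomes a one-line change to a text file plus a redeploy, and `git log` becomes the change history of your data center.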
In a series of upcoming blog posts, we’ll explain how Director has been built to accommodate the business requirements and the challenges of deploying OpenStack and managing it over the long term. If you are really impatient, remember that we publish all of our documentation on the Red Hat OpenStack Platform documentation portal (link to version 8).
Continue reading “Introduction to Red Hat OpenStack Platform Director”
Written by Jiri Benc, Senior Software Engineer, Networking Services, Linux kernel, and Open vSwitch
By introducing a connection tracking feature in Open vSwitch, made possible by the latest Linux kernel, we greatly simplified the maze of virtual network interfaces on OpenStack compute nodes and improved networking performance. This feature will appear soon in Red Hat OpenStack Platform.
It goes without saying that in the modern world we need firewalling to protect machines from hostile environments. Any non-trivial firewalling requires keeping track of the connections to and from the machine; this is called “stateful firewalling”. Indeed, even such a basic rule as “don’t allow machines on the Internet to connect to this machine, while allowing the machine itself to connect to servers on the Internet” requires a stateful firewall. This also applies to virtual machines, and obviously any serious cloud platform needs such protection.
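With connection tracking in Open vSwitch, that “allow outbound, block unsolicited inbound” policy can be expressed directly in OpenFlow rules via the `ct()` action and `ct_state` matching, instead of detouring packets through an extra Linux bridge and iptables. A simplified sketch, modeled on the upstream Open vSwitch conntrack examples — the bridge name `br0` and VM port number `1` are placeholders:

```
# Send untracked IP traffic through the connection tracker first
ovs-ofctl add-flow br0 "table=0,priority=100,ip,ct_state=-trk,actions=ct(table=1)"

# Allow (and commit) new connections originating from the VM on port 1
ovs-ofctl add-flow br0 "table=1,priority=100,ip,ct_state=+trk+new,in_port=1,actions=ct(commit),normal"

# Allow packets belonging to already-established connections, both ways
ovs-ofctl add-flow br0 "table=1,priority=100,ip,ct_state=+trk+est,actions=normal"

# Everything else (unsolicited inbound) is dropped
ovs-ofctl add-flow br0 "table=1,priority=1,ip,actions=drop"
```

The reply traffic of a committed connection matches `+est` automatically, which is precisely the stateful behavior that previously required the kernel’s iptables machinery on a separate interface.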
Continue reading “How connection tracking in Open vSwitch helps OpenStack performance”