Integrating classic IT with cloud-native

This is the fifth and final in a series of posts that delves deeper into the questions that IDC’s Mary Johnston Turner and Gary Chen considered in a recent IDC Analyst Connection. The fifth question asked:

What types of technologies are available to facilitate the integration of multiple generations of infrastructure and applications as hybrid cloud-native and conventional architectures evolve?

Mary and Gary write that “We expect that as these next-generation environments evolve, conventional and cloud-native infrastructure and development platforms will extend support for each other. As an example, OpenStack was built as a next-generation cloud-native solution, but it is now adding support for some enterprise features.”

This is one aspect of integration. Today, it’s useful to draw a distinction between conventional and cloud-native infrastructures, in part because they often use different technologies and those technologies are changing at different rates. However, as projects and products that are important for many enterprise cloud-native deployments, such as OpenStack, mature, they’re starting to adopt features associated with enterprise virtualization and enterprise management.

Why cloud-native depends on modernization

This is the fourth in a series of posts that delves deeper into the questions that IDC’s Mary Johnston Turner and Gary Chen considered in a recent IDC Analyst Connection. The fourth question asked:

What about existing conventional applications and infrastructure? Is it worth the time and effort to continue to modernize and upgrade conventional systems?

In an earlier post in this series, I discussed how both the economics and disruption associated with the wholesale replacement of existing IT systems makes it infeasible under most circumstances. In their answer to this question, Mary and Gary highlight the need for these existing systems to work together with new applications. As they put it: “Much of the success of cloud-native applications will depend on how well conventional systems can integrate with modern applications and support the integration and performance requirements of cloud-native developers.”

How cloud-native needs cultural change

This is the third in a series of posts that delves deeper into the questions that IDC’s Mary Johnston Turner and Gary Chen considered in a recent IDC Analyst Connection. The third question asked:

How will IT management skills, tools, and processes need to change [with the introduction of cloud-native architectures]?

Mary and Gary note that the move to hybrid architectures “switches the IT operations team’s priorities from maintaining specific components to ensuring the delivery of end-to-end services measured in terms of service-level agreements (SLAs).” They also note that there’s a huge cultural element. For example, “Line-of-business stakeholders will have to partner with IT operations and development staff, either individually or as part of collaborative DevOps groups, to ensure that services are implemented as expected and that test-and-release cycles are well integrated.”

Evolving IT architectures: It can be hard

This is the second in a series of posts that delves deeper into the questions that IDC’s Mary Johnston Turner and Gary Chen considered in a recent IDC Analyst Connection. The second question asked:

What are the typical challenges that organizations need to address as part of this evolution [to IT that at least includes a strong cloud-native component]?

In their response, Mary and Gary note that “having to integrate with conventional systems can slow down the entire process and work against the agile, continuous integration/continuous delivery methodologies these DevOps teams often employ.” At the same time, this integration can’t be dispensed with; they add that “IDC expects cloud-native and conventional applications to become more connected and interdependent over time.” (Check out the recent webinar discussing this and other topics: Next-generation IT strategies: Mixing conventional and cloud-native infrastructure–based on a recent IDC survey.)

So, where does that leave us? Is traditional IT destined to just be a boat anchor when it’s integrated with cloud-native IT? (And make no mistake, integration is an inevitability.)

Variations of this question also come up in critiques of the bimodal or two-speed IT idea.

Does cloud-native have to mean all-in?

This is the first in a series of posts that delves deeper into the questions that IDC’s Mary Johnston Turner and Gary Chen considered in a recent IDC Analyst Connection. The first question asked:

Cloud-native application architectures promise improved business agility and the ability to innovate more rapidly than ever before. However, many existing conventional applications will provide important business value for many years. Does an organization have to commit 100% to one architecture versus another to realize true business benefits?

As Mary and Gary write, there are indeed “cost and performance benefits of greenfield, extreme scale cloud-native applications running on highly standardized, automated infrastructure.” However, as they also note, bringing in the bulldozers to replace all existing infrastructure and applications isn’t an option for very many businesses. There’s too much investment and, even if it were an option financially, the disruption involved in wholesale replacement would likely offset any efficiency gains.

OpenStack Summit Tokyo – Final Summary

As I flew home from OpenStack Summit Tokyo last week, I had plenty of time to reflect on what proved to be a truly special event. OpenStack gains more and more traction and maturity with each community release and corresponding Summit, and the 11th semi-annual OpenStack Summit certainly did not disappoint. With more than 5,000 attendees, it was the largest OpenStack Summit ever held outside of North America, and there were so many high-quality keynotes, sessions, and industry announcements that I thought it made sense to put together a final trip overview detailing all of the noteworthy news, Red Hat press releases, and more.

As always, the keynotes were much-anticipated and informative. The day 1 keynotes started with Jonathan Bryce, Executive Director of the OpenStack Foundation, who provided a welcome address and an overview of what attendees could expect over the next three days. He was then followed on stage by technologists from various organizations focusing on real-world use cases, including Egle Sigler from Rackspace and Takuya Ito from Yahoo Japan, who shared his company’s experience and use case with OpenStack.

OpenStack Summit Tokyo – Day 3

Hello again from Tokyo, Japan where the third and final day of OpenStack Summit has come to a close. As with the previous days of the event, there was plenty of news, interesting sessions, great discussions on the show floor, and more. All would likely agree that the 11th OpenStack Summit was a rousing overall success!

Like day 1 and day 2 of the event, Red Hat led or co-presented several sessions. Starting us off today, Erwan Gallen, Red Hat’s OpenStack Technical Architect, participated in a panel and helped deliver an Ambassador community report. Among other things, the group of OpenStack ambassadors presented several improvements made over the past six months, since the last OpenStack community release (Kilo), and shared many of their overall feelings about and experiences with the community.

Mark McLoughlin, Red Hat’s OpenStack Technical Director, then gave an interesting talk entitled The Life and Times of an OpenStack Virtual Machine. Going well beyond a simple, abstract narrative of clicking Launch Instance in OpenStack’s dashboard, Mark detailed the technologies working behind the scenes to make that happen. By the end of the session he had fully explained how OpenStack provides a running VM that a user can access via SSH.
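For readers who want to follow along from the API side, here is a minimal sketch of that same workflow using the openstacksdk Python library: boot an instance, wait for Nova to report it active, and then connect over SSH. The cloud name, image, flavor, network, and key pair below are hypothetical placeholders rather than anything from Mark’s talk.

# A minimal sketch of "launch an instance, then SSH in" via openstacksdk.
# The cloud, image, flavor, network, and key names are hypothetical.
import openstack

# Credentials come from clouds.yaml or OS_* environment variables.
conn = openstack.connect(cloud="mycloud")

image = conn.compute.find_image("cirros")        # guest image to boot
flavor = conn.compute.find_flavor("m1.small")    # CPU/RAM/disk sizing
network = conn.network.find_network("private")   # tenant network for the port

server = conn.compute.create_server(
    name="demo-vm",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
    key_name="demo-key",                         # pre-uploaded SSH key pair
)

# Everything Mark described (scheduling, image handling, port wiring,
# hypervisor work) happens while we wait for the instance to go ACTIVE.
server = conn.compute.wait_for_server(server)
print("Instance addresses:", server.addresses)

From there, an SSH client pointed at one of the reported addresses, plus a floating IP and a security group rule allowing port 22 where needed, completes the picture the talk walked through.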

A Container Stack for OpenStack (Part 2 of 2)

In Part 1 of this blog series, I talked about how Red Hat has been working with the open source community to build a new container stack and our commitment to bring that to OpenStack. In Part 2, I will discuss the additional capabilities Red Hat is working on to build an enterprise container infrastructure, and how this forms the foundation of our containerized application platform in OpenShift.

As we discussed in the previous post, Linux, Docker, and Kubernetes form the core of Red Hat’s enterprise container infrastructure. This LDK stack integrates with OpenStack’s compute, storage, and networking services to provide an infrastructure platform for running containers (see the sketch below). In addition to these areas, there are others that we consider critical for enterprises that are building a container-based infrastructure. A few of these include:
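As a hedged illustration of the LDK-on-OpenStack integration described above, the sketch below uses the Kubernetes Python client to run a Docker container whose persistent data lives on an OpenStack Cinder volume, via Kubernetes’ in-tree Cinder volume plugin. The pod name, image, and volume ID are hypothetical placeholders, not values from this series.

# A hedged sketch: a Docker container scheduled by Kubernetes with its
# data on an OpenStack Cinder volume. Names and the volume ID are
# hypothetical placeholders.
from kubernetes import client, config

config.load_kube_config()            # use the current kubeconfig context
v1 = client.CoreV1Api()

pod_manifest = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "ldk-demo"},
    "spec": {
        "containers": [{
            "name": "web",
            "image": "nginx",        # any Docker image
            "volumeMounts": [{"name": "data",
                              "mountPath": "/usr/share/nginx/html"}],
        }],
        "volumes": [{
            "name": "data",
            # Block storage provided by OpenStack Cinder (in-tree plugin).
            "cinder": {"volumeID": "11111111-2222-3333-4444-555555555555",
                       "fsType": "ext4"},
        }],
    },
}

v1.create_namespaced_pod(namespace="default", body=pod_manifest)

Compute and networking hook in the same way: the Kubernetes nodes themselves can be Nova instances, and projects such as Kuryr aim to let containers share Neutron networks with those instances.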

OpenStack Summit Tokyo – Day 2

Hello again from Tokyo, Japan where the second day of OpenStack Summit has come to a close with plenty of news, interesting sessions, great discussion on the show floor, and more.

Day two started this morning with a keynote session from Mark Collier, Chief Operating Officer of the OpenStack Foundation. Mark shared several statistics and details from the newly launched Liberty release, including the fact that Neutron had overtaken Nova as the OpenStack project with the most activity. He was followed by Kyle Mestery, the project team lead (PTL) for Neutron, who provided further details and also gave an update on Project Kuryr, a service that brings container networking to Neutron. Toshio Nishiyama, SVP of NTT Resonant, then explained how NTT uses OpenStack to power its popular Goo search engine and web portal, the third largest in Japan. Additional keynotes were delivered by Scott Crenshaw and Adrian Otto of Rackspace, Kang-Wong Lee from SK Telecom, Kentaro Sasaki and Neal Sato from Rakuten, Makato Hasegawa from CyberAgent, Inc., and Angel Diaz from IBM and Jesse Proudman from Blue Box, an IBM company, who teamed up to deliver the final keynote of the morning.

Proven OpenStack solutions. Simple OpenStack deployment. Powerful results.

According to a Cisco-sponsored IDC global survey of 3,643 enterprise executives responsible for IT decisions, 69% of respondents indicated that their organizations have a cloud adoption strategy in place. Of these organizations, 65% say OpenStack is an important part of their cloud strategy, and they have higher expectations for the business improvements associated with cloud adoption.1

Organizations are looking to OpenStack to enable DevOps, add flexibility to their infrastructure, improve cost controls, avoid vendor lock-in, and optimize hybrid private/public cloud deployments. One of the ways Red Hat is helping customers adopt OpenStack and achieve these goals is through our collaboration with other IT industry leaders, including Cisco. Together, Red Hat and Cisco are helping customers implement Fast IT. Many of our customers are interested in what OpenStack has to offer, and they are looking to Cisco and Red Hat to reduce complexity, get up and running faster, and build a foundation that enables scaling and high availability (HA).