The Path to Hybrid and Cloud-Native Applications

In the last six to ten years, global cloud-based companies such as Facebook and Uber have disrupted the world by reinventing software agility and business time to market.
Traditional companies are challenged to rethink their whole application architecture strategy and find new ways to compete and grow or just die slowly.

We have seen this in many industries, with PayPal in banking and Uber and Tesla in the private and public transportation industries.
While visiting and brainstorming with enterprise customers, I found that most of them take one of two approaches to next-generation (or third-generation) applications.

The first approach, the one most influenced by PaaS and other forms of cloud-native tooling, is the “green field” approach.
It involves creating a new third-generation app with PaaS tools such as Pivotal Cloud Foundry or Red Hat OpenShift, or with container orchestration engines such as Kubernetes, Swarm, or Mesos.

The second approach is to modernize current applications.
With this approach, most customers take part of the application (for example, the front end), rebuild it as a 12-factor containerized app, and then gradually move the remaining components to be cloud-native; we call this “the hybrid app” approach.
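As a rough illustration, containerizing just the front end in this way might start with a minimal multi-stage Dockerfile like the sketch below. The base images, build commands, and port are hypothetical assumptions (a Node.js front end served by nginx), not details from any specific project:

```dockerfile
# Build stage: compile the front-end assets (Node.js toolchain is an assumption)
FROM node:18-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Runtime stage: serve the static build with nginx
FROM nginx:alpine
COPY --from=build /app/dist /usr/share/nginx/html
EXPOSE 80
```

In the 12-factor spirit, configuration would come from environment variables and logs would go to stdout, so the same image runs unchanged in dev, test, and production.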

While looking at this from an application perspective seems obvious, I challenge these customers by asking a question: where are you going to run these applications?

A lot of customers would answer: It’s a native cloud app, so it should run in the cloud.
Yes, that’s the main idea, but what kind of cloud: private, public, or hybrid?
What about the organization’s policy for computing, network, security, capacity management, etc.?

It’s OK for the initial approach to cloud-native apps to be “I don’t care about the infrastructure,” but that works mostly when you have one big app that needs to scale globally (Facebook, Uber, PayPal).
When dealing with enterprise applications, it’s a bit different.
In most enterprise data centers, the app policies for computing, capacity, network, and security differ from app to app and need to be implemented accordingly.
While developing apps that are agile and scalable, we must also adjust the infrastructure strategy to match these new characteristics.

Let’s think about the second approach first, as it is the most common and the easiest for most companies to implement.
So we have a monolithic app with many components, all of which need to be compiled and managed together through the development cycle (dev, test, QA, staging, production).
To move from stage to stage, we need to wait for all the code to be ready, which leads to long build cycles of weeks or months.

The application architect decides to take the application’s front end and modernize it to be cloud-native, so that more code can change in a short period of time without changing the entire app.
The developers build a container host system (a couple of Linux machines running Docker hosts) and ask the operations team to manage them and keep them available and maintained according to the whole app policy.
The operations team then treats these Linux machines as regular VMs and puts them under the standard backup/HA/monitoring policies.

Here is problem number one: in most organizations, the current CMP (cloud management platform) tools that manage the private cloud are not aware of the container architecture inside the operating system and treat these OSes as standard Linux machines.
The responsibility for maintaining the container hosts now falls to the developers, who consequently become part of the operations team.

Problem number two is that even the developers don’t have the tools to bring this containerized application under the organization’s policies without writing a lot of scripts and workflows and implementing third-party open-source tools.

From a business perspective, this puts the whole application strategy at significant risk and sometimes even kills the project before it starts (regulations, security, etc.).

Now let’s deal with the first approach of building a “greenfield” environment for new cloud-native apps.
It seems that this method can solve all the issues mentioned above.
We can do it directly in the public cloud or implement PaaS or container orchestration engines to manage the integrity, performance, and scalability of our applications.
That is correct for most public cloud initiatives; we can consume these services directly from public cloud providers like Google, Amazon, and even IBM.

But what if we want a multi-cloud or hybrid-cloud strategy?
When we start to use one provider’s set of tools, it’s very hard to move somewhere else, and the cost/value can rise to a point where the business will consider the whole approach too expensive.
Something I have heard from many customers is, “What if we could bring a public cloud experience to our premises?”
What it actually means is this: we want an API-First, highly scalable, cloud-native, containerized infrastructure on which we can build and run our third-platform apps, just as we would in the public cloud.

When we are speaking of API-First, OpenStack becomes a topic for discussion.
That’s right, OpenStack is an API-First infrastructure as a service platform, but does it fulfill the cloud-native story?

Yes, you could run PCF or Kubernetes on OpenStack, but doesn’t it then become just another IaaS management system like all the others?
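To make the gap concrete, here is a sketch of how application-level policy (replica count, resource requests and limits) is declared in a Kubernetes Deployment manifest; the app name, image, and values are hypothetical placeholders. The point is that these policies live right next to the app, while the underlying IaaS layer, OpenStack or otherwise, knows nothing about them:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend                 # hypothetical app name
spec:
  replicas: 3                    # availability policy declared with the app
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
        - name: frontend
          image: registry.example.com/frontend:1.0   # placeholder image
          resources:
            requests:
              cpu: "250m"
              memory: "256Mi"
            limits:              # capacity policy enforced per container
              cpu: "500m"
              memory: "512Mi"
```

Bridging this app-level view with the organization’s infrastructure policies (network, security, backup) is exactly what a plain IaaS layer does not do on its own.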

The answer is that we need a system that manages application integrity close to the infrastructure, covering security, performance, and capacity management, just as we do in public clouds.

Even then, we really don’t care what’s going on in the infrastructure layer, because it acts as an extension of the application, dynamically growing and shrinking with it and applying the same compute, network, security, and storage policies across the stack, from the app down to the physical server.

For more information on running Cloud Native Apps in production visit:


Aviv Waiss is a Professional Consultant in Virtualization, Cloud Native Apps, and Cloud Management. Covering the MEDI region as a Principal Systems Engineer for the last eight years at VMware, Aviv leads technology and business opportunities with hands-on experience in many areas, such as sales, systems engineering, marketing, and partnerships. With deep customer relationships and management skills built over more than 15 years, Aviv has architected global consolidation and cloud projects.