The path to Hybrid and Cloud Native Applications

In the last six to ten years, global cloud-based companies such as Facebook and Uber have disrupted the world by reinventing software agility and business time to market.
Traditional companies are challenged to rethink their whole application architecture strategy and find new ways to compete and grow, or risk a slow decline.

We have seen this in many industries: PayPal in banking, and Uber and Tesla in private and public transportation.
While visiting and brainstorming with enterprise customers, I have found that most of them take one of two approaches to next-generation (third-platform) applications.

The first approach, the one most influenced by PaaS and other forms of cloud-native tooling, is to go “greenfield.”
It involves creating a new third-generation app with PaaS tools such as Pivotal Cloud Foundry or Red Hat OpenShift, or with container orchestration engines such as Kubernetes, Swarm, or Mesosphere.

The second approach is to modernize current applications.
With this approach, most customers take part of the application (typically the front end), rebuild it as a 12-factor containerized app, and then slowly move the remaining components to be cloud-native; we call this “the hybrid app” approach.
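As a sketch of what that containerized front end might look like, here is a minimal Dockerfile. The Node.js base image, port, and file names are illustrative assumptions, not taken from any specific customer application; the point is the 12-factor practice of reading configuration from the environment rather than baking it into the image:

```dockerfile
# Hypothetical front-end image: base image, port, and entry point
# are illustrative assumptions.
FROM node:18-alpine
WORKDIR /app

# Install only production dependencies
COPY package*.json ./
RUN npm ci --omit=dev

COPY . .

# 12-factor: configuration comes from the environment, not the image;
# the monolith's backend URL is injected at run time.
ENV BACKEND_URL=""
EXPOSE 8080
CMD ["node", "server.js"]
```

The rest of the monolith stays where it is; the container simply points at it through `BACKEND_URL`, which is what makes this a hybrid app rather than a full rewrite.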

Looking at this purely from an application perspective seems obvious, so I challenge these customers with a question: where are you going to run these applications?

A lot of customers would answer: it’s a cloud-native app, so it should run in the cloud.
Yes, that’s the main idea, but what kind of cloud: private, public, or hybrid?
What about the organization’s policy for computing, network, security, capacity management, etc.?

It’s fine for the initial approach to cloud-native apps to be “I don’t care about the infrastructure,” but that works mostly when you have one big app that needs to scale globally (Facebook, Uber, PayPal).
When dealing with enterprise applications, it’s a bit different.
In most enterprise data centers, the app policies for computing, capacity, network, and security differ from app to app and need to be implemented accordingly.
As we develop apps that are agile and scalable, we must adjust the infrastructure strategy to support these new characteristics.

Let’s think about the second approach first, as it is the most common and the easiest for most companies to implement.
So we have a monolithic app with many components, all of which need to be compiled and managed together through the development cycle (dev, test, QA, staging, production).
To move from stage to stage, we need to wait for all the code to be ready, which leads to build cycles of weeks or months.

The application architect decides to take the application’s front end and modernize it to be cloud-native so that more code can change in a short period of time without changing the entire app.
The developers build a container host system (a couple of Linux machines running Docker hosts) and ask the operations team to manage them and keep them available and maintained according to the overall app policy.
The operations team then treats these Linux machines as ordinary VMs and puts them under the standard backup/HA/monitoring policies.

Here is problem number one: in most organizations, the current CMP tools that manage the private cloud are not aware of the container architecture inside the operating system and treat these OSes as standard Linux machines.
The responsibility for maintaining the container host now falls to the developers, who consequently become part of the operations team.

Problem number two is that even the developers don’t have the tools to bring this containerized application under the organization’s policy without writing a lot of scripts and workflows and adopting third-party open-source tools.

From a business perspective, this puts the whole application strategy at significant risk and sometimes even kills the project before it starts (regulations, security, and so on).

Now let’s deal with the first approach of building a “greenfield” environment for new cloud-native apps.
It seems that this method can solve all the issues mentioned above.
We can do it directly in the public cloud or implement PaaS or container orchestration engines to manage the integrity, performance, and scalability of our applications.
That is correct for most public cloud initiatives; we can consume these services directly from public cloud providers like Google, Amazon, and even IBM.

But what if we want a multi-cloud or hybrid-cloud strategy?
When we start to use one provider’s set of tools, it’s very hard to move elsewhere, and the cost/value ratio can rise to a point where the business considers the whole approach too expensive.
Something I have heard from many customers is, “What if we could bring the public cloud experience to our premises?”
What that actually means: we want an API-first, highly scalable, cloud-native, containerized infrastructure on which we can build and run our third-platform apps, just as we would in the public cloud.

When we speak of API-first, OpenStack becomes a topic for discussion.
That’s right, OpenStack is an API-first infrastructure-as-a-service platform, but does it fulfill the cloud-native story?

Yes, you could run PCF or Kubernetes on OpenStack, but doesn’t it then become just another IaaS management system like all the others?

The answer is that we need a system that manages application integrity very close to the infrastructure, covering security, performance, and capacity management, just as we do in public clouds.

Even then, we really don’t care what’s going on in the infrastructure layer, because it acts as an extension of the application, dynamically growing and shrinking with it and implementing the same compute, network, security, and storage policies across the whole stack, from the app down to the physical server.

For more information on running Cloud Native Apps in production visit:


Aviv Waiss is a Professional Consultant in Virtualization, Cloud Native Apps, and Cloud Management. Covering the MEDI region as a Principal Systems Engineer at VMware for the last eight years, Aviv leads technology and business opportunities with hands-on experience in many areas, including sales, systems engineering, marketing, and partnerships. With deep customer relationships and management skills built over more than 15 years, Aviv has architected global consolidation and cloud projects.

Cloud Service Factory: Fast and Agile Delivery of Cloud Services

What makes a successful cloud project?

Is it a new and advanced architecture, a superb orchestration engine, a cloud portal, or a management platform that ties all cloud elements together? All the factors mentioned above are incredibly important, but the thing that makes a cloud successful is the ability to provide valuable services quickly and manage them throughout their lifecycle.

How to keep up with consumers’ needs

In today’s world, developers want the fastest and most agile platform, one that enables them to write, test, and push code as easily and quickly as possible. Enterprise developers are some of the most frequent users of today’s enterprise cloud implementations. Cloud administrators chose developers as the first citizens of the cloud because they are eager to be fast and agile, but here’s the dilemma: can IT deliver services as quickly and with as much agility as the developers want?

Implementing DevOps Culture within the IT Department

It seems that for the IT department to be as fast and agile as development, they need to acquire the same development methodologies and culture. In other words: they need to adopt a method of rapid Build, Test, and Deploy for cloud services. To achieve this, we first need some functionality from our cloud solution:

  • Easy-to-build blueprints that represent all cloud elements, including computing, network, storage, and orchestration.
  • The ability to manage blueprints as code.
  • A framework to manage the blueprint development lifecycle (Dev, Test, QA, Prod).

VMware vRealize Automation 7 converged blueprints present a new and unified model for automation.
While vRealize Automation 7™ already has specialized models for infrastructure, middleware, applications, and anything-as-a-service, the converged blueprints serve as a component catalog of reusable building blocks for building and updating services.
vRealize Automation 7™ expresses blueprints declaratively (in YAML), adds the ability to write actions in code (i.e., scripts), and provides a graphical canvas as an alternate blueprint creation mechanism for power users.
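As a rough illustration of the declarative style, a converged blueprint might look something like the following. This is a sketch only: the key names and structure are simplified assumptions and do not reproduce the exact vRealize Automation 7 blueprint schema.

```yaml
# Illustrative blueprint sketch; not the exact vRA 7 schema.
name: web-frontend
version: 1.0.0
components:
  web-server:
    type: machine          # infrastructure building block
    cpu: 2
    memory: 4096           # MB
    image: ubuntu-16.04
    networks:
      - app-network        # network building block, reusable across blueprints
  nginx:
    type: software         # software building block layered on the machine
    dependsOn:
      - web-server
    install: |
      #!/bin/bash
      apt-get update && apt-get install -y nginx
```

Because the blueprint is plain text, it can be committed to source control and diffed, reviewed, and versioned like any other code, which is exactly what managing blueprints as code requires.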

See here for an example of how to use blueprint code in vRealize Automation 7™:

Let’s create the Service Factory.

To be as agile as its developers, IT needs to build, test, and deploy cloud services quickly and reliably. IT needs to introduce new services, and update existing services with new features and functionality, to make its developers’ operation optimized, fast, and smooth. IT should use a 360° service definition approach that includes not only development stakeholders directly in the definition of the service, to identify the features and functionality they need, but other IT stakeholders as well, to address, for example, security, support, pricing, and operations. Doing so allows functions such as service operations to connect with service development, meaning that we deploy capabilities to monitor and manage provisioning, availability, performance, and capacity with each service.

With the vRealize™ Code Stream™ Management Pack for IT DevOps, stored and versioned content can be grouped and pushed to multiple environments in one request. This functionality gives IT the ability to implement a framework to develop, test, and release services quickly and easily. IT can use a source control repository with versioning, such as Git or TFS, to manage blueprint changes, then use Jenkins to run end-to-end tests of the blueprints before they are released and use the test results to decide whether a service is ready for release or needs improvement.
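The Git-plus-Jenkins flow described above could be wired up with a declarative Jenkins pipeline along these lines. Everything here, including the repository URL, stage names, and helper scripts, is a hypothetical sketch rather than part of Code Stream or any VMware product:

```groovy
// Hypothetical pipeline: repo URL, stage names, and helper scripts
// are illustrative assumptions.
pipeline {
    agent any
    stages {
        stage('Checkout blueprints') {
            steps {
                git url: 'https://git.example.com/it/cloud-blueprints.git'
            }
        }
        stage('Deploy to test') {
            steps {
                // Provision the blueprint into an isolated test environment
                sh './ci/deploy-blueprint.sh test'
            }
        }
        stage('End-to-end tests') {
            steps {
                // Verify the provisioned service actually works
                sh './ci/verify-service.sh test'
            }
        }
        stage('Promote to release') {
            when { branch 'master' }
            steps {
                // Only blueprints that pass the tests reach the catalog
                sh './ci/promote-blueprint.sh prod'
            }
        }
    }
}
```

The test results then gate promotion, so a blueprint reaches the production catalog only after it has been provisioned and verified end to end.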

Building a framework to develop services is not enough; we also need to define roles to manage the development process. Here are some examples of roles, responsibilities, and functions:

  • Service Backlog Manager: maintains a prioritized list of service backlog items; the service backlog is prioritized by the service owner and includes functional, non-functional, and technical-team-generated requirements.
  • Service Owner: responsible for maximizing the value of a service and for managing the service backlog (i.e., determining what we need to do).
  • Service Release and Deployment Manager: plans, schedules, and controls the build, test, and deployment of service releases, delivering new functionality while protecting the integrity of existing services.
  • Scrum Team: a self-organizing, cross-functional team, optimally composed of seven ± two people, that uses the Scrum framework to deliver services iteratively and incrementally, maximizing opportunities for feedback; the Scrum team consists of a product owner, the development team, and a Scrum master.
  • Scrum Master: an individual who provides process leadership for Scrum (i.e., ensures Scrum practices are understood and followed) and who supports the Scrum team by removing impediments.



There is more functionality in a cloud management platform, such as Service Design, Service Catalog Management, Service Improvement Plans, and Service Knowledge Management Systems. These practices, together with the tools and services mentioned, will give a cloud IT organization the ability to evolve into the “Ops” part of an organizational DevOps culture and to build a truly fast and agile “Cloud Service Factory” that gets the most out of the SDDC cloud infrastructure.




Meet The Developers – Cloud Services Release Management


So we finished another successful Meet The Developers event!

This time the focus was on Cloud Services Release Management.
We had more than 30 enterprise customers collaborating and sharing information.
In the first session, I presented a CMBU introduction, followed by a CMP roadmap.

But the main session on Cloud Services Release Manager (my name for project Houdini) was a blast.
Steve Morris from the solutions engineering team gave an overview and an impressive demo of the product and answered the many questions raised by the customers.

We had lots of great discussions with customers; one of the most interesting things we found out is that most customers develop their cloud services directly in the production environment!

I think that after this workshop, all of them understood the importance of having dev, test, and prod environments and a set of processes and tools to manage the development cycle.

Here are some pictures I captured from the event:

Thank you, Steve, Tom, and Darren for helping me put this together!

See you all in the next session…


Building infrastructure for 3rd platform apps – top-down or bottom-up approach?


Enterprises worldwide are struggling these days with a big question: how to move forward to the next generation of 3rd platform applications.
In some organizations the transition starts from a business need, and in others, the development teams are pushing for the change.

While most of the development departments within today’s organizations are already starting to adopt the new 3rd platform development tools, the IT departments find themselves in a strange situation.

The developers are starting to make infrastructure decisions and are sketching a new IT horizon.

The apps determine what the infrastructure will look like, effectively taking the “we don’t care” approach: asking for big “white boxes” to carry their application loads while saying, “We will take care of everything.”

This Facebook, Google, and Amazon approach works for large organizations that develop mass-scale applications, but it mostly does not fit the typical enterprise with limited development and IT teams.

One of the most common approaches for today’s 3rd platform apps is using software containers to build a microservice application.
While software containers are an excellent way to package and ship applications without relying on complex infrastructure, most container management systems focus on placement, a shared API, and process management, and still depend on a general-purpose OS to run the container workloads.

This general-purpose OS, usually known as the “container host,” is where all containers run as separate processes.

Some companies have created stripped-down OSes that provide only the basic functionality needed to run containers; among these solutions are VMware’s Photon OS, CoreOS (Tectonic), Project Atomic (sponsored by Red Hat), Ubuntu Core, and Microsoft’s Nano Server.

So, going back to the traditional enterprise dilemma, there are two ways of deploying containers in an organization.

“Top-down” approach: the most commonly used today and developer-centric. It basically gives a container API to the developers and sprawls container hosts across physical or virtual servers, leaving the developers to maintain the container host OS.



“Bottom-up” approach: a newer approach that distributes the responsibility and sponsorship between the developers and the IT department, empowering the developers to architect the app and IT to build a dedicated container infrastructure platform aligned with company policy, sharing its API back to the developers.



There is no right or wrong here!

The top-down approach mostly fits large corporations that need to build mass-scale apps serving billions of users (Facebook, Google, Amazon) and that usually create their own container host flavor and the tools to deploy and maintain it.

The bottom-up approach fits organizations that need to adopt containers as part of a wider team strategy while still maintaining company IT policy.
These companies usually rely on a standard solution that has a known architecture and full support from the vendor.

Considering the virtualization revolution that created a new “data center operating system” to minimize dependency on the general-purpose OS, we can use the same architecture to help enterprise organizations transition from the 2nd to the 3rd platform.


The first step will be to run containers side by side with 2nd-generation applications.
Most organizations will develop their mobile and internet apps using containers while continuing to run their primary and back-end applications on 2nd-platform solutions.
It is therefore crucial for these organizations to have a platform that can host 2nd-platform apps (monolithic) side by side with 3rd-platform apps (microservices).


VMware’s vSphere Integrated Containers fills this gap, allowing these two technologies to work together on vSphere, currently the most widely adopted data center operating system.



As container technology and microservices architecture adoption increase in the organization, the need arises for a native but trusted platform to run containers.

With this in mind, the bottom-up approach architecture will be the most suitable for the enterprise to adopt.

VMware’s Photon Platform is the first enterprise-ready solution based on an industry-proven micro-visor and controller, utilizing all the experience and knowledge VMware has gathered over the last 15 years running enterprise production loads at scale.


A micro-visor (short for micro-hypervisor) works with the virtualization technology (VT) features built into Intel, AMD, and other CPUs to create hardware-isolated micro virtual machines (micro-VMs) for each task performed by a user that involves data originating from an unknown source.
The micro-VMs created by the micro-visor provide a secure environment, isolating user tasks from other tasks, applications, and other systems on the network. Tasks, in this case, entail the computation that takes place within an application as well as within the system kernel, so the micro-visor ensures security at both the application and operating system kernel levels.

Utilizing VMware’s CMP (Cloud Management Platform), NSX, and vSAN technologies will ensure a production-ready container infrastructure platform that can be managed by IT with proven, familiar tools while giving developers the best API access to industry-standard container development systems.

To better understand how this solution helps an organization’s IT evolve, watch my Cloud Native Apps Demystified presentation.



Aviv Waiss is a Principal Systems Engineer at VMware.
Cloud Management Platform and Cloud Native Apps Specialist.
Member of the CTO Ambassador Program