Building infrastructure for the 3rd platform apps – top-down or bottom-up approach?


Enterprises around the world are struggling these days with a big question: how to move forward to the next generation of 3rd platform applications.
In some organizations the transition starts from a business need; in others, the development teams are pushing for the change.

While most development departments in today’s organizations are already adopting the new 3rd platform development tools, the IT departments find themselves in a strange situation.

The developers are starting to make infrastructure decisions and are sketching a new IT horizon.

The apps determine how the infrastructure will look, and the developers are actually taking the “we don’t care” approach, asking for big “white boxes” to carry their application loads while saying: “We will take care of everything.”

This Facebook, Google and Amazon approach works for large organizations that develop mass-scale applications, but it mostly does not fit the typical enterprise, which has limited development and IT teams.

One of the most common approaches for today’s 3rd platform apps is to use software containers to build a microservices application.
While software containers are an excellent way to package and ship applications without needing a complex infrastructure to rely on, most container management systems focus on placement, shared APIs, and process management, and still depend on a general-purpose O/S to run the container loads.

This general-purpose O/S, usually known as the “Container Host,” is where all containers run as separate processes.

Some companies have created a stripped-down O/S that has only the basic functionality needed to run containers; among these solutions are VMware’s Photon OS, CoreOS (Tectonic), Project Atomic (sponsored by Red Hat), Ubuntu Core and Microsoft’s Nano Server.

So, going back to the traditional enterprise dilemma, there are two ways of deploying containers in an organization.

“Top Down” approach: The most commonly used today, and developer-centric. It basically gives a container API to the developers and sprawls container hosts across physical or virtual servers, leaving the developers to maintain the Container Host O/S.


 

“Bottom Up” approach: A new approach that distributes responsibility and sponsorship between the developers and the IT department, empowering the developers to architect the app and IT to build a dedicated container infrastructure platform aligned with company policy, sharing its API back to the developers.


 

There is no right or wrong here!

The top-down approach mostly fits large corporations that need to build a mass-scale app to serve billions of users (Facebook, Google, Amazon); they usually create their own container host flavor and the tools to deploy and maintain it.

The bottom-up approach fits organizations that need to adopt containers as part of a wider team strategy while still maintaining company IT policy.
These companies usually rely on a standard solution that has a known architecture and full support from the vendor.

Taking into consideration the virtualization revolution, which created a new “Data Center Operating System” to minimize the dependency on the general-purpose O/S, we can use the same architecture to help enterprise organizations transition from the 2nd to the 3rd platform.


The first step will be to run containers side by side with the 2nd generation applications.
Most organizations will develop their mobile and internet apps using containers while continuing to run the primary and backend applications on 2nd platform solutions.
To do so, it’s crucial for these organizations to have a platform that can host 2nd platform apps (monolithic) side by side with 3rd platform apps (microservices).


VMware’s vSphere Integrated Containers fills this gap, allowing these two technologies to work together on the most widely adopted Data Center Operating System today, vSphere.


 

As container technology and microservices architecture adoption increases in the organization, the need for a native yet trusted platform to run containers arises.

With this in mind, the new bottom-up approach architecture will be the most suitable for the enterprise to adopt.

VMware’s Photon Platform is the first enterprise-ready solution based on an industry-proven micro-visor and controller, utilizing all the experience and knowledge VMware has gathered over the last 15 years running enterprise production loads at scale.


“Micro-visor” is short for micro-hypervisor.

A micro-visor works with the VT (Virtualization Technology) features built into Intel, AMD and other CPUs to create hardware-isolated micro virtual machines (micro-VMs) for each task performed by a user that utilizes data originating from an unknown source.
The micro-VMs created by the micro-visor provide a secure environment, isolating user tasks from other tasks, applications, and other systems on the network. Tasks, in this case, entail the computation that takes place within an application as well as within the system kernel, so the micro-visor ensures security at both the application and operating system kernel levels.

Utilizing VMware’s CMP (Cloud Management Platform), NSX and vSAN technologies will ensure a production-ready container infrastructure platform that can be managed by IT with proven, well-known tools while giving the developers the best API access to industry-standard container development systems.

To better understand how this solution helps organizational IT evolve, watch my Cloud Native Apps Demystified presentation.


 

Aviv Waiss is a Principal Systems Engineer at VMware.
Cloud Management Platform and Cloud Native Apps Specialist.
Member of the CTO Ambassador Program.

 

Meet The Developer – EPOPS Agent


 

As part of the collaboration between the VMware R&D organization and our customers, we held a unique event where our top Israeli customers met the EPops Agent development team, heard about the technology behind the solution, and shared their perceptions and ideas with us.

 

We got great interaction with the customers, lots of valuable feedback, a better understanding of their requirements and challenges, and excellent off-session conversations about running and future projects.

Here are the event agenda and presenters:

(Agenda slide: Slide2)

Some pictures from the session…

Aviv presents the agenda and logistics.


Hilik presents the Israeli R&D Center.


Ehud presents vROps value and mission.


Noam presents vROps architecture.


Yoav demos agent installation & O/S monitoring.


Having fun in the new training center.


Ehud demos the new vCenter and SQL applications.


Dan explains how to develop your own solution using the EPops agent.


Plugin development in action! Dan simplifies new solution development.


And finally, Ronit presents the product roadmap and future directions.


Thanks to everyone who contributed, submitted and participated in the event!!!

Links to the event presentations:

Session1 – Hilik

Intro to EP ops value – Ehud

EP Ops Arch Overview – Noam

Build your own plugin – Dan

Ronit’s roadmap presentation can be presented one-on-one to NDA customers.

 

vRealize Code Stream – Pipeline As A Service


Introduction:

vRealize Code Stream pipelines can be executed in a couple of different ways, depending on the customer scenario.
One option is to use the Execute button on the Pipelines screen; this will execute the selected pipeline.
Another option is to use the REST API to execute a specific pipeline.
There are some sample scripts and code in the vRCS documentation that explain how to interact with the vRCS REST API.
In this post I will show how to use the vRealize Orchestrator HTTP-REST plug-in to execute a vRCS pipeline.
This is very useful in use cases where we need external developers or DevOps engineers to trigger pipelines based on specific roles and permissions.
It can also help where we are using the vRO API as a central API for the cloud and SDDC and need to trigger multiple actions in multiple products (vRA, vRCS) from external apps.


Step 1 – vRealize Orchestrator – Add a REST host

In the first step we need to configure vRCS as our REST host:

  1. Open your vRO client and go to Library > HTTP-REST > Configuration.
  2. Run the Add a REST host workflow and fill in your vRCS host’s details (see screenshots add_rest_host-1 through add_rest_host-4).
  3. On the last screen, enter your tenant username and password to complete the process.

Step 2 – vRealize Orchestrator – Add REST operations

Now we need to add some REST operations that will define our REST calls to the vRealize Code Stream API.

REST Operation 1 – get vRealize Code Stream authentication token.

In this call we will authenticate against Code Stream and get the authentication token; it will be used later in the other API calls (see the sketch below).
*The token is valid for 24 hours.

  1. Open your vRO client and go to Library > HTTP-REST > Configuration.
  2. Run the Add a REST operation workflow and fill in the details as follows:

URL: /identity/api/tokens

  (Screenshot: add_rest_operation-get token)
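For reference, here is a minimal vRO scriptable-task sketch of this call (an HTTP POST). It assumes the operation above is bound to a restOperation input of type REST:RESTOperation, that username, password and tenant are string inputs of the workflow, and that the identity service returns the token in an id field of the JSON response; adjust to your environment.

// Sketch: get an authentication token from the vRCS identity service (HTTP POST).
// 'restOperation' is the "get token" REST operation defined above;
// 'username', 'password' and 'tenant' are string inputs of the workflow.
var body = JSON.stringify({ "username": username, "password": password, "tenant": tenant });

var request = restOperation.createRequest([], body);
request.contentType = "application/json";

var response = request.execute();
if (response.statusCode != 200) {
    throw "Authentication failed, status code: " + response.statusCode;
}

// Assumption: the token is returned in the 'id' field of the JSON response.
var token = JSON.parse(response.contentAsString).id;
System.log("Obtained Code Stream token");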

REST Operation 2 – get vRealize Code Stream pipeline list.

In this call, we will use the Code Stream API to get the pipeline list (a scripting sketch follows below).
We will use this list later to choose the right pipeline to execute.

  1. Open your vRO client and go to Library > HTTP-REST > Configuration.
  2. Run the Add a REST operation workflow and fill in the details as follows:

URL: /release-management-service/api/release-pipelines

  (Screenshot: add_rest_operation-get-pipline list)
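A matching scriptable-task sketch for this call (an HTTP GET), assuming restOperation is bound to the operation above and token holds the string obtained from Operation 1; the content field name is an assumption based on the usual paginated response format:

// Sketch: list Code Stream pipelines (HTTP GET), authenticated with the Bearer token.
var request = restOperation.createRequest([], null);
request.setHeader("Authorization", "Bearer " + token);

var response = request.execute();
if (response.statusCode != 200) {
    throw "Failed to get the pipeline list, status code: " + response.statusCode;
}

// Assumption: pipelines are returned in a 'content' array of the JSON response.
var pipelines = JSON.parse(response.contentAsString).content;
for each (var p in pipelines) {
    System.log("Pipeline: " + p.name + " (id: " + p.id + ")");
}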

REST Operation 3 – get vRealize Code Stream pipeline information.

In this call, we will use the Code Stream API to get information about a particular pipeline (a scripting sketch follows below).
We will need to supply the pipeline name.
We will use this later to get the pipeline running status.

The information that comes back from this call is in JSON format.

  1. Open your vRO client and go to Library > HTTP-REST > Configuration.
  2. Run the Add a REST operation workflow and fill in the details as follows:

URL: /release-management-service/api/release-pipelines?name={pipeline_name}

  (Screenshot: add_rest_operation-get pipeline info)
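A sketch for this call (an HTTP GET), assuming pipelineName is a string input of the workflow and {pipeline_name} is the only URL template parameter of the operation; the content and id field names are assumptions:

// Sketch: get details of a single pipeline by name (HTTP GET).
// The value for the {pipeline_name} URL template parameter is passed in the array argument.
var request = restOperation.createRequest([pipelineName], null);
request.setHeader("Authorization", "Bearer " + token);

var response = request.execute();
if (response.statusCode != 200) {
    throw "Failed to get pipeline details, status code: " + response.statusCode;
}

// Assumption: the matching pipeline comes back as the first entry of a 'content' array.
var pipeline = JSON.parse(response.contentAsString).content[0];
System.log("Pipeline '" + pipelineName + "' has id " + pipeline.id);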

REST Operation 4 – execute vRealize Code Stream pipeline.

In this call, we will use the Code Stream API to run a pipeline (a scripting sketch follows below).
We will need to supply the pipeline ID.

The information that comes back from this call is in JSON format.

  1. Open your vRO client and go to Library > HTTP-REST > Configuration.
  2. Run the Add a REST operation workflow and fill in the details as follows:

URL: /release-management-service/api/release-pipelines/{pipeline_id}/executions

  (Screenshot: add_rest_operation-execute pipeline)
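A sketch for this call (an HTTP POST), assuming pipelineId was obtained from Operation 3 and pipelineParams is an optional JSON string with the pipeline properties; treat the accepted status codes as an assumption:

// Sketch: execute a pipeline by id (HTTP POST).
// The {pipeline_id} URL template parameter is passed in the array argument;
// the request body carries the pipeline properties as JSON (may be empty).
var body = (pipelineParams && pipelineParams !== "") ? pipelineParams : "{}";

var request = restOperation.createRequest([pipelineId], body);
request.contentType = "application/json";
request.setHeader("Authorization", "Bearer " + token);

var response = request.execute();
if (response.statusCode != 200 && response.statusCode != 201) {
    throw "Failed to execute the pipeline, status code: " + response.statusCode;
}

System.log("Pipeline execution requested: " + response.contentAsString);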

Step 3 – vRealize Orchestrator – Build a vRO workflow to run a pipeline

You can simply create a vRO workflow from each REST operation by running the “Generate a new workflow from a REST operation” workflow.
This will create a simple workflow that lets you input the right parameters and execute the particular REST call.

*Be sure to add the following line to your script so the request is authenticated to Code Stream before you run the REST operation (where token is a string variable holding the token obtained in Step 2, Operation 1):

request.setHeader("Authorization", "Bearer " + token);
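In the script generated by that workflow, this header call belongs between the request creation and the execute call. Roughly (a sketch only; the variable names in your generated script may differ):

// Sketch: placement of the Authorization header in a generated scriptable task.
var request = restOperation.createRequest(inParamtersValues, content);  // created by the generated workflow
request.contentType = "application/json";
request.setHeader("Authorization", "Bearer " + token);                  // add this line before executing
var response = request.execute();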

To make life easier, I have created three vRO actions to represent the REST operations and a sample workflow to run them.
Here are the instructions on how to use it:

Import the vRO Code Stream package

  1. Download my vRO Code Stream package: com.vmware.codestream.package
  2. Go to the vRO main screen and press the “Import package” button.
  3. Choose the place to add the package.
  4. Find the “Run Pipeline” workflow.
  5. It will look like this: (screenshot: run_pipeline_schema)
  6. Edit the workflow and go to the General tab to see the workflow attributes.
  7. Edit the following attributes:
    1. username – your Code Stream username (user@domain).
    2. password – your Code Stream password.
    3. tenant – your Code Stream tenant.
    4. tokenRest – link to the get token REST operation.
    5. listREst – link to the get pipeline list REST operation.
    6. getPipelineDetails – link to the get pipeline details REST operation.
    7. executePipeline – link to the execute pipeline REST operation.
    8. Save the “Run Pipeline” workflow.
  8. Run the workflow.
  9. You will get a screen like this: (screenshot: run pipeline)
  10. Choose the pipeline you want to run. (screenshot: run pipeline2)
  11. If you want to supply pipeline parameters, enter them in JSON format in the content section.
  12. Press Submit to run the pipeline.
  13. The vRO workflow will execute the pipeline and wait until it completes.

*Please note I haven’t implemented any exception handling in the workflow, so if you need to run this in production some more work will be required…

Step 4 – vRealize Automation – Create an Advanced Service to run pipelines

  1. Open vRA and go to the Advanced Services tab.
  2. Press Add and select your workflow from the list.
  3. Press Next until the wizard finishes.
  4. Go to the Administration tab and select Catalog Items.
  5. Press the Code Stream workflow you just added.
  6. Check a particular Service to add the workflow to.
  7. Add a Code Stream logo icon to the service.
  8. Press Update.
    *You have to set the right entitlements to see the Code Stream workflow in your catalog; refer to the vRA documentation if you don’t know how to do that…
  9. Go to the vRA catalog.
  10. You will see a new catalog item like this: (screenshot: catalog_iteam-run pipeline)
  11. Request the catalog item.
  12. You will get a screen like this: (screenshot: catalog_iteam-new request)
  13. Press the list next to pipeline_name, and you will get the list of pipelines from your Code Stream server. (screenshot: catalog_iteam-new request list)
  14. Choose the pipeline to run, add parameters in the content section if needed (parameters need to be in JSON format: {“parameter_name”:”parameter_value”}), and press Submit.
  15. A new request will be initiated in vRA; the request will stay open until the Code Stream pipeline is completed.


This is just an example of how to use the vRO HTTP-REST plug-in to execute Code Stream pipelines.
To get the full list of Code Stream API functions and capabilities, go to:
https://<vRCS-SERVER-FQDN>/release-management-service/api/docs/

Latest announcements from VMworld US 2014

Here is a summary of what was announced in San Francisco 2014…

EVO:RAIL:
Previously called “Marvin” in the press, this hyper-converged infrastructure appliance is designed by VMware and built/sold by hardware vendors providing EVO:RAIL compatible hardware with VMware software on top.
Purpose: SMB customers, ready in 15 minutes, scalable and complete in 1 SKU.
Competition: Nutanix, SimpliVity, Scale Computing, Maxta, …
EVO:RAIL software included: vSphere Enterprise Plus & ESXi, vCenter Server, VMware VSAN, EVO:RAIL management GUI & vCenter Log Insight.
Hardware specifications:

  • 2U 4-nodes hardware platform optimized for EVO:RAIL and provided by selected OEM partners.
  • Dual CPU sockets.
  • Memory: up to 192 GB
  • 16 TB of storage on VSAN (HDDs & flash)
  • Automated scale-out up to 4 appliances (HCIAs), which can support up to 400 server VMs or 1,000 VDI VMs

1st OEMs announcing EVO:RAIL HCIA: EMC, Fujitsu, Dell & SuperMicro

vCloud Suite v5.8:

  • Improved business continuity & disaster recovery.
  • Improved SRM integration with vCAC; SRM can now be offered as a service in vCAC’s self-service portal

vCloud Automation Center 6.1 is announced

  • Enhanced next-gen apps, such as Big Data Extensions for Hadoop 2.
  • Improved interoperability with NSX.
  • New proactive support: free Support Assistant vCenter plug-in.

vSphere 6.0: (beta)

  • Fault Tolerance will support 4 vCPUs
  • Cross vCenter vMotion support
  • Long distance vMotion is enhanced
  • Using NSX, network properties can now be kept on long-distance vMotion
  • OpenStack: VIO (VMware Integrated OpenStack, beta), a standard OpenStack distribution packaged as a virtual appliance (OVA) that makes it easy for IT to run an enterprise-grade OpenStack on top of their existing VMware infrastructure.

VMware vRealize™:
vCenter management family products are renamed under the vRealize™ brand.
The vRealize™ Suite is a Cloud Management Platform suite.

  • vRealize Operations Insight: add-on for vSphere with Operations Management (vSOM)
  • vRealize Air: SaaS offering for vCloud Automation Center
  • vRealize Suite: Complete set of VMware management products

Rebranding examples: vCloud Automation Center (vCAC) = vRealize Automation & vCenter Orchestrator (vCO) = vRealize Orchestrator…
vRealize Suite is the next step in the evolution of VMware’s cloud management family, shifting from a product to a platform strategy

End-User Computing:

  • VMware, NVIDIA & Google agreement for Graphics-Rich Applications.
  • VMware and SAP Collaborate around Mobile Security & Mobile Apps.
  • Workspace Suite: Mobile, Desktop and Content Management unified.
  • New Horizon DaaS Services and Expansion to Europe

vCloud Air (formerly vCHS):
In addition to the existing IaaS, DaaS and DRaaS offerings (see the vCloud Air™ OnePager), VMware announced new services:

  • DevOps as a service
  • DBaaS: MS SQL and MySQL first. Other DB platforms will follow.
  • Object-based storage (based on ViPR): extremely scalable, cost-effective, and durable storage for unstructured data.
  • Mobility Services:
    • with AirWatch: mobility management, mobile app develop…
    • with Pivotal CF Mobile Services.
  • PaaS: based on Pivotal CF
  • Cloud Management as a Service: vRealize Air

Cloud Computing, SDDC and the hybridity concept explained

In some of the recent projects I managed, I found myself explaining to customers the basics of SDDC, cloud computing approaches and especially the different approaches of the Private, Public and Hybrid cloud.

Public, Private, Hybrid – it can all seem like empty words when you try to apply them in the real world.

Trying to understand how to solve day-to-day use cases, customers miss the evolution the IT industry has gone through in recent years.

Most customers today have some virtualized assets in their IT; many of them trust virtualization in production and test & dev environments. But the big picture here is that they have standardized their IT infrastructure on software rather than hardware.

For many years compute was the only aspect of virtualization; today networking & security and storage are making their way in to complete the picture.

But what does all of this have to do with SDDC and the different cloud approaches?

For many years customers have struggled to understand IT costs, how many resources they need, and how to run their business with more agility to meet the business needs.

These three pillars were always bottlenecked by the IT hardware architecture, which made it hard to plan or run IT like a business.

The NIST Definition of Cloud Computing defines cloud computing as “a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction. This cloud model has five essential characteristics, three service models, and four deployment models.”

Customers couldn’t create a cloud environment based on the old hardware approach because it lacked the ability to create shared pools and on-demand resources.

When virtualization came along, customers started to have more and more flexible, agile and on-demand environments; therefore we call them “Private Clouds.”

A couple of vendors took the agile, on-demand data center approach and created pooled resources that customers consume over the Internet for external use or for burst resource needs such as development environments…

We call these “Public Clouds,” as they live outside of our organization and the level of connectivity between them and the customer environment is very limited.


Then came the “Hybrid Cloud” model. It was meant to provide a bridge from the private to the public cloud, allowing customers to extend their private clouds to the open world and move resources from the public world back to the private one.

The hybrid cloud model allowed customers for the first time to actually share resources between clouds. But what does SDDC have to do with all that?

SDDC is the next generation of virtualization; it extends standard compute virtualization to virtual networking, security, and storage.

SDDC gives us a common infrastructure covering all IT aspects (Compute, Network, Storage) for both Private and Public cloud and enables seamless migration of loads between them.

With SDDC in place, customers don’t need to “migrate” loads, as they seamlessly move back and forth from private to public clouds and vice versa, as if they were all in the same data center.

More importantly, in the age of building hybrid clouds we need to consider a new term called “Hybridity.”

In cloud computing, hybridity means that you can mix all kinds of resource elements and technologies between the Private and Public Clouds; it’s not just infrastructure anymore, but also environments, applications, and policies.

Hybridity gives us the ability to create flexible application configurations and security policies that move together with their attached resources back and forth from one cloud to another.

That way we can keep company policy and security guidelines when moving to a hybrid cloud approach.

Hybridity is enabled in a couple of different ways:

  • Attached to the platform / application only
  • Covers all infrastructure and application elements.

When covering only the platform/application, we still need to consider a migration phase for the elements between clouds; this limits our agility and our ability to meet strict SLAs.

When covering all infrastructure and application elements, we get the real benefit of the hybrid cloud, as all IT elements are flexible and stay attached to every element they serve.

To conclude, hybridity across all cloud features is the key new feature of the hybrid cloud; it’s a must for enabling a real hybrid cloud environment, and customers must look at it carefully when architecting their next generation architecture.

To contact me about this topic, please use this form: