Building infrastructure for 3rd platform apps – a top-down or bottom-up approach?


Enterprises worldwide are struggling these days with a big question: how to move forward to the next-generation, 3rd platform applications.
In some organizations the transition starts from a business need; in others, the development teams are pushing for the change.

While most development departments in today’s organizations are already starting to adopt the new 3rd platform development tools, the IT departments find themselves in a strange situation.

The developers are starting to make infrastructure decisions and are sketching a new IT horizon.

The apps determine what the infrastructure will look like, and the developers take a “we don’t care” approach, asking for big “white boxes” to carry their application loads while saying: “We will take care of everything.”

This Facebook, Google, and Amazon approach works for large organizations that develop mass-scale applications, but it mostly does not fit the typical enterprise, which has limited development and IT teams.

One of the most common approaches for today’s 3rd platform apps is using software containers to build a microservices application.
While software containers are an excellent way to package and ship applications without relying on complex infrastructure, most container management systems focus on placement, shared APIs, and process management, and still depend on a general-purpose OS to run the container workloads.

This general-purpose OS, usually known as the “container host”, is where all the containers run as separate processes.

Some companies have created a stripped-down OS that has only the basic functionality needed to run containers; among these solutions are VMware’s Photon OS, CoreOS (Tectonic), Project Atomic (sponsored by Red Hat), Ubuntu Core, and Microsoft’s Nano Server.

So, going back to the traditional enterprise dilemma, there are two ways of deploying containers in an organization.

“Top-down” approach: The most commonly used today and developer-centric. It basically gives a container API to the developers and sprawls container hosts across physical or virtual servers, leaving the developers to maintain the container host OS.


“Bottom-up” approach: A new approach that distributes the responsibility and sponsorship between the developers and the IT department, empowering the developers to architect the app and IT to build a dedicated container infrastructure platform aligned with company policy and share its API back to the developers.


There is no right or wrong here!

The top-down approach mostly fits large corporations that need to build a mass-scale app to serve billions of users (Facebook, Google, Amazon) and usually create their own container host flavor and tools to deploy and maintain it.

The bottom-up approach fits organizations that need to adopt containers as part of a wider team strategy while still maintaining company IT policy.
These companies usually rely on a standard solution that has a known architecture and full support from the vendor.

Considering that the virtualization revolution created a new “data center operating system” to minimize the dependency on the general-purpose OS, we can use the same architecture to help enterprise organizations in the transition from the 2nd to the 3rd platform.


The first step will be to run containers side by side with the 2nd generation applications.
Most organizations will develop their mobile and internet apps using containers while continuing to run their primary and back-end applications on 2nd platform solutions.
To do so, it is crucial for these organizations to have a platform that can host 2nd platform apps (monolithic) side by side with 3rd platform apps (microservices).


VMware’s vSphere Integrated Containers fills this gap by allowing these two technologies to work together on today’s most widely adopted data center operating system, vSphere.


As container technology and microservices architecture adoption increase within the organization, the need for a native but trusted platform to run containers arises.

With this in mind, the new bottom-up approach architecture will be the most suitable for the enterprise to adopt.

VMware’s Photon Platform is the first enterprise-ready solution based on an industry-proven micro-visor and controller, utilizing all the experience and knowledge VMware has gathered over the last 15 years running enterprise production workloads at scale.


“Micro-visor” is short for micro-hypervisor.

A micro-visor works with the virtualization technology (VT) features built into Intel, AMD, and other CPUs to create hardware-isolated micro virtual machines (micro-VMs) for each task performed by a user that utilizes data originating from an unknown source.
The micro-VMs created by the micro-visor provide a secure environment, isolating user tasks from other tasks, applications, and systems on the network. Tasks, in this case, include the computation that takes place within an application as well as within the system kernel, so the micro-visor ensures security at both the application and operating-system-kernel levels.

Utilizing VMware’s CMP (Cloud Management Platform), NSX, and vSAN technologies will ensure a production-ready container infrastructure platform that IT can manage with proven, familiar tools while giving the developers the best API access to industry-standard container development systems.

To better understand how this solution helps the organization’s IT evolve, watch my Cloud Native Apps Demystified presentation.


Aviv Waiss is a Principal Systems Engineer at VMware, a Cloud Management Platform and Cloud Native Apps specialist, and a member of the CTO Ambassador Program.

Meet The Developer – EPOPS Agent


As part of the collaboration between the VMware R&D organization and our customers, we held a unique event where our top Israeli customers met the EP Ops agent development team, heard about the technology behind the solution, and shared their perceptions and ideas with us.

We got great interaction with the customers, lots of valuable feedback, a better understanding of their requirements and challenges, and excellent off-session conversations about current and future projects.

The event agenda and presenters are captured in the pictures and presentation links below.

Some pictures from the session…

Aviv presents the agenda and logistics.

Hilik presents the Israeli R&D center.

Ehud presents vROps value and mission.

Noam presents vROps architecture.

Yoav demos agent installation and OS monitoring.

Having fun in the new training center.

Ehud demos the new vCenter and SQL applications.

Dan explains how to develop your own solution using the EP Ops agent.

Plugin development in action! Dan simplified new solution development.

And finally, Ronit presents the product roadmap and future directions.

Thanks to everyone who contributed, submitted and participated in the event!!!

Links to the event presentations:

Session1 – Hilik

Intro to EP ops value – Ehud

EP Ops Arch Overview – Noam

Build your own plugin – Dan

Ronit’s roadmap presentation can be presented one-on-one to customers under NDA.

DevOps in the traditional enterprise – a leap ahead in release pipeline management

Software release management is a challenging process for big enterprises.
While most enterprise applications are critical to running the business, we most often see that the process of releasing and updating application versions is painful and inconsistent.
Let’s identify some of the causes of this situation:

Low focus on automation: In most organizations, automation efforts are invested in the delivery of virtual machines or off-the-shelf applications, while the legacy monolithic apps are pushed and updated in old, manual ways.

Large applications, big chunks of code: a lot of the organizational apps are built as one big pile of code, where every change needs to be heavily evaluated and can have a global effect on the whole app.

Manual QA and test processes: on one of my customer visits, after I asked “how are you testing your software?”, I got an answer I didn’t expect: “we release code blindly and wait for the results from the users.” Having no efficient, automated way to do QA led them to skip the process in order to meet the company goals.

Orphaned scripts and workflows: while some departments try to write their own solutions to build and test the code, without proper integration and a central pipeline management system this work stays confined to the specific department and has minimal effect on the whole release process.

To achieve better efficiency and control of the release process, a change in approach needs to be taken.

First, we need a central pipeline management system that can host all the different scripts and workflows that take part in the release process.
This system needs to be able to connect, directly or via APIs, to all the source control systems, artifact management software, platforms, and infrastructure involved in releasing software.

Then we need to build the different pipelines, which are made up of stages (test, QA, staging…), tasks (run scripts, run workflows, fetch the binaries…), and gating rules to manage the process (test acceptance results, human approvals…) – see the sketch below.
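To make this more concrete, here is a minimal, hypothetical pipeline definition expressed as a JavaScript object. The stage, task, and gate names are invented purely for illustration and do not follow any specific product schema:

// Hypothetical pipeline definition – structure and names are illustrative only,
// not a real vRealize Code Stream schema.
var releasePipeline = {
    name: "payments-service-release",
    stages: [
        {
            name: "Build",
            tasks: [
                { type: "script", run: "mvn clean package" },            // build the binaries
                { type: "artifact", action: "publish", repo: "nexus" }   // push them to the artifact repository
            ]
        },
        {
            name: "Test",
            tasks: [
                { type: "workflow", run: "deploy-to-test-environment" },
                { type: "script", run: "run-integration-tests.sh" }
            ],
            gate: { rule: "testAcceptance", requirement: "all tests passed" } // gating rule: acceptance results
        },
        {
            name: "Staging",
            tasks: [
                { type: "workflow", run: "deploy-to-staging" }
            ],
            gate: { rule: "humanApproval", approvers: ["release-manager"] }   // gating rule: manual approval
        }
    ]
};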

And last, we need to expose these pipelines as managed services to the development organization.
This is the most essential part, as the ability to run any pipeline at the click of a button and to watch it run is the game changer for developers, who can now ship code more often while maintaining the reliability of the company’s software.
Visualizing and versioning the pipeline runs is also a massive leap, as we can now expose every part of the software lifecycle to any stakeholder in the organization who doesn’t understand complex scripts or workflows.

To conclude: DevOps thinking and strategy in a traditional enterprise can have a significant effect on the reliability of the business software, helping the organization evolve faster and keep up the pace in a fast-changing world.

Pipeline as a Service is a feature of VMware Cloud Automation Services.
For more info, see https://cloud.vmware.com/cloud-automation-services

vRealize Code Stream – Pipeline As A Service


Introduction:

vRealize Code Stream pipelines can be executed in a couple of different ways, depending on the customer scenario.
One option is to use the Execute button on the Pipelines screen, which executes the selected pipeline.
Another option is to use the REST API to execute a specific pipeline.
There are some sample scripts and code in the vRCS documentation that explain how to interact with the vRCS REST API.
In this post I will show how to use the vRealize Orchestrator HTTP-REST plug-in to execute a vRCS pipeline.
This is very useful in a use case where we need external developers or DevOps engineers to trigger pipelines based on specific roles and permissions.
It can also help where we are using the vRO API as a central API for the cloud and SDDC and need to trigger multiple actions in multiple products (vRA, vRCS) from external apps.


Step 1 – vRealize Orchestrator – Add a REST host

In the first step we need to configure vRCS as our REST host:

  1. Open your vRO client and go to Library > HTTP-REST > Configuration.
  2. Run the “Add a REST host” workflow and fill in your vRCS host’s details.
  3. On the last screen, enter your tenant username and password to complete the process.

Step 2 – vRealize Orchestrator – Add REST operations

Now we need to add some REST operations that will define our REST calls to the vRealize Code Stream API.

REST Operation 1 – get vRealize Code Stream authentication token.

In this call we will authenticate against Code Stream and get the authentication token; it will be used later in the other API calls (see the script sketch below the operation details).
*The token is valid for 24 hours.

  1. Open your vRO client and go to Library > HTTP-REST > Configuration.
  2. Run the “Add a REST operation” workflow and fill in the details as follows:

URL: /identity/api/tokens

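For reference, here is a minimal vRO scriptable-task sketch of what this operation does. It assumes the REST operation created above is bound to a workflow input of type REST:RESTOperation named getTokenOperation, that username, password, and tenant are string inputs, and that the identity service returns the bearer token in the response’s id field – verify these assumptions against your environment and the API docs:

// Sketch: request a Code Stream / vRA identity token using the REST operation above.
// getTokenOperation is assumed to be a REST:RESTOperation bound to POST /identity/api/tokens.
var body = {
    username: username,   // e.g. "user@domain"
    password: password,
    tenant: tenant
};

var request = getTokenOperation.createRequest([], JSON.stringify(body));
request.contentType = "application/json";
request.setHeader("Accept", "application/json");

var response = request.execute();
if (response.statusCode != 200) {
    throw "Authentication failed: " + response.statusCode + " " + response.contentAsString;
}

// Assumption: the token is returned in the "id" field of the JSON response.
var token = JSON.parse(response.contentAsString).id;
System.log("Obtained a Code Stream authentication token");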

REST Operation 2 – get vRealize Code Stream pipeline list.

In this call, we will use the Code Stream API to get the pipeline list.
We will use this list later to choose the right pipeline to execute.

  1. Open your vRO client and go to Library > HTTP-REST > Configuration.
  2. Run the “Add a REST operation” workflow and fill in the details as follows:

URL: /release-management-service/api/release-pipelines


REST Operation 3 – get vRealize Code Stream pipeline information.

In this call, we will use the Code Stream API to get a particular pipeline’s information.
We will need to supply the pipeline name.
We will use this later to get the pipeline’s running status (see the script sketch below).

The information returned by this call is in JSON format.

  1. Open your vRO client and go to Library > HTTP-REST > Configuration.
  2. Run the “Add a REST operation” workflow and fill in the details as follows:

URL: /release-management-service/api/release-pipelines?name={pipeline_name} 

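As a sketch of how this lookup can be called from a vRO script – assuming a REST:RESTOperation input named getPipelineOperation bound to the URL above, and assuming the matching pipelines come back inside a content array (check this against the API docs):

// Sketch: look up a pipeline by name (operation 3) to obtain its id and status.
// token is the bearer token from operation 1; pipelineName is a string input.
var request = getPipelineOperation.createRequest([pipelineName], null);  // fills the {pipeline_name} parameter
request.setHeader("Authorization", "Bearer " + token);
request.setHeader("Accept", "application/json");

var response = request.execute();
var result = JSON.parse(response.contentAsString);

// Assumption: the response wraps matching pipelines in a "content" array with id/name/status fields.
var pipeline = result.content[0];
System.log("Pipeline " + pipeline.name + " has id " + pipeline.id + " and status " + pipeline.status);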

REST Operation 4 – execute vRealize Code Stream pipeline.

In this call, we will use the Code Stream API to run a pipeline.
We will need to supply the pipeline ID (see the script sketch below).

The information returned by this call is in JSON format.

  1. Open your vRO client and go to Library > HTTP-REST > Configuration.
  2. Run the “Add a REST operation” workflow and fill in the details as follows:

URL: /release-management-service/api/release-pipelines/{pipeline_id}/executions 

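And a matching sketch for the execute call, assuming a REST:RESTOperation input named executePipelineOperation bound to the URL above and a pipelineId obtained via operation 3. The exact shape of the execution body is not covered here, so any pipeline parameters are simply passed through as a raw JSON string:

// Sketch: execute a pipeline (operation 4). pipelineId fills the {pipeline_id} URL parameter;
// content is an optional JSON string of pipeline parameters,
// e.g. '{"parameter_name":"parameter_value"}'.
var request = executePipelineOperation.createRequest([pipelineId], content);
request.contentType = "application/json";
request.setHeader("Authorization", "Bearer " + token);
request.setHeader("Accept", "application/json");

var response = request.execute();
if (response.statusCode >= 400) {
    throw "Pipeline execution failed: " + response.statusCode + " " + response.contentAsString;
}
System.log("Pipeline execution request accepted with status " + response.statusCode);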

Step 3 – vRealize Orchestrator – Build a vRO workflow to run a pipeline

You can simply create a vRO workflow from each REST operation by running the “Generate a new workflow from a REST operation” workflow.
This will create a simple workflow that lets you input the right parameters and execute that particular REST call.

* Be sure to add the following line to your script so that you are authenticated to Code Stream before the REST operation runs (where token is a string variable holding the token obtained in Step 2, Operation 1):

request.setHeader("Authorization", "Bearer " + token);

To make life easier, I have created three vRO actions to represent the REST operations and a sample workflow to run them.
Here are the instructions on how to use it:

Import the vRO Code Stream package

  1. Download my vRO Code Stream Package: com.vmware.codestream.package
  2. Go to vRO main screen and press the “import package” button.
  3. Choose the place to add the package.
  4. Find the “Run Pipeline” workflow.
  5. Open the workflow and review its schema.
  6. Edit the workflow and go to the General tab to see the workflow attributes.
  7. Edit the following attributes:
    1. username – your Code Stream username (user@domain).
    2. password – your Code Stream password.
    3. tenant – your Code Stream tenant.
    4. tokenRest – link to the “get token” REST operation.
    5. listREst – link to the “get pipeline list” REST operation.
    6. getPipelineDetails – link to the “get pipeline info” REST operation.
    7. executePipeline – link to the “execute pipeline” REST operation.
  8. Save the “Run Pipeline” workflow.
  9. Run the workflow; a request form will appear.
  10. Choose the pipeline you want to run.
  11. If you want to supply pipeline parameters, enter them in JSON format in the content section.
  12. Press Submit to run the pipeline.
  13. The vRO workflow will execute the pipeline and wait until it completes.

*Please note I haven’t implemented any exception handling in the workflow, so if you need to run this in production some more work will be required…

Step 4 – Create a vRealize Automation Advanced Service to run pipelines

  1. Open vRA and go to the Advanced Services tab.
  2. Press Add and select your workflow from the list.
  3. Press Next until the wizard finishes.
  4. Go to the Administration tab and select Catalog Items.
  5. Select the Code Stream workflow you just added.
  6. Check the particular service you want to add the workflow to.
  7. Add a Code Stream logo icon to the catalog item.
  8. Press Update.
    *You have to set the right entitlements to see the Code Stream workflow in your catalog; refer to the vRA documentation if you don’t know how to do that…
  9. Go to the vRA catalog.
  10. You will see a new catalog item for running pipelines.
  11. Request the catalog item to open the request form.
  12. Press the selector next to pipeline_name, and you will get the list of pipelines from your Code Stream server.
  13. Choose the pipeline to run, add parameters in the content section if needed, and press Submit. (Parameters need to be in JSON format: {"parameter_name":"parameter_value"}.)
  14. A new request will be initiated in vRA; the request will stay open until the Code Stream pipeline is completed.


This is just an example of how to use the vRO HTTP-REST plug-in to execute Code Stream pipelines.
To get the full list of Code Stream API functions and capabilities, go to:
https://<vRCS-SERVER-FQDN>/release-management-service/api/docs/

Latest announcements from VMworld US 2014

Here is a summary of what was announced in San Francisco 2014…

EVO:RAIL:
Previously called “Marvin” in the press, this hyper-converged infrastructure appliance is designed by VMware and built/sold by hardware vendors providing EVO:RAIL compatible hardware with VMware software on top.
Purpose: SMB customers, ready in 15 minutes, scalable, and complete in one SKU.
Competition: Nutanix, SimpliVity, Scale Computing, Maxta, …
EVO:RAIL software included: vSphere Enterprise Plus & ESXi, vCenter Server, VMware VSAN, EVO:RAIL management GUI & vCenter Log Insight.
Hardware specifications:

  • 2U, 4-node hardware platform optimized for EVO:RAIL and provided by selected OEM partners.
  • Dual CPU sockets.
  • Memory: up to 192 GB.
  • 16 TB of storage on VSAN (HDDs & flash).
  • Automated scale-out up to 4 appliances (HCIAs), which can support up to 400 server VMs or 1,000 VDI VMs.

First OEM partners announcing EVO:RAIL HCIAs: EMC, Fujitsu, Dell & Supermicro.

vCloud Suite v5.8:

  • Improved business continuity & disaster recovery.
  • Improved SRM integration with vCAC; SRM can now be offered as a service in vCAC’s self-service portal.

vCloud Automation Center 6.1 is announced:

  • Enhanced next-gen apps, such as Big Data Extensions for Hadoop 2.
  • Improved interoperability with NSX.
  • New proactive support: free Support Assistant vCenter plug-in.

vSphere 6.0 (beta):

  • Fault Tolerance will support up to 4 vCPUs.
  • Cross-vCenter vMotion support.
  • Long-distance vMotion is enhanced.
  • Using NSX, network properties can now be kept on long-distance vMotion.
  • OpenStack: VIO (VMware Integrated OpenStack, beta), a standard OpenStack distribution in a virtual appliance (OVA) that makes it easy for IT to run an enterprise-grade OpenStack on top of their existing VMware infrastructure.

VMware vRealize™:
vCenter management family products are renamed under the vRealize™ brand.
The vRealize™ Suite is a Cloud Management Platform suite.

  • vRealize Operations Insight: add-on for vSphere with Operations Management (vSOM)
  • vRealize Air: SaaS offering for vCloud Automation Center
  • vRealize Suite: Complete set of VMware management products

Rebranding examples: vCloud Automation Center (vCAC) = vRealize Automation & vCenter Orchestrator (vCO) = vRealize Orchestrator…
vRealize Suite is the next step in the evolution of VMware’s cloud management family, shifting from a product to a platform strategy

End-User Computing:

  • VMware, NVIDIA & Google agreement for Graphics-Rich Applications.
  • VMware and SAP Collaborate around Mobile Security & Mobile Apps.
  • Workspace Suite: Mobile, Desktop and Content Management unified.
  • New Horizon DaaS Services and Expansion to Europe

vCloud Air (formerly vCHS):
In addition to the existing IaaS, DaaS, and DRaaS offerings (see the vCloud Air™ one-pager), VMware announced new services:

  • DevOps as a service
  • DBaaS: MS SQL and MySQL first. Other DB platforms will follow.
  • Object-based storage (based on ViPR): extremely scalable, cost-effective, and durable storage for unstructured data.
  • Mobility Services:
    • with AirWatch: mobility management, mobile app development…
    • with Pivotal CF Mobile Services.
  • PaaS: based on Pivotal CF.
  • Cloud Management as a Service: vRealize Air

Resources: