DevOps in the traditional enterprise – a leap ahead in release pipeline management

Software release management is a challenging process for big enterprises.
While most enterprise applications are critical to running the business, most often we see that the process of releasing and updating the application version is painful and inconsistent.
Let’s identify some of the causes of this situation:

Low focus on automation: in most organizations, automation efforts are invested in the delivery of virtual machines or off-the-shelf applications, while the legacy monolithic apps are still pushed and updated in old, manual ways.

Large applications, one big chunk of code: a lot of the organizational apps are built as a single big pile of code, where every change needs to be heavily evaluated and can have a global effect on the whole app.

Manual QA and test processes: on one of my customer visits, after I raised the question “How are you testing your software?” I got an answer I didn’t expect: “we push the code blindly and wait for the results from the users.” Having no efficient, automated way to do QA led them to skip the process altogether in order to meet the company goals.

Orphaned scripts and workflows: while some departments try to write their own solutions to build and test the code, without proper integration and a central pipeline management system this work stays quarantined in the specific department and has a minimal effect on the whole release process.

To achieve better efficiency and control of the release process, a change in approach needs to be taken.

First, we need a central pipeline management system that can host all of the different scripts and workflows that play a part in the release process.
This system needs to be able to connect, directly or via API, to all of the source control, artifact management software, platforms, and infrastructure that are involved in releasing software.

Then we need to build the different pipelines, which will be made out of stages (test, QA, staging…), tasks (run scripts, run workflows, get the binaries…), and some gating rules to manage the process (test acceptance results, human approvals…).
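
As a purely illustrative sketch (plain JavaScript, with hypothetical names and fields, not tied to any specific product), a pipeline built from stages, tasks and gates could be described roughly like this:

// Hypothetical pipeline description: stages contain tasks, gates control promotion to the next stage
var releasePipeline = {
    name: "billing-app-release",
    stages: [
        { name: "Build",   tasks: ["get the binaries", "run build scripts"] },
        { name: "Test/QA", tasks: ["deploy to test", "run automated test workflow"],
          gate: "test acceptance results must pass" },
        { name: "Staging", tasks: ["deploy to staging"],
          gate: "human approval required before production" }
    ]
};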

And last, we need to expose these pipelines as managed services to the development organization.
This is the most essential part: the ability to run any pipeline at the click of a button, and to watch it run, is the game changer for developers, who can now build code more often while maintaining the reliability of the company’s software.
Visualizing and versioning the pipeline runs is also a massive leap, as we can now expose every part of the software lifecycle to any stakeholder in the organization who doesn’t understand complex scripts or workflows.

To conclude: DevOps thinking and strategy in a traditional enterprise can have a significant effect on the reliability of the business software, helping the organization evolve faster and step up its pace in a fast-changing world.

Pipeline as a service is a feature of VMware Cloud Automation services.
For more info click https://cloud.vmware.com/cloud-automation-services


vRealize Code Stream – Pipeline As A Service


Introduction:

vRealize Code Stream pipelines can be executed in a couple of different ways, depending on the customer scenario.
One option is to use the Execute button on the pipelines screen, which runs the selected pipeline.
Another option is using the REST API to execute a specific pipeline.
There are some sample scripts and code in the vRCS documentation that explain how to interact with the vRCS REST API.
In this post I will show how to use the vRealize Orchestrator HTTP-REST plug-in to execute a vRCS pipeline.
This is very useful in a use case where we need to let developers or DevOps engineers outside the team trigger pipelines based on specific roles and permissions.
It can also help where we are using the vRO API as a central API for cloud and SDDC and need to trigger multiple actions in multiple products (vRA, vRCS) from external apps.


Step 1 – vRealize Orchestrator – Add a REST host

In the first step we need to configure vRCS as our REST host:

  1. Open your vRO client and go to Library > HTTP-REST > Configuration.
  2. Run the “Add a REST host” workflow and fill in your vRCS host’s details as shown in the screenshots (add_rest_host 1–4).
  3. In the last screen, enter your tenant username and password to complete the process.

Step 2 – vRealize Orchestrator – Add REST operations

Now we need to add some REST operations that will define our REST calls to the vRealize Code Stream API.

REST Operation 1 – get vRealize Code Stream authentication token.

In this call we will authenticate against Code Stream and get the authentication token; it will be used later in the other API calls.
*The token is valid for 24 hours.

  1. Open your vRO client and go to Library > HTTP-REST > Configuration.
  2. Run the “Add a REST operation” workflow and fill in the details as follows:

URL: /identity/api/tokens

(screenshot: add_rest_operation-get token)
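
To give an idea of what the scripting side of this call can look like, here is a minimal vRO JavaScript sketch. The attribute name tokenRest and the credential values are placeholders, and it assumes the operation was added with the POST method:

// tokenRest is a REST:RESTOperation attribute bound to POST /identity/api/tokens
var body = JSON.stringify({ username: "user@domain", password: "password", tenant: "your-tenant" });

var request = tokenRest.createRequest([], body);
request.contentType = "application/json";
request.setHeader("Accept", "application/json");

var response = request.execute();
if (response.statusCode != 200) {
    throw "Authentication against Code Stream failed, status: " + response.statusCode;
}

// The identity service returns the token in the "id" field of the JSON response
var token = JSON.parse(response.contentAsString).id;
System.log("Obtained Code Stream token (valid for 24 hours)");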

REST Operation 2 – get vRealize Code Stream pipeline list.

In this call, we will use the Code Stream API to get the pipeline list.
We will use this list later to choose the right pipeline to execute.

  1. Open your vRO client and go to Library > HTTP-REST > Configuration.
  2. Run the “Add a REST operation” workflow and fill in the details as follows:

URL: /release-management-service/api/release-pipelines

(screenshot: add_rest_operation-get-pipline list)
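
A rough scripting sketch for this operation, assuming the operation is bound to an attribute named listRest, token holds the result of Operation 1, and the response follows the usual vRA paged format with a content array:

// listRest is a REST:RESTOperation attribute bound to GET /release-management-service/api/release-pipelines
var request = listRest.createRequest([], null);
request.setHeader("Accept", "application/json");
request.setHeader("Authorization", "Bearer " + token);

var response = request.execute();
var pipelines = JSON.parse(response.contentAsString).content;

// Collect pipeline names, for example to feed a drop-down in the workflow presentation
var names = [];
for (var i = 0; i < pipelines.length; i++) {
    names.push(pipelines[i].name);
}
System.log("Available pipelines: " + names.join(", "));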

REST Operation 3 – get vRealize Code Stream pipeline information.

In this call, we will use the Code Stream API to get information about a particular pipeline.
We will need to supply the pipeline name.
We will use this later to get the pipeline running status.

The information that comes out from this call is in JSON format.

  1. Open your vRO client and go to Library > HTTP-REST > Configuration.
  2. Run the “Add a REST operation” workflow and fill in the details as follows:

URL: /release-management-service/api/release-pipelines?name={pipeline_name}

(screenshot: add_rest_operation-get pipeline info)
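
A sketch of pulling the pipeline id (which Operation 4 needs) out of this call; pipelineName is an input parameter, and the content array is again an assumption about the response layout:

// getPipelineDetails is bound to GET /release-management-service/api/release-pipelines?name={pipeline_name}
var request = getPipelineDetails.createRequest([pipelineName], null);
request.setHeader("Accept", "application/json");
request.setHeader("Authorization", "Bearer " + token);

var result = JSON.parse(request.execute().contentAsString).content;
if (result == null || result.length == 0) {
    throw "No pipeline found with name " + pipelineName;
}

// The same object also exposes the pipeline status, which we will use to track the run
var pipelineId = result[0].id;
System.log("Pipeline " + pipelineName + " has id " + pipelineId);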

REST Operation 4 – execute vRealize Code Stream pipeline.

In this call, we will use the Code Stream API to run a pipeline.
We will need to supply the pipeline ID.

The information that comes out from this call is in JSON format.

  1. Open your vRO client and go to Library > HTTP-REST > Configuration.
  2. Run the “Add a REST operation” workflow and fill in the details as follows:

URL: /release-management-service/api/release-pipelines/{pipeline_id}/executions

(screenshot: add_rest_operation-execute pipeline)
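
And a sketch of the execution call itself. Here pipelineContent stands for the JSON parameter payload entered by the user (it may be empty); the exact body schema is described in the vRCS API docs referenced at the end of this post:

// executePipeline is bound to POST /release-management-service/api/release-pipelines/{pipeline_id}/executions
var request = executePipeline.createRequest([pipelineId], pipelineContent);
request.contentType = "application/json";
request.setHeader("Accept", "application/json");
request.setHeader("Authorization", "Bearer " + token);

var response = request.execute();
if (response.statusCode >= 400) {
    throw "Pipeline execution request failed, status: " + response.statusCode;
}
System.log("Execution triggered: " + response.contentAsString);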

Step 3 – vRealize Orchestrator – Build a vRO workflow to run a pipeline

You can create a vRO workflow from each REST operation by running the “Generate a new workflow from a REST operation” workflow.
This will create a simple workflow that lets you input the right parameters and execute the particular REST call.

* Be sure to add the following line to your script, so that you authenticate to Code Stream before you run the REST operation (where token is a string variable holding the token obtained with Operation 1 in Step 2):

request.setHeader("Authorization", "Bearer " + token);

To make life easier I have created three vRO actions to represent the REST operations, plus a sample workflow to run them.
Here are the instructions on how to use it:

Import the vRO Code Stream package

  1. Download my vRO Code Stream Package: com.vmware.codestream.package
  2. Go to the vRO main screen and press the “Import package” button.
  3. Choose the place to add the package.
  4. Find the “Run Pipeline” workflow.
  5. It will look like this: run_pipeline_schema
  6. Edit the workflow and go to the General tab to see the workflow attributes.
  7. Edit the following attributes:
    1. username – your Code Stream username (user@domain).
    2. password – your Code Stream password.
    3. tenant – your Code Stream tenant.
    4. tokenRest – link to the get token REST operation.
    5. listREst – link to the get pipeline list REST service.
    6. getPipelineDetails – link to the get pipelines REST service.
    7. executePipeline – link to the Execute Pipeline REST operation.
    8. Save the “Run Pipeline” workflow.
  8. Run the pipeline.
  9. you will get a screen like this:
    run pipeline
  10. Choose the pipeline you want to run.
    run pipeline2
  11. If you want to supply pipeline parameters, enter them in JSON format in the content section.
  12. Press Submit to run the pipeline.
  13. The vRO workflow will execute the pipeline and wait until it’s completed.
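
For reference, the waiting part can be done with a simple polling loop that re-uses REST Operation 3. This is only a sketch; the status field name and its values are assumptions to check against the vRCS API docs:

// Poll the pipeline information until it reports a terminal state
var finished = false;
while (!finished) {
    System.sleep(30000); // wait 30 seconds between checks

    var request = getPipelineDetails.createRequest([pipelineName], null);
    request.setHeader("Accept", "application/json");
    request.setHeader("Authorization", "Bearer " + token);

    var info = JSON.parse(request.execute().contentAsString).content[0];
    System.log("Pipeline status: " + info.status);
    finished = (info.status == "COMPLETED" || info.status == "FAILED");
}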

*Please note I haven’t implemented any exception handling in the workflow, so if you need to run this in production some more work will be required…

Step 4 – Create a vRealize Automation Advanced Service to run pipelines

  1. Open vRA and go to the Advanced Services tab.
  2. Press Add and select your workflow from the list.
  3. Press Next until the wizard finishes.
  4. Go to the Administration tab and select Catalog Items.
  5. Press the Code Stream workflow you just added.
  6. Check a particular service to add the workflow to.
  7. Add a Code Stream logo icon to the service: vrcs_logo
  8. Press update.
    *You have to set the right entitlements to see the Code Stream workflow in your catalog, refer to vRA documentation if you don’t know how to do that…
  9. Go to vRA catalog.
  10. You will see a new catalog item like this:
    catalog_iteam-run pipeline
  11. Request the catalog item.
  12. You will get a screen like this:
    catalog_iteam-new request
  13. Press the checklist next to pipeline_name, and you will get the list of pipelines from your Code Stream server.
    catalog_iteam-new request list
  14. Choose the pipeline you want to run, add parameters in the content section if needed, and press Submit. (Parameters need to be in JSON format: {“parameter_name”:”parameter_value”}.)
  15. A new request will be initiated in vRA; the request will stay open until the Code Stream pipeline is completed.
    request


This is just an example of how to use the vRO HTTP-REST plug-in to execute Code Stream pipelines.
To get the full list of Code Stream API functions and capabilities, go to:
https://<vRCS-SERVER-FQDN>/release-management-service/api/docs/

Latest announcements from VMworld US 2014

Here is a summary of what was announced in San Francisco 2014…

EVO:RAIL:
Previously called “Marvin” in the press, this hyper-converged infrastructure appliance is designed by VMware and built/sold by hardware vendors providing EVO:RAIL compatible hardware with VMware software on top.
Purpose: SMB customers, ready in 15 minutes, scalable and complete in one SKU.
Competition: Nutanix, SimpliVity, Scale Computing, Maxta,…
EVO:RAIL software included: vSphere Enterprise Plus & ESXi, vCenter Server, VMware VSAN, EVO:RAIL management GUI & vCenter Log Insight.
Hardware specifications:

  • 2U 4-nodes hardware platform optimized for EVO:RAIL and provided by selected OEM partners.
  • Dual CPU sockets.
  • Memory: up to 192 GB
  • 16 TB of storage on VSAN (HDDs & flash)
  • Automated scale-out up to 4 appliances (HCIAs), which can support up to 400 server VMs or 1,000 VDI VMs

1st OEMs announcing EVO:RAIL HCIA: EMC, Fujitsu, Dell & SuperMicro

vCloud Suite v5.8:

  • Improved business continuity & disaster recovery.
  • Improved SRM integration with vCAC; SRM can now be offered as a service in vCAC’s self-service portal.

vCloud Automation Center 6.1 is announced

  • Enhanced next-gen apps, such as Big Data Extensions for Hadoop 2.
  • Improved interoperability with NSX.
  • New proactive support: free Support Assistant vCenter plug-in.

vSphere 6.0: (beta)

  • Fault Tolerance will support 4 CPUs
  • Cross vCenter vMotion support
  • Long distance vMotion is enhanced
  • Using NSX, network properties can now be kept on long-distance vMotion
  • OpenStack: VIO (VMware Integrated OpenStack, beta), a standard OpenStack distribution in a virtual appliance (OVA) that makes it easy for IT to run an enterprise-grade OpenStack on top of their existing VMware infrastructure.

VMware vRealize™:
The vCenter Management family products are renamed under the vRealize™ brand.
vRealize™ Suite is a Cloud Management Platform suite.

  • vRealize Operations Insight: add-on for vSphere with Operations Management (vSOM)
  • vRealize Air: SaaS offering for vCloud Automation Center
  • vRealize Suite: Complete set of VMware management products

Rebranding examples: vCloud Automation Center (vCAC) = vRealize Automation & vCenter Orchestrator (vCO) = vRealize Orchestrator…
vRealize Suite is the next step in the evolution of VMware’s cloud management family, shifting from a product to a platform strategy

End-User Computing:

  • VMware, NVIDIA & Google agreement for Graphics-Rich Applications.
  • VMware and SAP Collaborate around Mobile Security & Mobile Apps.
  • Workspace Suite: Mobile, Desktop and Content Management unified.
  • New Horizon DaaS Services and Expansion to Europe

vCloud Air (formerly vCHS):
In addition to the existing IaaS, DaaS and DRaaS offerings (see the vCloud Air™ OnePager), VMware announced new services:

  • DevOps as a service
  • DBaaS: MS SQL and MySQL first. Other DB platforms will follow.
  • Object-based storage (based on ViPR): extremely scalable, cost-effective, and durable storage for unstructured data.
  • Mobility Services:
    • with AirWatch: mobility management, mobile app develop…
    • with Pivotal CF Mobile Services.
  • PaaS : based on Pivotal CF
  • Cloud Management as a Service: vRealize Air

Cloud Computing, SDDC and the hybridity concept explained

In some of the recent projects I managed, I found myself explaining to customers the basics of SDDC and cloud computing approaches, and especially the different approaches of Private, Public, and Hybrid cloud.

Public, Private, Hybrid – it all seems like empty words when you are trying to attach them to the real world.

Trying to understand how to solve day-to-day use cases, customers miss the evolution the IT industry has gone through in the last years.

Most customers today have some virtualized assets in their IT; lots of them trust virtualization in production and test & dev environments. But the big picture here is that they have standardized their IT infrastructure on software rather than hardware.

For many years compute was the only aspect of virtualization; today networking & security and storage are making their way in to complete the picture.

But what does all of this have to do with SDDC and the different cloud approaches?

For many years customers have struggled to understand IT costs and how many resources they need to run their business in a more agile way and meet the business needs.

These three pillars were always bottlenecked by the IT hardware architecture, which made it hard to plan or run IT like a business.

The NIST Definition of Cloud Computing defines cloud as “a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that are rapidly provisioned and released with minimal management effort or service provider interaction. This cloud model has five essential characteristics, three service models, and four deployment models.”

Customers couldn’t create a cloud environment based on the old hardware approach because it lacked the ability to create shared pools and on-demand resources.

When virtualization came along, customers started to have more and more flexible, agile and on-demand environments; therefore we call them “Private Clouds.”

A couple of vendors took the agile, on-demand data center approach and created pooled resources that customers consume over the Internet for external use or for burst resource needs such as development environments, etc.

We call these “Public Clouds,” as they live outside of our organization and the level of connectivity between them and the customer environment is very limited.


Then came the “Hybrid Cloud” model; it is meant to be a bridge from the private to the public cloud, allowing customers to extend their private clouds to the open world and move resources from the public world back to the private one.

The hybrid cloud model allowed customers for the first time to actually share resources between clouds, but what does SDDC have to do with all that?

SDDC is the next generation of virtualization; it extends standard compute resources with virtual networking, security, and storage.

SDDC gives us a common infrastructure covering all IT aspects (compute, network, storage) for both Private and Public clouds and enables seamless migration of loads between them.

With SDDC in place customers don’t need to “migrate” loads, as they move seamlessly back and forth from private to public clouds and vice versa, as if they were all in the same data center.

More important is that in the age of building hybrid clouds we need to consider a new term called “Hybridity.”

In cloud computing, hybridity means that you can mix all kinds of resource elements and technologies between the Private and Public Clouds; it’s not infrastructure only anymore but also environments, applications, and policies.

Hybridity gives us the ability to create flexible application configurations and security policies that move together with their attached resources back and forth from one cloud to another.

That way we can keep company policy and security guidelines when moving to a hybrid cloud approach.

Hybridity is enabled in a couple of different ways:

  • Attached to the platform / application only
  • Covers all infrastructure and application elements.

When covering only the platform/application, we still need to consider a migration phase of the element between clouds; this limits our agility and our ability to supply strict SLAs.

When covering all infrastructure and application elements, we get the real benefit of the hybrid cloud, as all IT elements are flexible and stay attached to every application element.

To conclude, the hybridity of all cloud features is the new key feature of the hybrid cloud; it’s a must for enabling a real hybrid cloud environment, and customers must look at it carefully when architecting their next-generation architecture.
