Software release management is a challenging process for big enterprises.
While most enterprise applications are critical to running the business, the process of releasing and updating application versions is often painful and inconsistent.
Let’s identify some of the causes of this situation:
Low focus on automation: In most organizations, automation efforts are invested in the delivery of virtual machines or off-the-shelf applications, while legacy monolithic apps are still pushed and updated in old, manual ways.
Large, monolithic applications: many organizational apps are built as one big pile of code, where every change needs to be heavily evaluated and can have a global effect on the whole app.
Manual QA and test processes: on one of my customer visits, after I asked “How are you testing your software?” I got an answer I didn’t expect: “We push the code blindly and wait for the results from the users.” The lack of an efficient, automated way to do QA led them to skip the process altogether in order to meet the company goals.
Orphaned scripts and workflows: while some departments try to write their own solutions to build and test the code, without proper integration and a central pipeline management system this work stays quarantined within the specific department and has minimal effect on the overall release process.
To achieve better efficiency and control over the release process, a change in approach needs to be taken.
First, we need a central pipeline management system that can host all of the different scripts and workflows that take part in the release process.
This system needs to be able to connect, directly or via API, to all the source control systems, artifact management software, platforms, and infrastructure involved in releasing software.
Then we need to build the different pipelines, made up of stages (test, QA, staging…), tasks (run scripts, run workflows, fetch the binaries…), and gating rules to manage the process (test acceptance results, human approvals…), as in the sketch below.
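To make the idea of stages, tasks, and gating rules more concrete, here is a minimal Python sketch of how such a pipeline could be modeled. It is an illustration only; the Stage, Task, and run_pipeline names are my own and not part of any product API.

from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class Task:
    name: str
    run: Callable[[], bool]                    # returns True when the task succeeds


@dataclass
class Stage:
    name: str
    tasks: List[Task] = field(default_factory=list)
    gate: Callable[[], bool] = lambda: True    # gating rule: test results, human approval, ...


def run_pipeline(stages: List[Stage]) -> bool:
    """Run stages in order; stop when a task fails or a gate blocks promotion."""
    for stage in stages:
        for task in stage.tasks:
            if not task.run():
                print(f"Task '{task.name}' failed in stage '{stage.name}'")
                return False
        if not stage.gate():
            print(f"Gate blocked promotion after stage '{stage.name}'")
            return False
        print(f"Stage '{stage.name}' completed")
    return True


# Example: test -> QA -> staging, with a human-approval gate before staging.
pipeline = [
    Stage("test", [Task("unit tests", lambda: True)]),
    Stage("QA", [Task("integration tests", lambda: True)],
          gate=lambda: input("Approve promotion to staging? (y/n) ") == "y"),
    Stage("staging", [Task("deploy binaries", lambda: True)]),
]

if __name__ == "__main__":
    run_pipeline(pipeline)

In a real pipeline management system the tasks and gates would be defined declaratively and executed by the platform, but the structure (ordered stages, tasks inside them, gates between them) is the same.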
And last, we need to expose these pipelines as managed services to the development organization.
This is the most essential part: the ability to run any pipeline at the click of a button, and to watch it run, is the game changer for developers, who can now ship code more often while maintaining the reliability of the company’s software.
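To give a feel for what that button click can translate to behind the scenes, here is a rough Python sketch of a developer triggering a pipeline run over a REST API and polling its status. The base URL, endpoints, payload fields, and token below are hypothetical placeholders, not the actual Cloud Automation Services API.

import time
import requests

BASE_URL = "https://pipelines.example.com/api"   # hypothetical pipeline service URL
TOKEN = "REPLACE_WITH_API_TOKEN"                 # hypothetical auth token
HEADERS = {"Authorization": f"Bearer {TOKEN}"}


def trigger_run(pipeline_name: str) -> str:
    """Start a pipeline execution and return its run id."""
    resp = requests.post(
        f"{BASE_URL}/pipelines/{pipeline_name}/executions",
        headers=HEADERS,
        json={"comments": "triggered by developer"},
    )
    resp.raise_for_status()
    return resp.json()["id"]


def wait_for_run(run_id: str, poll_seconds: int = 10) -> str:
    """Poll the execution until it reaches a terminal state."""
    while True:
        resp = requests.get(f"{BASE_URL}/executions/{run_id}", headers=HEADERS)
        resp.raise_for_status()
        status = resp.json()["status"]
        if status in ("COMPLETED", "FAILED", "CANCELED"):
            return status
        time.sleep(poll_seconds)


if __name__ == "__main__":
    run_id = trigger_run("release-pipeline")
    print("Pipeline finished with status:", wait_for_run(run_id))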
Visualizing and versioning the pipeline runs is also a massive leap, as we can now expose every part of the software lifecycle to any stakeholder in the organization who doesn’t understand complex scripts or workflows.
To conclude: DevOps thinking and strategy in a traditional enterprise can have a significant effect on the reliability of the business software, helping the organization evolve faster and keep pace with a fast-changing world.
Pipeline as a service is a feature of VMware Cloud Automation Services.
For more info, visit https://cloud.vmware.com/cloud-automation-services