Design for operations to deliver on digital transformation
To deliver on digital transformation and improve business performance, enterprises are adopting a “design for operations” approach to software development and delivery, one based on intelligent automation at scale and on connecting ever-changing customer needs to automated IT infrastructure. DevOps is the set of practices that makes this possible, enabled by software pipelines that support continuous delivery.
The utopia of software delivery is Zero Ops, where systems are self-healing and respond to events automatically — that is, with zero human touch. Enterprises can apply the new delivery approaches to both greenfield and legacy systems, using intelligent automation to get even more value from existing systems and to pay down technical debt, freeing up funding for digital transformation.
Products and services pass through various stages of design evolution: from design for purpose, to design for manufacture, to design for operations. In information technology, for example, early systems such as Colossus were designed for one purpose: to break encrypted messages. And packaged software from companies such as Oracle is a good example of designing for manufacture.
Designing for operations takes into account the end-to-end cost of delivering and using software. Good examples of companies that do this include Netflix and other software-driven services. They understand that they own the full cost of delivering their software and have optimized accordingly, using practices we now call DevOps.
The efficiencies that can be achieved with designing for operations mean that companies running bespoke software (designed for purpose) and packaged software (designed for manufacture) face a maturity gap in which the liability of their software outweighs its value. If that gap can be closed, delivery can be better, faster and cheaper (no need to pick just two).
It’s essential to close that gap, because if competitors can deliver better, faster and cheaper, that puts them at an advantage. This even includes the public sector, since government departments, agencies and local authorities are all under pressure to deliver higher quality services to citizens with lower impact on taxation.
The reason we “shift left”
A typical outcome of the design-for-purpose approach is that functional requirements (what the software should do) are pursued in preference to nonfunctional requirements (security, compliance, usability, maintainability). As a result, things like security get bolted on later, and in many cases missing functionality accrues as technical debt — that is, decisions that seem expedient in the short term become costly in the longer term.
The concept of “shifting left” is about ensuring that all requirements are included in the design process from the beginning.1 In practice, that doesn’t have to mean lots of extra development work, as careful choices of platforms and frameworks can ensure that aspects such as security are baked in from the beginning. A good example of contemporary development practices that support this is manifested when we ask, “How do we know that this application is performing to expectations in the production environment?” This moves way past “Does it work?” and starts asking “How might it not work, and how will we know?”
The emergent practice of Observability Driven Development (ODD) is where applications and their deployment pipelines are instrumented so that issues that make it past established tests into production can be localized and remedied quickly. This builds upon established practices such as Test Driven Development (TDD) and Behavior Driven Development (BDD) by considering how an application interacts with monitoring and debugging.
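To make ODD concrete, here is a minimal sketch of instrumenting a request handler, assuming the open-source prometheus_client library; the handler, the metric names and the checkout example are invented for illustration, not taken from any particular product.

```python
# A minimal observability sketch: instrument a request handler so that
# issues reaching production can be localized quickly. The function and
# metric names (handle_request, checkout_*) are illustrative only.
import time
from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("checkout_requests_total", "Checkout requests", ["outcome"])
LATENCY = Histogram("checkout_latency_seconds", "Checkout request latency")

def handle_request(order):
    """Process an order, recording outcome and latency for monitoring."""
    start = time.monotonic()
    try:
        # ... real business logic would go here ...
        REQUESTS.labels(outcome="success").inc()
    except Exception:
        REQUESTS.labels(outcome="error").inc()
        raise
    finally:
        LATENCY.observe(time.monotonic() - start)

if __name__ == "__main__":
    start_http_server(8000)  # expose /metrics for the monitoring system
    handle_request({"id": 1})
```

With instrumentation like this in place, the question shifts from “Does it work?” to “How will we know when it doesn’t?”, because every deployment carries its own evidence.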
Agile enterprises need DevOps
The faster part of the better-faster-cheaper promise of design for operations isn’t just about straightforward speed to get products and services to market quickly (from concept to “ka-ching”). It’s also about “turning speed,” that is, being responsive to changing customer needs and demands. Agile enterprises need agile software development, and that in turn needs agile infrastructure — on-demand, API-driven, pay-as-you-go — that is, a cloud.
As an agile enterprise, the way to achieve a high “turning speed” and connect ever-changing user needs to agile infrastructure is by embracing DevOps. DevOps, which unifies development and operations, is the set of practices reflecting a high degree of collaboration and speed for software development and delivery.
The key characteristics of DevOps, commonly called the “three ways,” are flow, feedback and continuous learning by experimentation:
• Flow is about ensuring that work can move as quickly as possible into the operational environment. Flow comes from building pipelines that connect the software engineering process end-to-end, from capturing needs through to production infrastructure.
• Feedback is about finding problems at the earliest opportunity so that corrective action can be quick and inexpensive. Feedback comes from building tests into the pipeline so that errors are caught early: unit tests wherever possible, because testing parts of the software in isolation is fast and cheap, and integration tests wherever necessary (a minimal unit-test sketch follows this list).
• Continuous learning by experimentation is about making it cheap and easy to try things that might be improvements, and also making it cheap and easy to take the improvements that work and incorporate them into the product or service. Achieving continuous learning by experimentation doesn’t come simply by building pipelines and the tests that go along with them. It takes work to realign culture around trying new things and seeing what works. But the advantages of becoming a learning organization make that extra effort worthwhile.
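To illustrate the feedback point above, here is a minimal unit-test sketch in the pytest style; the apply_discount function and its pricing rule are hypothetical, invented purely to show why isolated tests give fast, cheap feedback.

```python
# Fast, isolated unit tests give the earliest feedback in the pipeline.
# The pricing rule below is a hypothetical example, not a real product rule.
def apply_discount(price: float, percent: float) -> float:
    """Return the price after a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("discount must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# pytest discovers and runs these in milliseconds, so a broken change is
# caught long before it reaches integration testing or production.
def test_applies_discount():
    assert apply_discount(100.0, 20) == 80.0

def test_rejects_invalid_discount():
    import pytest
    with pytest.raises(ValueError):
        apply_discount(100.0, 150)
```

Because tests like these run in isolation, they can gate every commit in the pipeline, leaving slower integration tests to verify only what isolation cannot.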
DevOps leads to continuous delivery (CD), which builds pipelines that connect user need to automated infrastructure. Although it’s possible to do continuous integration (CI) within an organization that keeps Dev and Ops separate, CD can only be achieved by connecting all the way through to production infrastructure, which requires Dev and Ops to work together on automation.
Enterprises can take things further and implement continuous deployment, where code changes flow through the pipeline directly into production when tests pass. But continuous deployment generally isn’t appropriate for organizations working in regulated environments.
The “process break” in continuous delivery, where a decision is made to choose a particular release to deploy into production, can be imagined as the “big red button” that isolates automated testing from the production environment; that process break aligns well with many segregation-of-duties requirements placed on organizations, including those in healthcare, financial services, telecommunications and government.
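To make that process break concrete, here is a toy pipeline sketch in which the automated stages run unattended and the production deployment waits for an explicit human decision; every stage function here is an invented placeholder, not a real deployment tool.

```python
# A toy continuous-delivery pipeline. Build and test stages run
# automatically; deployment sits behind a human approval -- the
# "big red button" that supports segregation-of-duties requirements.
# All stage functions are illustrative placeholders.

def build() -> str:
    print("building release candidate...")
    return "release-1.2.3"

def run_tests(candidate: str) -> bool:
    print(f"running automated tests against {candidate}...")
    return True  # a real pipeline would fail the run on any error

def approve_release(candidate: str) -> bool:
    """The process break: a named person decides to deploy."""
    answer = input(f"Deploy {candidate} to production? [y/N] ")
    return answer.strip().lower() == "y"

def deploy(candidate: str) -> None:
    print(f"deploying {candidate} to production")

if __name__ == "__main__":
    candidate = build()
    if run_tests(candidate) and approve_release(candidate):
        deploy(candidate)
```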
Site Reliability Engineering builds on DevOps
Just as evolving requirements drove the need for closer cooperation between Dev and Ops, the future will require closer cooperation between cloud provider and cloud user, or customer. To borrow from author William Gibson: The future is already here; it’s just unevenly distributed. And that future is Site Reliability Engineering (SRE) and its offspring, Customer Reliability Engineering (CRE).
SRE is a variation of DevOps that emerged from Google and has found wide adoption elsewhere. With SRE, an SRE team takes over the running of the software from the Dev team. Although it’s just one of the successful approaches identified by Skelton Thatcher Consulting in its DevOps Team Topologies,4 it’s noteworthy due to its use in Google’s emergent practice of CRE.
Google has stepped across the traditional shared-responsibility line used by cloud service providers by implementing CRE — a more integrated and collaborative approach — with a subset of users.
To be sure, the line between cloud providers and cloud users has been helpful in terms of understanding who is responsible for what, especially in the realms of security and compliance. But just as the split between Dev and Ops proved to be counterproductive, the split between provider and user can also thwart optimal operations.
If a user’s cloud-based app needs to have high availability, then having the provider involved in joint operations with the user takes the latency out of escalations when issues arise. When the provider and user have common monitoring, a common approach to postmortems, a common understanding of service level objectives (SLOs) and shared on-call responsibilities, it becomes possible to act in unison when things go wrong. SRE/CRE provides a framework for establishing that joint operations model, which leads to better business outcomes.
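As a small illustration of that shared SLO vocabulary, the sketch below computes how much of an availability error budget remains; the 99.9% target and the request counts are made-up numbers, not figures from any real service.

```python
# Error-budget arithmetic shared between provider and user. A 99.9%
# availability SLO allows 0.1% of requests to fail; joint operations
# track how much of that budget remains. All figures are illustrative.
def error_budget_remaining(slo: float, total: int, failed: int) -> float:
    """Return the fraction of the error budget still unspent."""
    allowed_failures = (1 - slo) * total
    if allowed_failures == 0:
        return 0.0
    return max(0.0, 1 - failed / allowed_failures)

# Example: 2,000,000 requests this month with 1,200 failures. The SLO
# allowed 2,000 failures, so 40% of the budget is left.
print(error_budget_remaining(slo=0.999, total=2_000_000, failed=1_200))
```

When provider and user both watch the same number, the decision to slow releases or invest in reliability stops being a negotiation and becomes arithmetic.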
Apply intelligent automation to reduce technical debt
Building continuous delivery pipelines is the way to connect user need to infrastructure for new applications or anything that’s being rewritten. That said, we have to be respectful of the value provided by legacy systems, which by definition provide business value at a price point that doesn’t support a business case for migration to the cloud.
However, that doesn’t mean those systems can’t be improved. By taking a data-driven approach and employing intelligent automation that incorporates analytics, lean and automation, we can get even more value from existing investments and pay down the technical debt of the past:
• Analytics get intelligent automation started by applying data mining, artificial intelligence (AI), predictive intelligence and other techniques to gain real-time insights into the business. A powerful combination of AI-driven process discovery and machine-learning-based tools can ingest operational and process-oriented data to baseline current operations. Then data scientists can model the environment to identify operational constraints and come up with a hypothesis for improvement, an experiment that can be tried in the field, and a narrative to explain it to stakeholders (a deliberately simple baselining sketch follows this list).
• Lean eliminates inefficiencies, drives continuous improvement, optimizes workflows, and improves both quality and consistency — so that we automate an optimized process, not just the process that’s there already, which might be littered with organizational scar tissue from mistakes of the past. A team employing lean practices can analyze data streams, develop insights and drive improvements across its IT operations.
• Automation leverages the right tool for the job at hand in the appropriate context to automate tasks, processes and workflows, improving standardization, response times and accuracy.
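As a deliberately simple sketch of the baselining idea described above (and emphatically not Bionix itself), the code below establishes a robust baseline from historical incident-resolution times and flags outliers as candidates for an improvement experiment; the data and the threshold are invented.

```python
# Baseline current operations from historical data, then flag outliers
# as candidates for improvement experiments. The resolution times (in
# hours) and the 2x-median threshold are invented for illustration.
from statistics import median

resolution_hours = [4.2, 3.8, 5.1, 4.6, 4.0, 19.5, 4.4, 3.9, 4.8, 21.0]

baseline = median(resolution_hours)  # robust to the long tail
outliers = [h for h in resolution_hours if h > 2 * baseline]
print(f"baseline {baseline:.1f}h, outliers worth investigating: {outliers}")
```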
Compared with earlier efforts that focused only on automation, this holistic combination delivers greater insights, speed, repeatability, scalability and efficiency. (See Figure 1.) Companies should embrace a holistic approach so they are not, for example, automating inefficient processes that should be eliminated altogether.
Figure 1. Intelligent automation: A holistic, data-driven approach to delivery
A holistic approach to intelligent automation can simplify and lower the cost of an organization’s IT operations, freeing up funding that can be invested in digital transformation.
Create a performance-oriented culture
Intelligent automation, then, is a key component of the “design for operations” approach. It is about adopting new technologies and processes, and equally about changing an organization’s culture.
Lean development drives a continuous-improvement mind-set that benefits from experimentation, learning, and self-managed teams that break down data silos, choose their own tools, and deliver software quickly and frequently. This new culture improves the way information flows throughout the organization, which is critical to operations and performance. It is a performance-oriented culture, where information not only provides answers quickly, but also does so in formats that can be most effectively used.
In this culture, workers are rewarded for continuous improvement, including those who want to reinvent work using nimble processes and automation. Automation has great potential to make many jobs more meaningful, as it frees workers from repetitive tasks so they can focus on innovating in creative and practical ways.
However, for automation to improve the workforce, companies must take an active role in training employees for nontransactional work and give employees greater flexibility with their roles. There must be a willingness to innovate, including borrowing best practices from outside the company.
Accelerating business transformation
Enterprises need to adopt a “design for operations” model that includes a comprehensive approach to intelligent automation — applied at the scale their size demands — to dramatically improve service delivery. This approach combines the three key elements — analytics, lean techniques and automation — to produce three important benefits: greater insights, speed and efficiency. It enables service-based solutions that are operational on Day 1.
Data-driven delivery methods are the way to go. Organizations can empower their delivery professionals to eliminate inefficiencies, reduce disruptions and accelerate resolutions, and those that apply intelligence, orchestration and automation to their offerings can quickly build and deliver repeatable offerings and solutions that help accelerate their digital transformations. It’s worth noting that the intelligence comes from processing data exhaust in streams that ultimately flow to a data lake that exists to answer the questions we forgot to ask. Analytics for known needs are baked into the operations environment and extended as new needs emerge.
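One way to picture that stream-to-lake flow, offered as a hypothetical sketch rather than a description of any actual platform: each operational action emits an event that is appended, unmodified, to durable storage, so that questions no one has asked yet can still be answered later. The file path and event shape below are invented.

```python
# A toy event stream: every operational action emits a record that is
# appended, raw, to durable storage (standing in for a data lake).
# Known questions are answered by live analytics; future questions can
# replay the raw stream. Path and event fields are illustrative.
import json
import time

def emit(event_type: str, **fields) -> None:
    """Append one operational event to the raw stream."""
    record = {"ts": time.time(), "type": event_type, **fields}
    with open("ops_events.jsonl", "a") as stream:
        stream.write(json.dumps(record) + "\n")

emit("deployment", service="billing", version="1.2.3", duration_s=94)
emit("incident_resolved", service="billing", minutes_to_resolve=18)
```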
Partners can play a valuable role, too. That’s important, since business is now an outside-in phenomenon, meaning innovation can come from anywhere. But this also means that an organization’s platforms need to be open.
Zero Ops remains the utopia of software delivery: self-healing systems that respond to events automatically — that is, with zero human touch. (See Figure 2.) Continuous development flows directly into continuous deployment, which includes automated testing and security. This reduces risk and speeds business outcomes.
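As a minimal sketch of that zero-touch idea, assuming nothing about Figure 2’s actual architecture, a supervisor loop can watch health checks and remediate an unhealthy service with no human in the loop; the health check and restart below are placeholders for real monitoring and orchestration calls.

```python
# A toy self-healing loop: detect an unhealthy service and remediate it
# automatically, with zero human touch. is_healthy and restart are
# placeholders for real monitoring probes and orchestration calls.
import random
import time

def is_healthy(service: str) -> bool:
    # Placeholder health check; a real one might probe an HTTP endpoint.
    return random.random() > 0.2

def restart(service: str) -> None:
    # Placeholder remediation; a real one would call an orchestrator.
    print(f"remediation: restarting {service}")

def supervise(service: str, checks: int = 10) -> None:
    """Watch a service and heal it automatically when a check fails."""
    for _ in range(checks):
        if not is_healthy(service):
            restart(service)
        time.sleep(1)

if __name__ == "__main__":
    supervise("billing-api")
```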
We are on the path to Zero Ops, with “design for operations” an important step on the journey. Faster, smoother delivery that minimizes costs both accelerates digital transformation and frees up funding for investment in further transformation activities. That’s a win-win.
How DXC can help
New technologies and services are available to modernize service delivery. DXC Technology has a comprehensive approach to service delivery and digital operations called DXC Bionix™.
Bionix is our approach to intelligent automation that transforms enterprises digitally at scale. Three key elements make Bionix unique:
• Analytics and AI provide real-time insights into the business and operations, identify cost-reduction opportunities, and help deliver innovation. Bionix harnesses the power of data mining, deep learning and predictive intelligence.
• Lean process methodology continually improves solution delivery, eliminates inefficiencies and waste, and optimizes both workflows and team performance. These, in turn, help improve service quality, consistency and outcomes.
• Automation leverages technologies from the DXC Partner Network, helping remove manual labor from common tasks, processes, and workflows, as well as improving response times, accuracy and standardization.
DXC has been deploying Bionix at scale to deliver managed services to our customers. In addition to achieving transformative results, we have learned valuable lessons that we are applying in customer environments. Outcomes have included:
• A 50% to 80% reduction in time spent on operations
• A 25% reduction in testing costs; 50% fewer defects; and 60% less testing time
• A dramatic reduction in the time needed to deploy an application, from an average of 3 hours to just 15 minutes
• A 70% reduction in resolution times and a greater than 3% improvement in availability SLAs
Underlying Bionix is Platform DXC, our end-to-end digital-generation platform for delivering and managing services. Platform DXC applies intelligence, orchestration and automation to ensure that our offerings are built for operations from Day 1. Platform DXC is the foundation for integrating DXC’s solutions, along with key partner intellectual property, and provides modular, agile, flexible services.
Bionix represents a major part of DXC’s own transformation, both technological and cultural, from waterfall development to DevOps and from IT outcomes to business outcomes. DXC has been applying DevOps, cloud-based services and lean techniques for years. Now we’ve committed our entire company, from solutions development to delivery, to adopt a DevOps mind-set. We can help your organization do the same, as well as maximize business performance and value through intelligent automation and digital-generation services delivery.
Now is the time to act. Don’t be disrupted — be the disruptor. Let us help you innovate and transform to differentiate with speed and quality.
About the author
Tim Henderson serves as director, Operations Engineering and Excellence, for DXC Technology’s global delivery organization. Tim leads the team responsible for the continual transformation of the company’s delivery capability, applying intelligent automation at scale to make DXC the most efficient, effective and valued corporation in the IT services industry. This multimillion-dollar cultural change program leverages the latest thinking in data science, process optimization and leading-edge automation technologies to deliver customer outcomes and enable digital transformation for our customers.