Demystifying total costs of cloud and IT infrastructure
The lure of clouds and modern IT infrastructure to lower costs, speed time to market and improve business outcomes has captured the attention of enterprises. Yet many first-generation cloud adopters, intrigued by the pay-as-you-go nature of the cloud and hyper-focused on price, lacked the ability to determine the true and total cost of the cloud. They now face increased costs and entanglements with workloads and applications that require re-evaluation. IT strategists and planners need to balance the “cloud first” mantra with a dose of realism. This paper explores the factors to consider in unit cost modeling when determining total costs of cloud and IT infrastructure.
While enterprises have different perspectives and approaches to tackling IT challenges, the answer typically comes down to one thing: money. IT economics is one of the primary techniques that can help enterprises differentiate and choose among IT platforms, using cost as the metric.
DXC Technology provides guidance on choosing the best and most cost-optimized platforms and systems for business applications. We tap into our extensive expertise in delivering advanced IT solutions across the globe to help enterprises find the best path to digital transformation. To enhance our expertise, DXC uses a total cost of ownership (TCO) tool developed by Hitachi to help enterprises compare the total costs of different platforms at a high level, using unit cost modeling. We then drill deeper to provide a more detailed analysis. Let’s explore the various parameters that we consider when determining cloud and IT infrastructure TCO.
Platform principles
In general, one style of IT deals with legacy systems that are often on-premises. Another style of IT deals with new systems that are often in the public cloud. A third style is a hybrid cloud model, in which systems are a mix of on-premises private cloud and public cloud. Regardless of the style, several long-standing principles remain true when determining platform costs:
- Transformation of IT platforms must demonstrate a clear business benefit or differentiation.
- Organizations often need consulting help for analyzing, considering or changing IT modes.
- Planning is key to ensure that the IT style fits the required applications, or else unexpected costs could be incurred.
- The right platform can be defined in multidimensional terms of performance, cost, security, growth, management and risk.
With years of cloud history behind us, we have a clear understanding of what platform or architecture fits a specific business situation. DXC has helped enterprises across the globe identify and implement effective IT infrastructure solutions that deliver massive business benefits while reducing costs. In one case, Zurich Financial Group cut provisioning costs by 30 percent by moving workloads to the cloud.
There are industry and customer-tailored assessment methods that align operational qualities with business qualities, separate from the platform. DXC helps enterprises use IT economics and total cost models to identify the optimal platform delivery choices, based on unique customer operating conditions and cost sensitivities.
One option is to maintain the status quo of buying (and depreciating), operating and refreshing IT assets in the local data center, but other options should also be evaluated. Cloud options include moving to a private cloud, where assets are consumed but typically not purchased; a private cloud can be on-premises or offsite, and can be managed by an outside firm or by the current IT staff. A hybrid model extends the private cloud, with some portion of the infrastructure deployed in a public cloud.
By comparing total costs for each platform decision, DXC can assist enterprises before they make short- and long-term business commitments. We recommend the following multidimensional cost approach.
Total cost calculations and comparisons
Over the past few decades, we have learned that the total cost of IT is much more than the purchase price. For one, we now know that public cloud architecture costs are greater than the subscription rates. Much has been written about all the cost elements that make up IT TCO, including a white paper by Hitachi. Not all costs are equal in weight and impact.
Cloud costs also have a variability that is often overlooked. We frequently see cloud-related cost exclusions in areas such as added networks, onboarding, offboarding, usage-overage tariffs and operational risks, including data center security, power and cooling. A multidimensional approach is needed to define and optimize costs.
When customers ask how to reduce or measure costs, the answer always includes: “It’s complicated.” By applying IT economics and contrasting do-it-yourself (DIY), private cloud and public cloud costs, we can quickly establish the “complicated” variables that shape cost and architecture comparisons, as sketched below.
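To make that contrast concrete, here is a minimal sketch of a unit-cost comparison across the three delivery styles. Every figure and line item is a hypothetical placeholder rather than DXC or Hitachi pricing; a real model would use quoted rates, labor costs and the deterministic variables described next.

```python
# Minimal unit-cost sketch: DIY versus private cloud versus public cloud.
# All cost figures are hypothetical placeholders for illustration only.

VMS = 500            # number of VMs in the target environment (assumed)
TERM_MONTHS = 36     # planning horizon / contract term (assumed)

platforms = {
    "DIY (own and operate)": {
        "capex": 2_400_000,       # hardware purchase, spread over the term
        "monthly_opex": 45_000,   # power, cooling, maintenance, labor
        "one_time": 0,            # no migration required
    },
    "Private managed cloud": {
        "capex": 0,
        "monthly_opex": 95_000,   # utility charge including management labor
        "one_time": 350_000,      # onboarding and migration
    },
    "Public cloud": {
        "capex": 0,
        "monthly_opex": 110_000,  # subscriptions plus network, egress, tooling
        "one_time": 300_000,      # migration and refactoring
    },
}

def unit_cost_per_vm_month(p: dict) -> float:
    """Total cost over the term divided by VM-months."""
    total = p["capex"] + p["one_time"] + p["monthly_opex"] * TERM_MONTHS
    return total / (VMS * TERM_MONTHS)

for name, p in platforms.items():
    print(f"{name:<24s} ${unit_cost_per_vm_month(p):,.2f} per VM per month")
```

The absolute numbers are not the point; the structure is. Each platform mixes capital, one-time and recurring costs differently, and a fair comparison only emerges once all of them are spread over the same term and the same estate.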
The deterministic variables are industry, data infrastructure hygiene, greenfield investment or transformation, workload type, planning horizon and growth over time, and infrastructure size and variability.
Industry
Each industry has different regulations, security and protection needs. Even though no two companies within an industry are alike, patterns and trends can affect solution costs:
- Federal standards, compliance regulations and security protocols can have a significant impact on whether some data storage or data processing can be done outside of a secured environment in the cloud.
- Data sovereignty requirements differ greatly by industry and country. For federal (and some state and local) organizations, data sovereignty introduces new risks when considering public cloud offerings. Public cloud providers can move data and processing to locations that benefit the provider but may violate data locality requirements.
- Some industries, such as telecommunications, prefer capitalization of assets.
Data infrastructure hygiene
Data infrastructure hygiene refers to protecting, preserving and optimizing the data infrastructure for superior performance. A top objective is to minimize the costs of operating and maintaining the data infrastructure. A key consideration is the age of assets, which affects decisions in this way:
- If the on-premises, owned assets are approaching end of life, it might be a good time to consider private or public cloud options for some of the assets.
- If the assets are relatively new, taking an asset write-down will only add extra costs to a cloud transformation. Newer systems should stay in place and be fully depreciated (“sweat the asset”).
- Sometimes refreshing old hardware with newer hardware will provide a lower total cost compared to the higher maintenance and environmental costs of keeping old gear.
Greenfield investment or transformation
When a new system or environment is being planned, hosting the greenfield environment in a public or private cloud is usually the lowest-cost option. This is especially true for quick-start development activities, where mean time to deliver a virtual machine (VM) is measured in minutes, not weeks.
For traditional systems, the transformation to the cloud can be an expensive, one-time investment. Not all applications or systems work well in the cloud. We tend to see high-cost cloud transformations where applications and systems not built for the cloud are moved there anyway, regardless of application needs.
Workload type
Data is the lifeblood of enterprise computing, and workloads are the means of dealing with the data. These workloads have a huge influence on cost decisions:
- Development/test. These systems often favor cloud because it can accommodate rapid deployment and the need to scale up and down.
- Internet of Things (IoT)/Analytics. This is a newer category that is poised to be cheaper in the cloud. Cloud vendors can offer data-lake and ingestion services for the back end, and front-end analytic tools in a turnkey package.
- Production applications. These are business critical and tend to be on-premises because performance requirements and network latency can be addressed. Also, processing and storage can be secured in a private facility and network.
- Virtual desktop infrastructure (VDI). VDI favors cloud or hosted cloud with many small operating systems and application servers.
- Database management systems (DBMS). These systems favor on-premises solutions because of security and performance requirements.
Planning horizon and growth over time
Short-term costs will differ from long-term costs, so it is relevant to understand the organization’s goals for cost improvement or cost measurement:
- Short-term perspectives tend to favor maintaining the status quo. Transformation or migration costs can be expensive and therefore need to be amortized over several years (see the sketch after this list).
- Assets that are needed only for a short time favor consumption clouds (private or public).
- For consumption solutions, it is important to factor in the contract term and onboarding time frame. Even though private and public clouds offer great flexibility and elasticity, there are usually minimum commitments in terms of infrastructure or contract terms.
- Data storage, backup and recovery almost always have separate commitments that need to be well understood contractually.
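As a simple illustration of the planning-horizon effect, the sketch below compares the cumulative cost of staying put with the cumulative cost of migrating when a one-time migration charge is paid up front. The run rates and migration cost are assumed for illustration only; the crossover point shifts with the actual savings.

```python
# Illustrative planning-horizon sketch: a one-time migration cost is paid up
# front, so the status quo looks cheaper over a short horizon. The crossover
# depends on the assumed run-rate savings; all figures are hypothetical.

MIGRATION_COST = 400_000        # one-time transformation/migration spend (assumed)
STATUS_QUO_RUN_RATE = 80_000    # monthly cost of keeping the current estate (assumed)
CLOUD_RUN_RATE = 65_000         # monthly cost after migration (assumed)

def cumulative_cost(months: int, run_rate: float, one_time: float = 0.0) -> float:
    return one_time + run_rate * months

for horizon in (12, 24, 36, 48, 60):
    stay = cumulative_cost(horizon, STATUS_QUO_RUN_RATE)
    move = cumulative_cost(horizon, CLOUD_RUN_RATE, MIGRATION_COST)
    winner = "status quo" if stay < move else "migrate"
    print(f"{horizon:2d} months: stay ${stay:,.0f}  migrate ${move:,.0f}  -> {winner}")
```

With these assumed rates the crossover arrives at roughly 27 months, which is why a 12- or 24-month view favors the status quo while a 3- to 5-year view favors migration.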
Infrastructure size and variability
- Very small and very uniform VM sizes (and therefore VM workloads) can favor a public cloud platform. When VM sizes and the resulting workloads cover a wide range, local capitalization or local private cloud solutions tend to have better cost metrics.
- Very large and mission-critical VMs tend to favor DIY or local private cloud processing. Usually, larger VMs with mission-critical workloads are performance sensitive, so adding remote site network latency increases performance-related costs.
- There is no such thing as an average or regular VM, so in calculating comparative costs, you need to understand the quantities and varieties of VMs in the target environment, such as how many VMs you have and how many terabytes of storage they use (see the sketch after this list).
- For storage only, you need to ask: What is the data type? What is the retention period? Is security or encryption required?
- Also consider workloads that depend on the underlying infrastructure. Applications that require a capability (e.g., storage replication) to operate have a role in the infrastructure decision process.
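The sketch below shows why the size distribution matters: two platforms with different per-size rates can trade places depending on the VM mix. Both the mix and the per-size rates here are hypothetical.

```python
# Blended unit cost from a VM size distribution. The size mix and the per-size
# monthly rates for the two platforms are hypothetical placeholders.

vm_mix = {                      # size: (count, platform A $/month, platform B $/month)
    "small":  (300,  60.0,   45.0),
    "medium": (150, 140.0,  130.0),
    "large":  ( 60, 420.0,  520.0),
    "xlarge": ( 15, 900.0, 1250.0),
}

def blended_unit_cost(platform: str) -> float:
    """Average monthly cost per VM, weighted by the size distribution."""
    idx = {"A": 1, "B": 2}[platform]
    total_vms = sum(row[0] for row in vm_mix.values())
    total_cost = sum(row[0] * row[idx] for row in vm_mix.values())
    return total_cost / total_vms

for platform in ("A", "B"):
    print(f"Platform {platform}: ${blended_unit_cost(platform):,.2f} per VM per month")
```

With this mix, platform B is cheaper for small VMs but more expensive overall because of the large and extra-large systems; shift the mix toward small VMs and the ranking flips.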
Help for IT planners
Through extensive modeling and thousands of customer interactions, DXC and Hitachi have observed patterns of cloud and local IT infrastructure costs. The following illustrates when each option is likely to demonstrate superior total cost over time.
When can a DIY/business-as-usual infrastructure win?
- When the customer’s current assets are 1 to 2 years old
- When the workload is critical, needs high performance and is well managed
- When data sovereignty and/or compliance requirements drive local protected assets
- When applications are input/output intensive, or when access rates remain high under a long-term retention plan
- When organic growth can keep up with the projected growth rate
- When usage of the asset is long term
When can a private managed cloud using converged infrastructure win?
- When the VMs are relatively diverse, especially with large storage requirements per VM, and the VMs tend to be large and extra large
- When there is a medium to long contract term (e.g., 3 – 5 years minimum)
When can a private cloud using hyperconverged infrastructure win?
- When the VMs are diverse but relatively smaller, especially with mixed or VDI systems
- When many small and elastic VMs are common in a development or testing environment
- When there is a medium to long contract term (e.g., 3 – 5 years minimum)
- When development and testing workloads are short term, and small VM platforms are used unpredictably
- When horizontal scale and growth factors play a larger role in the direction of the business
When can a public cloud win?
- When the workload is development or testing, and in certain circumstances when the workload is mixed
- When the ramp-up time would be long or the initial seed percentage would be low for a private cloud alternative
- When there is a high growth rate in workloads
- When the workload is cloud native
Cloud and IT infrastructure cost examples
Following are three cost examples for determining cloud and IT infrastructure options.
1. Capitalize, own and operate (Do it yourself/Business as usual)
Scenario: This large insurance company with branch offices nationwide has centralized processing for underwriting, payments and policy management. It is a large Oracle and SAP environment, with a customer relationship management (CRM) system for all client management. Most of the development and testing is outsourced offshore, so local development is minimal. The enterprise has 950 VMs (mostly large). (See Figure 1.)
Figure 1. VM configuration and distribution size for scenario 1
The current server and storage estate is roughly 3 years old. The data and application growth rates are a conservative 10 percent year-over-year. The client considered a 3-year private, managed utility option, and estimated the migration would take 18 – 24 months.
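A hypothetical framing of this comparison is sketched below. The per-VM run rates and migration charge are assumed for illustration and are not the inputs behind Figure 2; the sketch simply shows how the estate size, growth rate and a one-time migration enter a 3-year calculation.

```python
# Hypothetical framing of scenario 1: sweat the existing, partly depreciated
# assets versus a 3-year private managed utility with a migration charge.
# All rates are assumed for illustration only.

VMS = 950                     # current estate (from the scenario)
GROWTH = 0.10                 # year-over-year growth in data and applications
YEARS = 3

OWN_RATE_PER_VM = 95.0        # $/VM/month: maintenance, power, labor, refresh reserve (assumed)
UTILITY_RATE_PER_VM = 120.0   # $/VM/month: managed utility charge (assumed)
MIGRATION_COST = 1_500_000    # one-time migration over the first 18-24 months (assumed)

def vms_in_year(year: int) -> float:
    return VMS * (1 + GROWTH) ** year

own_total = sum(vms_in_year(y) * OWN_RATE_PER_VM * 12 for y in range(YEARS))
utility_total = MIGRATION_COST + sum(
    vms_in_year(y) * UTILITY_RATE_PER_VM * 12 for y in range(YEARS)
)

print(f"Own and operate : ${own_total:,.0f} over {YEARS} years")
print(f"Managed utility : ${utility_total:,.0f} over {YEARS} years")
```

Under these assumptions the owned estate comes out ahead, consistent with the direction of the results below.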
Results: The chief financial officer (CFO) was interested in sweating the current assets (2 to 3 more years of book value) and keeping the data and management local. The 3- to 4-year total cost estimate supported his view that owning, operating and managing the infrastructure, including some technology refreshes, was better for the company over the next few years. (See Figure 2.)
Figure 2. Relative total cost over the contract term (3 years)
2. Private cloud with converged or hyperconverged infrastructure
Scenario: This manufacturing company has a large, centralized, shared services IT department in a major European metro area. Centralized processing, storage and archiving are needed for computer-aided design/computer-aided engineering data, workflow and logistics. SAP HANA is one of the critical applications. VM sizes vary due to web hosting, development/testing, manufacturing cell management, configuration management, etc. (See Figure 3.) Disaster recovery and data protection services would be included in the final managed services utility proposal.
Figure 3. VM configuration and distribution size for scenario 2
The current server and storage estate is about 4 years old, and plans are being made to consider CAPEX alternatives. The data and application growth rates are an aggressive 25 percent year-over-year. The elastic nature of workloads may shift between lower-cost hyperconverged infrastructure and converged infrastructure platforms.
Results: With a wide range of systems and data storage, a high growth rate, and transformation planned for this client, a private managed cloud utility will provide the best results over the medium to long term. Asset tracking and flexible consumption will avoid CAPEX waste from buying ahead. Some flex-up and flex-down will allow departments to better track and be accountable for assets and costs. A flow-through chargeback system will enable better IT capacity planning and overall IT cost assignment.
Furthermore, the high performance, data sovereignty and data protection requirements can be maintained by keeping the data and databases within national borders. All critical assets can be on-premises in a secured environment. Colocation services can be provided for data protection and disaster recovery. Labor will be tailored to the functions the client wants to offload. (See Figures 4 and 5.)
Figure 4. Relative total cost over the contract term (4 years)
Figure 5. Total unit cost by VM size
3. Public cloud
Scenario: This media and entertainment content provider focuses its IT estate on applications development. Time to market is critical, with decentralized development teams and rapid game prototyping the core function of the application development group. Very small VMs are created, used and then torn down in weeks to support a very agile application development and prototyping business model. Annual IT budgets are difficult to forecast, given the elastic nature of workloads and contract developers located around the world.
Once applications have been through a development life cycle, they are placed in a controlled environment for long-term support and legal copyright protection. VM and storage sizes are small. The balance sheet and operating income are a better match for OPEX spending in IT. The platform and infrastructure commitments are typically 3 to 4 years, but the anticipated growth rate may drive other behaviors. Management tends to focus on short-term cost efficiencies.
Results: Purchasing and capitalizing infrastructure are not viable options. Data center space is limited, and investments are targeting applications, not infrastructure. Current assets will be extended until a cloud strategy is finalized. Current and new growth are best suited for a public cloud offering, where VM acquisition, setup, teardown and commitments best fit the development cycle. Year 1 unit costs favor public cloud options. (See Figure 6.)
Figure 6. Relative unit cost per average VM per month (year 1)
Over 3 to 5 years, the public cloud’s total cost per average VM is only slightly better than that of a private managed cloud. However, with the focus on near-term results, the Year 1 cost advantage leads management to favor the public cloud.
It’s about economics
Enterprises need to take a structured approach to IT and cloud economics in their planning process. The multidimensional analysis presented in this paper provides a general guideline for determining the optimal path from a cost perspective.
Enterprises also need to look beyond obvious costs and understand the multiyear total costs of available options. For example, cloud broker services can be effective, but they can also be expensive.
DXC can serve as an experienced trail guide by providing the tools, expertise and steady hand needed to embark on the cloud journey. DXC can review cost comparisons for specific options using the TCO tool developed by Hitachi to give you an objective view of what your projected costs will be. This will help you understand where you are headed and what to concentrate on, so you can focus on a more detailed platform analysis for options that make sense economically.
DXC has the experience and expertise to help guide you on your digital journey, from helping with cost analyses of cloud IT infrastructure options — both high-level and detailed — to designing, implementing and running the solutions to power your business.