Articles on the evils of public cloud costs fill our newsfeeds and capture headlines on a daily basis. Cost is a significant concern among those already using the cloud and a major worry for those considering their first moves into it. Yet across all the pundits' prescriptions for addressing cost, from waste management to broker dealing to the new panacea of hybrid cloud, one key element of strategy is missing: recapturing the compute resource potential in the desktops and servers already part of an organization's IT infrastructure.
Under the constant noise of cloud messaging, we often lose sight of the fact that most 2017 IT spending went to hardware and software rather than cloud technologies, with desktops and laptops occupying the top two spots (source 1), a trend predicted to continue this year and next. Couple this with the fact that, for the oft-neglected PC, the CPU is more than 80% available 80% of the time (source 2), and that any GPU remains mostly unused for office applications, and the result is a massive, untapped compute resource pool that every organization is ignoring. Even servers are estimated to be only 45% utilized (source 3). This existing hardware pool represents the lowest-hanging fruit, or rather the fruit already in the basket, when it comes to IT budget cost savings. An average 1,000-person company with an estimated 500 desktops and 100 servers could see hundreds of thousands of dollars in saved cloud spend.
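To make that estimate concrete, here is a rough back-of-envelope sketch. The desktop and server counts and the utilization figures come from the article and its sources; the cores-per-machine and per-core-hour cloud price are illustrative assumptions, not vendor quotes:

```python
# Back-of-envelope estimate of annual cloud spend that recaptured
# on-premise compute could offset. Sizing assumptions are illustrative.

DESKTOPS = 500            # desktops in a ~1,000-person company (from the article)
SERVERS = 100             # in-house servers (from the article)

DESKTOP_CORES = 4         # assumed cores per desktop
SERVER_CORES = 16         # assumed cores per server

DESKTOP_IDLE = 0.80 * 0.80   # CPU >80% available, 80% of the time (source 2)
SERVER_IDLE = 1.0 - 0.45     # servers only ~45% utilized (source 3)

CLOUD_PRICE_PER_CORE_HOUR = 0.04  # assumed on-demand $/vCPU-hour
HOURS_PER_YEAR = 24 * 365

# Idle capacity, expressed as cloud-equivalent core-hours per year.
recaptured_core_hours = (
    DESKTOPS * DESKTOP_CORES * DESKTOP_IDLE
    + SERVERS * SERVER_CORES * SERVER_IDLE
) * HOURS_PER_YEAR

annual_offset = recaptured_core_hours * CLOUD_PRICE_PER_CORE_HOUR
print(f"Recaptured core-hours/year: {recaptured_core_hours:,.0f}")
print(f"Potential cloud spend offset: ${annual_offset:,.0f}/year")
```

Even with these conservative inputs, the idle capacity works out to roughly 2,100 cloud-equivalent cores, putting the potential offset well into the hundreds of thousands of dollars per year that the article cites.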
The savings come not only from being able to keep new workloads on-premises but also from being able to repatriate existing workloads out of the cloud and back in-house. As organizations traverse the cloud transformation journey described by 451 Research (source 4), repatriation reduces the expenditure curve irrespective of other cloud optimization efforts, and with a foundation of recapture in place, the cost impact of any repatriation effort is magnified. Together, recapturing and repatriating represent an evolution from a multi/hybrid-cloud approach to a multi-resource one, which would accelerate the pace of repatriation beyond the 20% of cloud users who have already moved one or more workloads out of a public cloud environment and the 10% planning to do so in the near future (source 5).
But this is not where the story ends. Not all cloud resources are purchased on-demand, as doing so is often cost prohibitive; reserved instances and volume discounts are cheaper, while spot pricing may not be practical for production. So when workloads are repatriated in-house, what is an organization to do about its existing cloud commitments? Resell them on a marketplace? Why do that when digital transformation is a continuous march onward, requiring ongoing legacy application replacement and new application experimentation, development, and deployment? Instead, repurpose them. Digital transformation is a process, not a destination, and increases in cloud expenditure over time should be expected. With the three r's, those increases will simply be smaller, and the cloud will be used only when it is the best option. Even reserved instances can be repurposed for the next project if production deployment of the current project demands it be in-house.
In school, the three r's were the solid underpinning for all that was to follow. Today is no different. Once adopted, the three r's of recapture, repatriation, and repurpose provide a dynamic, interconnected basis for ongoing cloud cost management. But this goes beyond cost: it lays the foundation for what should be the ultimate goal, optimal workload placement, meaning the right workload on the right resource for the right reasons. Cost is a big reason, but security, compliance, network latency, and application performance are among the other vectors that feed into the decision-making process.
Looking toward the future, as edge and IoT resources enter the mix, the continuum of compute resources on which workloads can be deployed stretches even further. A multi-resource, three r's strategy still holds true. Imagine the recapture potential a telecommunications, media, and entertainment company with tens of thousands of desktops, thousands of in-house servers, thousands of edge servers, and hundreds of thousands of cable boxes could bring to bear in saving cloud costs.
Source 1: 2017 Business IT Trends Annual Report: Increase in IT Spending for US Businesses, August 2017
Source 2: Characterizing and Evaluating Desktop Grids: An Empirical Study
Source 3: Quantifying Datacenter Inefficiency: Making the Case for Composable Infrastructure, IDC
Source 4: The Cloud Transformation Journey: Great Expectations Lead to a Brave New World, 451 Research, February 2018
Source 5: Voice of the Enterprise study, 451 Research, 2017
Kevin Hannah is Kazuhm's director of product operations, leading product delivery and support to ensure customer success with the company's application Platform-as-a-Service (APaaS) solution. Prior to Kazuhm, Kevin held a number of senior positions ensuring customer success both from within, as a vendor, and from without, as a customer adviser, including managing West Coast client research for AMR Research, now part of Gartner.
Kazuhm's LinkedIn: https://www.linkedin.com/company/kazuhm/
Kazuhm's Twitter: @Kazuhm_Co
Kazuhm's tag: #GYHOTC #GetYourHeadOutOfTheCloud