
Data Efficiency in Public Clouds


Public cloud deployments deliver agility, flexibility and elasticity, which is why new workloads are increasingly deployed in public clouds. Worldwide public IT cloud service revenue in 2018 is predicted to be $127B. It’s powerful to spin up a data instance instantaneously; however, managing workloads and storage still requires analysis, planning and monthly provisioning. It would be extremely advantageous if public cloud storage capacity could automatically grow and shrink to optimize utilization, but it can’t do that without IT intervention. IT operations need to focus on provisioning adequate resources while balancing performance and efficiency.


Data reduction technology simplifies this problem. For example, deduplication and compression typically cut the capacity requirements of block storage in enterprise deployments by up to 6:1. Some of these savings can be realized as reduced storage acquisition and operating costs, and some can be applied to provisioning additional headroom for rapidly growing storage requirements.


For example, consider AWS pricing.

If you provision 300 TB of EBS General Purpose SSD (gp2) storage for 12 hours per day over a 30-day month in a region that charges $0.10 per GB-month, you would be charged $15,000 for the storage.
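A back-of-the-envelope sketch of that calculation, in Python. It assumes decimal units (1 TB = 1,000 GB) and that EBS bills per GB-month prorated by the fraction of the month the volume is provisioned; the figures simply mirror the example above, not a live price quote:

```python
# EBS gp2 cost estimate: 300 TB provisioned 12 hours/day over a 30-day month
PRICE_PER_GB_MONTH = 0.10        # $/GB-month (example rate for the region above)
PROVISIONED_GB = 300 * 1000      # 300 TB in decimal units (1 TB = 1,000 GB)
HOURS_PROVISIONED = 12 * 30      # 12 hours/day over a 30-day month
HOURS_IN_MONTH = 24 * 30         # billing prorates by fraction of the month provisioned

monthly_cost = PROVISIONED_GB * PRICE_PER_GB_MONTH * (HOURS_PROVISIONED / HOURS_IN_MONTH)
print(f"Monthly EBS cost: ${monthly_cost:,.0f}")  # -> Monthly EBS cost: $15,000
```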


With data reduction, that monthly cost of $15,000 would be reduced to $2,500. Over a 12-month period you would save $150,000. Those savings can be used to provision more capacity, more hours, or additional IOPS. Further, capacity planning is a simpler problem when the data is 1/6th of its former size. Bottom line: data reduction increases the agility of public clouds.
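Extending the same sketch to the data-reduction case. The 6:1 ratio is the typical enterprise figure cited above; actual ratios vary by workload:

```python
# Effect of a 6:1 data reduction ratio on the $15,000/month example above
monthly_cost = 15_000.0
reduction_ratio = 6              # typical dedup + compression ratio for enterprise block storage

reduced_cost = monthly_cost / reduction_ratio
monthly_savings = monthly_cost - reduced_cost
annual_savings = monthly_savings * 12

print(f"Reduced monthly cost: ${reduced_cost:,.0f}")    # -> $2,500
print(f"Annual savings:       ${annual_savings:,.0f}")  # -> $150,000
```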

By: Tom Cook
