Data Efficiency in the News

Hybrid Cloud: The new IT service platform?

| The Register

So. Hybrid cloud. Let’s start with a quick definition, courtesy in this case of TechTarget, which describes it as: “a cloud computing environment which uses a mixture of on-premises, private cloud and third-party, public cloud services with orchestration between the two platforms”. I like this particular definition as it sums it up nicely: note that by “private cloud” we mean an on-premise virtualised server and storage setup.

It’s not all in the cloud …

How many of us are so averse to on-premise computing that we’re determined to move everything out to the cloud? If we consider the desktop as an “on-premise” computing element then next to none of us: frankly I have a very hard time understanding why one might want to replace every desktop computer with a virtual desktop that sits somewhere in the cloud. But even if we’re excluding the desktop from the equation, it’s still a stretch moving everything to the cloud.

… but it’s not all in-house either.

If we look in the other direction, how many of us are so averse to cloud computing that we keep absolutely everything in-house? And I mean everything. Would you have an in-house DDoS protection system, for instance? Surely it’s better for DDoS to sit outside your network, as your ISP’s upstream pipe is likely to be fatter than yours and hence less easily saturated. Similarly, spam and malware protection for inbound email: there’s a lot to be said for buying an external service so that malware never gets near your own systems.

Let’s face it, it’ll be a mix

These examples could be considered infrastructure utilities, though, and not applications. So what about the apps? Well, it’s becoming increasingly attractive to host corporate email and HR systems in the cloud: running them internally takes a lot of kit, storage and effort. Similarly there are times (perhaps when an app requires a direct connection to an in-house system, or runs on an unusual – i.e. non-Intel/Linux/Windows – platform) when putting stuff in the cloud isn’t possible. Finally, disaster recovery (DR) in the cloud is increasingly popular – it’s usually a whole lot cheaper than paying for on-premise space in a second data centre.

Given that in the average case we’re likely to wind up with a hybrid cloud setup, then, what matters is that we consider the various items of glue that stick the components together.

It’s an “it”

The first essential consideration for any hybrid cloud setup is that you think and act as if it’s a single entity. So to the absolute maximum extent possible you should avoid thinking of “the on-premise elements” and “the cloud elements” and think of it as a single, integrated system that spans both worlds. This doesn’t mean that you’ll ever get to the stage where the setup looks completely like a single entity, but you should try to get as close as you can.

Summary

Looking back to the first sentence of this article, we wrote that hybrid cloud is: “a cloud computing environment which uses a mixture of on-premises, private cloud and third-party, public cloud services with orchestration between the two platforms”.

There’s one word in that definition that’s utterly crucial to a hybrid cloud setup: orchestration. The most important thing with hybrid cloud is that it acts, to the greatest possible extent, as a single entity that just happens to be spread between locations, providers and technologies. At the basic level you can make inroads into unifying the management of the elements – and you can help yourself achieve this by considering the range of providers and platforms you intend to use before you start to build (let’s not run up an Azure cloud and slap in a VMware on-premise infrastructure, for instance, because it’s going to cramp your style integration-wise).

Managing the lower layers won’t be entirely unified, but the cloud providers and the third party software suppliers will at least give you a leg-up and make it look like something other than a collection of unrelated islands.

But with sensible design and a decent network architecture, everything from the OS upwards can be made to look like a unified infrastructure to the server, OS and application guys – while the governance and compliance team look on happily.

 

FYI this is an abbreviated version of the source article. There are more details on Integration Tools, OS, VM, Security and Apps in The Register version… click on the READ MORE link below to get there.

Read more

Cloud IT Infrastructure Market Shows Sluggish Growth in the Short-Term

| tmcnet.com

A new report by research group International Data Corporation (IDC) entitled “Worldwide Quarterly Cloud IT Infrastructure Tracker” found that vendor revenue from the sales of cloud IT infrastructure products (server, storage and Ethernet switch), including public and private cloud, grew by 3.9 percent year over year to $6.6 billion in the first quarter of 2016. The relatively modest growth was due to contracting demand in the hyperscale public cloud sector.

IDC said market growth slowed not because of declining demand, but because of the comparison with atypically high growth in 2015, a situation expected to persist in the very near term, with hyperscale demand expected to return to higher deployment levels later this year.

“A slowdown in hyperscale public cloud infrastructure deployment demand negatively impacted growth in both public cloud and cloud IT overall,” said Kuba Stolarski, research director for Computing Platforms at IDC, in the news release. “Private cloud deployment growth also slowed, as 2016 began with difficult comparisons to 1Q15, when server and storage refresh drove a high level of spend and high growth. As the system refresh has mostly ended, this will continue to push private cloud and, more generally, enterprise IT growth downwards in the near term.”

The study found that the lion’s share of revenue in the cloud IT market is controlled by four companies: Hewlett Packard Enterprise, Cisco, Dell and EMC, all of which experienced revenue growth for the quarter. Market players NetApp, IBM, Lenovo and IBM Direct saw their cloud IT infrastructure revenue contract for the quarter.

 

Read more

Microsoft and Red Hat now working together to bring Azure Linux support to government

| WinBeta

Microsoft today announced an expansion of its partnership with Red Hat that gives government organizations additional migration options when moving their current Red Hat subscriptions to Microsoft Azure Government. Specifically, customers using Red Hat’s Cloud Access program can now move their Red Hat subscriptions from physical or on-premises systems to both the Microsoft Azure and Azure Government cloud platforms, according to a press release from Microsoft.

For Red Hat customers still hesitant about transitioning to either of Microsoft’s governmental cloud solutions, the company would like to remind them that “Microsoft Azure Government fulfills a broad range of platform level compliance standards critical to U.S. federal, Department of Defense (DoD) and state and local requirements.”

Microsoft is encouraging interested parties to register their newly purchased Red Hat subscriptions with Azure Government as soon as possible. Customers can also find step-by-step directions for migrating via Red Hat Cloud Access to Microsoft Azure Government on the RedHat.com site.

Read more

The Age of IT Efficiency Has Arrived

| datacenterpost.com

Change is hard to explain, especially when you are in the middle of it. Take the massive shift to cloud computing occurring today. For quarters on end, major infrastructure vendors were missing their numbers and claiming that they were merely experiencing “temporary slowdowns” or “regionally soft demand.” Rather than challenge these claims, the financial and industry analysts fell in line, suggesting turnarounds were “right around the corner.” But the turnarounds didn’t happen. While the infrastructure giants were sailing directly into the wind, Amazon quietly expanded Amazon Web Services (AWS), Microsoft reinvented itself, VMware repositioned, and open source heavyweights such as Red Hat realigned products and service offerings around hybrid cloud business models. The Hybrid Cloud exploded right before our eyes – and with it has come the Age of IT Efficiency.

IDC reports 82% of enterprises have a hybrid cloud strategy and are actively moving workloads to the cloud.  How did the shift to cloud happen so fast?  Simple.  Just follow the money.

Jeff Bezos of Amazon has said, “Your margin is my opportunity.” The shift to cloud occurred, and an entire ecosystem (not just Amazon) aligned to support it. Open, software-defined infrastructure vendors like Red Hat, SuSE and Canonical, white box hardware vendors such as Foxconn and Quanta, public cloud providers like Amazon, Microsoft and Google, and services companies like Mirantis emerged to provide low-cost, highly efficient compute, network and storage solutions. As a result, hybrid cloud demand surged. In fact, IDC estimates the hybrid cloud will account for 80% of IT spend by 2020. And owing to higher utilization rates, lower pricing, and greater density, hybrid cloud solutions cost a fraction of what proprietary hardware products do.

Hybrid cloud or more specifically open hybrid cloud is on the way to becoming the leading enterprise IT architecture, ushering in the Age of IT Efficiency.

Hybrid Cloud Changes Everything

The flight to IT efficiency started just 10 years ago with Amazon Web Services. Today, large public cloud services deliver extreme IT agility at significantly lower cost when compared to yesterday’s data centers.  Because of this flexibility and efficiency, many smaller organizations have moved entirely to the cloud to meet their day-to-day IT business needs.

While larger organizations use the public cloud for some projects, they face challenges around data proximity, security, and long-term project/investment lifecycles that make on-premises (and/or privately hosted) data center infrastructure the right fit for other applications.

As Fortune 1000 companies explore hybrid cloud, they discover that they can substantially lower their costs if they simply “do it like Amazon.” Like Amazon, they can build hybrid cloud data centers that derive efficiency from four key technologies: Virtualization, Open Source Software, White Box Hardware and Data Reduction.

Virtualization drives up utilization rates by supporting more workloads per server. The net effect is that cloud IT organizations are able to increase the density of their data centers, saving substantial costs in real estate while gaining a tremendous amount of operational elasticity. The same hardware used at 2 p.m. for one workload can be repurposed at 3 p.m. for another.
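As a quick illustration of the utilization math behind that claim (the workload counts and densities below are made-up assumptions, not figures from the article), consolidating many lightly loaded servers onto fewer virtualized hosts is what drives the density gain:

```python
# Toy consolidation math behind the utilization claim; the figures are
# illustrative assumptions, not numbers from the article.
import math

workloads = 200               # assumed number of application workloads to host
util_per_physical = 0.15      # assumed utilization of one-app-per-server boxes
vms_per_host = 20             # assumed safe VM density per virtualized host

physical_servers = workloads  # one workload per server in the old model
virtual_hosts = math.ceil(workloads / vms_per_host)

print(f"One-app-per-server: {physical_servers} boxes at ~{util_per_physical:.0%} utilization")
print(f"Virtualized:        {virtual_hosts} hosts running {vms_per_host} VMs each")
print(f"Footprint shrinks roughly {physical_servers / virtual_hosts:.0f}x")
```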

Open source software, and open collaboration via open source frameworks, has established a huge ecosystem of developers (spanning industries and academia) driving innovation in the massive scale-out infrastructures of the cloud data center. These projects are focused on scalability, reliability, security and manageability. OpenStack and Linux itself are two great examples of open source projects that contribute tremendously to cloud implementations.

The availability of commoditized “white box hardware” facilitates the cloud revolution. In the past, traditional IT environments required “branded hardware” to ensure IT had the reliability and performance it needed to operate. Today, as industry analyst Howard Marks of DeepStorage.net notes, “If you care about whose equipment is in your cloud… you’re doin’ it wrong!” Advancements in both commodity hardware components and software have enabled cloud IT organizations to use lower-cost white box hardware in the largest data centers on the planet. And every year the cost of those components drops as competitive market forces and technical efficiency gains drive better economics. This has enabled cloud data centers to build extremely cost-effective and efficient IT infrastructures.

The final frontier of the hyper-efficient data center is data reduction. These data centers combine fast direct-attached storage with modern object-based cloud storage (low-cost, bulk data storage). Software-defined hybrid cloud deployments benefit substantially from data reduction that combines inline deduplication, compression, and fine-grained thin provisioning to increase data center density and dramatically decrease compute and storage costs. The net result of increased density is that more data is stored in less space and consumes fewer resources, reducing total costs.

From the storage perspective, hybrid cloud data centers are moving to multi-petabyte scale. At that level, the savings aren’t just about spending less on HDD or flash. Instead, the big savings from data reduction are derived from increased data density. With data reduction, optimizing the density of existing data centers is simple, fast and far more compelling than bearing the cost of new data center space. This density increase also dramatically cuts the cost of power, cooling and administration. Once the infrastructure is optimized for hybrid clouds with virtualization, open source operating software and white box servers, the next step in efficiency is modular data reduction!
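As a back-of-the-envelope illustration of the density argument (the ratios and capacities below are assumptions, not figures from the article), here is roughly how deduplication and compression multiply what a fixed footprint can hold:

```python
# Back-of-the-envelope model of how data reduction raises data center density.
# The ratios and capacities below are illustrative assumptions only.

def effective_capacity_tb(raw_tb, dedup_ratio=2.0, compression_ratio=1.5):
    """Logical data a given raw capacity can hold after inline dedup and compression."""
    return raw_tb * dedup_ratio * compression_ratio

raw_capacity_tb = 500 * 20   # assumed: 20 racks at 500 TB raw each
logical_tb = effective_capacity_tb(raw_capacity_tb)

print(f"Raw capacity: {raw_capacity_tb:,} TB")
print(f"Logical data stored (2x dedup, 1.5x compression): {logical_tb:,.0f} TB")
print(f"Density gain in the same floor space: {logical_tb / raw_capacity_tb:.1f}x")
```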

The Data Center Density Challenge

High-density data centers are part and parcel of the new hybrid cloud infrastructure landscape. Data Center Journal’s What Does High Density Mean Today? points out the challenges of high-density data centers, including questions about power and cost. Gartner predicted that by 2015, 50% of data centers would have high-density zones enabled by high-density storage arrays and servers. Are we already there?

Data Center Journal’s Is Cloud Computing Changing Data Centers? describes the economic drivers behind the data density issue by discussing IT infrastructure budget limitations, the variable cost of today’s capital expenditures for data storage, and business agility needs. It also highlights the power and cooling challenges as data centers continually expand to meet data storage needs that are beginning to reach critical mass.

Recent research from Peerless Business Intelligence highlights the importance of data reduction to high density data centers.  In Cloud Data Center Economics: Efficiency Matters, Lynn Renshaw discusses the need to “rise above” hardware and the physics of space and power/cooling and to look at the bigger picture of data center costs on a square foot basis.  While data centers are reaching power and cooling density limits, the cost per square foot of building data centers continues to increase and is becoming prohibitive for most businesses.  Today, there are over a billion square feet of data centers. As we continue to store more information, consume more storage and processor cycles and utilize more power, there are limited physical options available to increase data center density.

Renshaw takes us to the obvious next step: leveraging software, specifically data reduction software, to store more data in less space, thereby reducing the square footage demand. As her “back of the napkin” sample calculation demonstrates, cloud data centers can realize substantial space savings by leveraging data reduction software. Her example shows how a 100,000 square foot facility can save over $74 million in costs. Data reduction software not only reduces the amount of data stored, it also lowers the number of storage arrays required and, as a result, the power/cooling costs and the square footage they consume in a data center.

Taking her thesis a step further, data reduction increases data center density and as a result reduces the need for data center construction!  At today’s costs of $3,000 a square foot, that’s a compelling argument!
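The excerpt doesn’t reproduce Renshaw’s actual model, so the sketch below only shows the general shape of such a “back of the napkin” calculation, with assumed inputs for the storage share of floor space and the reduction ratio:

```python
# Rough shape of a "back of the napkin" floor-space calculation.
# All inputs except the quoted $3,000/sq ft build cost are assumptions for
# illustration; Renshaw's own model may differ.

facility_sq_ft = 100_000     # total data center floor space
storage_share = 0.5          # assumed fraction of floor space occupied by storage
reduction_ratio = 2.0        # assumed overall data reduction ratio
cost_per_sq_ft = 3_000       # build cost cited in the article, $/sq ft

storage_sq_ft = facility_sq_ft * storage_share
freed_sq_ft = storage_sq_ft * (1 - 1 / reduction_ratio)
avoided_build_cost = freed_sq_ft * cost_per_sq_ft

print(f"Storage floor space freed: {freed_sq_ft:,.0f} sq ft")
print(f"Avoided construction cost: ${avoided_build_cost:,.0f}")
```

With these purely illustrative inputs the avoided build cost lands in the same ballpark as the $74 million figure quoted above, but the outcome swings entirely on the assumed storage share and reduction ratio.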

Renshaw states the obvious: “Cloud growth is inevitable, but let’s do it with a smaller footprint.”

Conclusion

IT infrastructure is at an inflection point and change is all around! We saw infrastructure giants under extreme business pressure from hyperscale cloud providers that grabbed market share because they delivered lower price points, simplicity and business agility. The Age of IT Efficiency had arrived.

Led by Amazon, “the cloud” evolved rapidly as a business option for data storage and compute. As a result, open software players such as Red Hat, Canonical, and Mirantis (to mention a few) rose in prominence and are seeing rapid growth because they deliver efficiency in cost and operation and higher data density.

The hybrid cloud is now the implementation of choice for IT infrastructure because the combination of data in the public cloud and on-premises creates a solution that delivers increased agility at the lowest cost. This has been enabled by white box hardware, virtualization software, open source operating software and data reduction software. IT infrastructure will be open, flexible and highly efficient as the Age of IT Efficiency is now upon us!

Read more

Software-Defined Storage meets Deduplication and Compression. Now available from LINBIT

| PR Newswire

NEW ORLEANS, July 25, 2016 /PRNewswire/ — Permabit Technology Corporation, the open source data reduction experts, and LINBIT, the market leader in open source High Availability (HA) Clustering and Geo-Clustering software, announced a partnership today at HostingCon 2016.

Together, Permabit’s VDO data reduction for Linux® and LINBIT’s DRBD® software allow enterprises to replicate data in a distributed storage cluster with high-speed data deduplication and compression, maximizing bandwidth efficiency. In LINBIT’s tested dataset, pairing DRBD with VDO reduced replication network and storage utilization by ~85% while increasing load by ~0.8. The full results are located here.

Since 2001, LINBIT has been developing and supporting the Linux replication software DRBD. In 2005, LINBIT created Geo-Clustering software called DRBD Proxy, primarily used for Disaster Recovery (DR) purposes. LINBIT’s HA and DR software works on any commodity hardware, and is now being used as a Software-Defined Storage (SDS) platform, perfect for cloud and virtualization environments. Since the DRBD software replicates data at the block level, users can replicate any filesystem, VM, or application that runs on Linux.

“For years, enterprises who were tired of paying proprietary SAN vendors have been switching to DRBD on commodity hardware for resilient storage. More recently, private and hybrid cloud buyers have been demanding open solutions to data storage,” said Greg Eckert, Business Development Manager for LINBIT. “By partnering with Permabit, our clients can use LINBIT’s Software-Defined Storage approach to beat both the performance and cost of proprietary hardware vendors.”

The leader in data reduction technology, Permabit Technology Corporation recently announced the latest release of its Virtual Data Optimizer (VDO) software, VDO 6 – the only modular data reduction solution available for the Linux block storage stack. VDO delivers the company’s patented deduplication, HIOPS Compression™ and thin provisioning in a commercial Linux software package for enterprise hybrid cloud data centers and cloud service providers.

“Permabit and LINBIT have worked together to ensure that VDO, the only complete data reduction solution for the Linux storage stack, works flawlessly with LINBIT’s DRBD to minimize bandwidth requirements for HA and DR,” said Louis Imershein, VP Product for Permabit. “The block level approach used by both solutions makes the combination of VDO and DRBD an ideal solution for handling DR in hybrid cloud deployments.”
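Neither press release shows how the block-level reduction works internally, so the snippet below is only a toy illustration of the general idea (fingerprint each block, store new blocks once, compress what is kept), not VDO’s or DRBD’s actual implementation:

```python
# Toy illustration of inline block-level deduplication and compression.
# This is NOT how VDO or DRBD is implemented; it only shows why reducing
# data at the block layer shrinks both what is stored and what must be
# replicated across the wire.
import hashlib
import zlib

def reduce_stream(data: bytes, block_size: int = 4096):
    seen = {}       # block fingerprint -> compressed block (stored once)
    layout = []     # logical layout expressed as a list of fingerprints
    for i in range(0, len(data), block_size):
        block = data[i:i + block_size]
        digest = hashlib.sha256(block).hexdigest()
        if digest not in seen:          # only previously unseen blocks are kept
            seen[digest] = zlib.compress(block)
        layout.append(digest)
    physical_bytes = sum(len(b) for b in seen.values())
    return layout, seen, physical_bytes

# A highly redundant sample stream: two 4 KiB patterns repeated many times.
sample = (b"A" * 4096) * 900 + (b"B" * 4096) * 100
_, unique_blocks, physical_bytes = reduce_stream(sample)
print(f"Logical size:  {len(sample):,} bytes")
print(f"Physical size: {physical_bytes:,} bytes after dedup + compression")
```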

To learn more about Permabit VDO Data Reduction software visit:
http://permabit.com/products-overview/albireo-virtual-data-optimizer-vdo/

To learn more about the DRBD software visit:
http://www.linbit.com/en/p/products/drbd9

About LINBIT
LINBIT creates the world’s fastest Software-Defined Storage. The company works with industry leaders from the storage and network sectors, designing next-generation, mission-critical infrastructures. Major cloud solutions providers, data center operators, OEM and ISV integrators and commercial enterprises employ LINBIT’s open source software DRBD to ensure High Availability and Geo-Clustering replication. LINBIT is privately-held and headquartered in Vienna, Austria and Portland, OR. Visit us at www.LINBIT.com/en/.

About Permabit:

Permabit pioneers the development of data reduction software that provides data deduplication, compression, and thin provisioning. Our innovative products enable customers to get to market quickly with solutions that cut effective cost, accelerate performance, and gain a competitive advantage. Just as server virtualization revolutionized the economics of compute, Permabit software is transforming the economics of storage today.

Permabit is headquartered in Cambridge, Massachusetts with operations in California, Korea and Japan. For more information, visit www.permabit.com.

Follow Permabit on Twitter and/or LinkedIn

Read more

RAIDIX 4.4 Data Storage System Raises the Bar for Enterprise Workflow…

| Press Release Services

Stansstad, Switzerland (PRWEB) July 19, 2016 – RAIDIX, a leading data storage solution provider, announces the official release of RAIDIX 4.4. The new edition further improves the performance of standard corporate workloads, such as database access and transactional operations.

What’s new in RAIDIX 4.4?

Random access optimization (RAO)

RAO is a brand-new feature that delivers significant performance gains and infrastructure savings for enterprise customers by using data deduplication. RAO may be applied to any particular volume. The functionality caters to random operations, such as database and transactional workloads.

Random access optimization enables fast resolution of business tasks and boosts data processing from enterprise applications (CRM, ERP, corporate email, etc.).

The technology builds on:
— Data deduplication for space economy and easy virtualization
— Thin provisioning of system resources to extend logical disk capacity.

Advanced redundancy

RAIDIX 4.4 revamps multi-path input/output (MPIO) by adding support for the built-in Microsoft DSM (device-specific module) in place of the previously used in-house DSM driver.

RAIDIX provides Standalone Storage Appliances as well as Scale-Out NAS / Shared Storage solutions. The Scale-Out edition scales exponentially while maintaining a single namespace. The system supports heterogeneous client OS via SAN and shares the same data via NAS. It provides full compatibility with third party software and operates without a hitch on a multitude of hardware configurations.

The RAIDIX product line is distributed through the global partner network. To request purchase guidelines, clarify business/technical objectives and locate the right partner in a specified area, contact the RAIDIX Sales Team at request(at)raidix(dot)com.

Read more

Global Backup as a Service Market 2016-2020 to grow at 27%

| tmcnet.com

The global backup as a service market is expected to grow at a CAGR of 27.5% during the period 2016-2020.

The report covers the present scenario and the growth prospects of the global backup as a service market for 2016-2020. It considers revenue generated from backup services offered on cloud platforms; services from online backup and cloud backup service providers are included in the market size.
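For a sense of what a 27.5% CAGR implies, a quick compounding calculation (assuming four compounding periods from a 2016 base; the excerpt does not state the base-year market size) looks like this:

```python
# How a 27.5% CAGR compounds over the 2016-2020 window described above.
# Assumes four compounding periods from a 2016 base; the base-year market
# size is not given in the excerpt, so only the growth multiple is shown.
cagr = 0.275
years = 4
growth_multiple = (1 + cagr) ** years
print(f"A {cagr:.1%} CAGR multiplies the market roughly {growth_multiple:.2f}x over {years} years")
```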

With the explosion in data growth, a large number of organizations have embraced the need for data replication as backup and for frequent data protection. Challenges such as server failure and mismanagement of large volumes of company data have forced organizations to look for a substitute for traditional backups, which are costly to maintain and require continuous monitoring.

A trend propelling market growth is the availability of cloud backup for virtual machines. Virtualization is now an integral part of IT infrastructure, and many enterprises have committed to pushing virtualization beyond the 50% mark over the next few years. The technology offers many business benefits, including rapid provisioning and better utilization of system resources. Many organizations are seeking to implement agentless solutions for the backup and recovery of virtual machines. However, technological limitations (as of 2015) have compelled enterprises to use separate solutions for backing up physical and virtual machines.

Read more

EMC Storage Hardware Sales To Remain Suppressed, Services To Drive Growth

| nasdaq.com

Over the last couple of years, EMC has witnessed a slowdown in its core information storage business, with its subsidiary VMware ( VMW ) driving much of the growth.

The net revenues generated by EMC’s information infrastructure segment, which includes product and services revenues for storage hardware, content management and information security, fell by 6% year over year to $3.4 billion in the March quarter. On the other hand, the combined revenues of VMware and Pivotal were up almost 7% to $1.7 billion in the same period. The trend is expected to continue through the June quarter given weak global spending on storage hardware. The company has not provided any guidance for 2016 due to the pending acquisition by Dell, which is likely to be completed by Q3 this year.

According to IDC-reported data, EMC’s share of the storage systems market has fallen considerably over the last couple of years, a trend consistent across most large storage vendors, including NetApp, HP Enterprise, Hitachi and IBM.

EMC’s share of the market stood at 24.9%, roughly 5 percentage points below early 2015 levels. EMC’s revenue decline in the March quarter this year outpaced the industry-wide decline.

Correspondingly, EMC’s Information Storage revenues have declined as a proportion of EMC’s net revenues over the last few years. We forecast EMC’s information storage revenues to decline from 73% of net revenues in 2011 to about 66% in 2016 and subsequently to around 60% in 2021. Comparatively, VMware’s contribution to EMC’s top line and operating profits has increased over the past few years.

Read more: http://www.nasdaq.com/article/emc-earnings-preview-storage-hardware-sales-to-remain-suppressed-services-to-drive-growth-cm649970

Read more

Cisco Mounting Storage Offensive With New SwiftStack Partnership And Turnkey Solution

| crn.com

With the Dell-EMC merger on the horizon, Cisco is integrating its Metapod OpenStack-based private cloud software with SwiftStack 4.0 – the vendor’s newest open-source Swift object platform featuring integrated load balancing and improved metadata searches. The alliance marks Cisco’s first object storage technology partnership.

SwiftStack is providing object storage technology for data-centric workloads that directly integrate with Cisco Metapod and Cisco’s Unified Computing System (UCS). For channel partners, the solution can be delivered either as a managed service with Cisco Metapod or self-managed.

Cisco said the solution aligns with new data storage architectures and usage models as organizations are consuming more data and new devices are becoming connected through the Internet of Things (IoT).

“Cisco is working with SwiftStack to provide enterprises with a low-risk solution for scaling compute and storage on-premises for unstructured data workloads, as well as new storage consumption models for the Internet of Things,” said Alan Waldman, vice president of product development at Cisco, in a blog post Tuesday.

Included in the new SwiftStack 4.0 is a series of data migration tools, the ability to integrate file and object storage, and an optional desktop client for Windows or Mac environments, allowing users to pull data from storage to their laptops for sync and share without the need for third-party applications.

In an interview with CRN in May, Cisco CEO Chuck Robbins said his strategy around storage is to form strategic partnerships with storage vendors, rather than through acquisitions in the space.

“Our commitment is still to these partnerships because I believe that’s what our customers would like to see us do,” said Robbins. “At any point in the future, if the customer feedback is that we need to do something differently, then we’ll take that assessment and look at it. But right now I think we’re very pleased and the partnership model seems to be working.”

 

Read more

Amazon Simple Storage Service spurs on-premises storage

| searchstorage.techtarget.com

Amazon Web Services celebrated its 10th anniversary in March of this year. As it closes in on becoming a $10-billion-a-year-run-rate enterprise IT juggernaut, the cloud computing platform has changed the IT landscape forever. Not only does Amazon Web Services (AWS) remain the largest infrastructure as a service/platform as a service (IaaS/PaaS) cloud platform on the market, its growth rate has been more than double that of some of the biggest IT companies in history.

In response to this success, major system vendors such as Cisco, Dell/EMC, HP and IBM are rapidly developing private cloud infrastructures as on-premises alternatives to AWS. The goal is to make infrastructure easier to utilize and manage for customers demanding cloud-like ease of use with public cloud-like prices.

Likewise, storage vendors must do their part to improve ease of use and lower costs. That’s because much of the success of AWS has been due to Amazon Simple Storage Service (S3) leading the way. S3 consists of a set of object storage API calls available via the public cloud that enable any application to store and retrieve data objects from AWS.
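For a sense of what “a set of object storage API calls” means in practice, here is a minimal sketch using the standard boto3 SDK; the bucket, key and endpoint names are placeholders:

```python
# Minimal sketch of the S3 object API via the standard boto3 SDK.
# Bucket and key names are placeholders; S3-compatible on-premises systems
# are typically addressed with the same calls by pointing endpoint_url at
# the local storage target instead of AWS.
import boto3

s3 = boto3.client("s3")  # e.g. boto3.client("s3", endpoint_url="https://s3.onprem.example") for an S3-compatible target

s3.put_object(Bucket="example-bucket", Key="reports/q1.csv", Body=b"object payload")

response = s3.get_object(Bucket="example-bucket", Key="reports/q1.csv")
print(response["Body"].read())
```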

In order for Amazon Simple Storage Service to take off as an on-premises storage protocol, storage hardware and software vendors must continue to drive down the economic crossover point at which the capital and operational costs of deploying S3-compatible on-premises storage fall below those of the AWS alternative. That is, there will be a point, similar to the decision of whether to rent or own your home, at which it is less expensive to own your storage system.

The three critical factors that drive AWS S3 costs are retention time, frequency of access and quantity of data. S3 is very cost-effective for relatively static data, with prices ranging from $0.03-$0.07 per GB per month depending on the frequency of access and amount of data to be transferred. Where it gets expensive is when a customer exceeds limits on either the frequency of access or the amount of data transferred out of the cloud. On-premises S3-compatible vendors are claiming they can provide storage in the range of $0.01 per GB per month, but that is based on buying 75 TB upfront and amortizing that cost over three years. So it’ll be critical that these on-premises vendors continue to drive down the economic crossover point and also provide analytical tools so customers can easily evaluate when it is better to own versus rent storage.
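Using only the per-GB figures quoted above, a first-pass rent-versus-own comparison might look like the sketch below; real evaluations need the analytical tools the article calls for, since access frequency and data egress can change the picture completely.

```python
# Rough rent-vs-own comparison using the per-GB figures quoted above.
# It ignores egress and request fees, power, cooling and staff costs, which
# can dominate in practice; treat it purely as a sketch of the crossover idea.

capacity_gb = 75 * 1000       # the 75 TB upfront purchase cited above
months = 36                   # three-year amortization window

cloud_rate_gb_month = 0.03    # low end of the quoted S3 range, $/GB/month
onprem_rate_gb_month = 0.01   # vendors' claimed amortized on-premises cost

cloud_cost = capacity_gb * cloud_rate_gb_month * months
onprem_cost = capacity_gb * onprem_rate_gb_month * months

print(f"Renting (cloud) over 3 years:       ${cloud_cost:,.0f}")
print(f"Owning (on-premises) over 3 years:  ${onprem_cost:,.0f}")
```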

On-premises S3-compatible storage won’t fully supplant cloud storage, of course, as there will always be use cases where cloud products are more attractive. These would include startup businesses without on-premises data centers, for example, or those organizations that need, but have not yet invested in, globally distributed data centers.

 

Read more