
Data Center Optimization: How to Do More Without More Money

| Data Center Knowledge

Data centers are pushing the boundaries of the possible, using new paradigms to operate efficiently in an environment that continually demands more power, more storage, more compute capacity… more everything. Operating efficiently and effectively in the land of “more” without more money requires increased data center optimization at all levels, including hardware and software, and even policies and procedures.

Although cloud computing, virtualization and hosted data centers are popular, most organizations still have at least part of their compute capacity in-house. According to a 451 Research survey of 1,200 IT professionals, 83 percent of North American enterprises maintain their own data centers. Only 17 percent have moved all IT operations to the cloud, and 49 percent use a hybrid model that integrates cloud or colocation hosts into their data center operations.

The same study says most data center budgets have remained stable, although the heavily regulated healthcare and finance sectors are increasing funding throughout data center operations. Among enterprises with growing budgets, most are investing in upgrades or retrofits to enable data center optimization and to support increased density.

As server density increases and the data center footprint shrinks, any gains may be taken up by the additional air handling and power equipment, including uninterruptible power supplies and power generators. In fact, data center energy usage is expected to increase by 81 percent by 2020, according to CIO magazine.

Often, identifying and decommissioning unused servers during a data center optimization project is a challenge, along with right-sizing provisioning.

Virtualization makes it easy to spin up resources as needed, but it also makes tracking those resources harder. The result is that unused servers may be running because no one is certain they’re not being used. A study by the Natural Resources Defense Council and Anthesis reports that up to 30 percent of servers are unused, but still running.

A similar principle extends to storage. While data deduplication (removing duplicate files) is widely used, over-crowded storage remains an issue for small to medium-sized enterprises (SMEs). Deduplication can free much-needed storage space. For example, data deduplication along with compression can shrink data storage consumption by up to 85%. This not only addresses the budget issues mentioned above but also helps with data density, much like the server density mentioned earlier. Imagine saving money with less storage while increasing your data density at the same time. Looks like a win-win!
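
As a rough illustration of how such an 85% figure can arise (the 3:1 deduplication and 2:1 compression ratios below are illustrative assumptions, not measurements from the article):

```python
# Deduplication and compression ratios multiply; 3:1 x 2:1 gives roughly the 85% savings cited above.
def physical_capacity(raw_tb, dedupe_ratio, compression_ratio):
    """Physical capacity needed after data reduction."""
    return raw_tb / (dedupe_ratio * compression_ratio)

raw_tb = 100                              # logical data written by applications
physical_tb = physical_capacity(raw_tb, dedupe_ratio=3.0, compression_ratio=2.0)
savings_pct = 100 * (1 - physical_tb / raw_tb)

print(f"{raw_tb} TB logical -> {physical_tb:.1f} TB physical ({savings_pct:.0f}% saved)")
# 100 TB logical -> 16.7 TB physical (83% saved)
```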

If data center optimization is concerned with saving money, managers should also examine their purchasing programs. NaviSite looked for cost efficiencies within volume projects, examined large commodity items like cabinets, racks, cabling and plug strips, and eliminated middlemen whenever possible. For big purchases, it went directly to manufacturers in China and sought out innovative young technology vendors, working with them to design specifications that significantly lowered the price.

Data center optimization, clearly, extends beyond hardware to become a system-wide activity. It is the key to providing more power, more capacity and more storage without requiring more money.

* This article is quite long; you may want to read the full source article, which can be found by clicking on the link below:

 

Read more


Global Cloud Storage to Grow at a CAGR of 25% Through 2023

| biotech.einnews.com

In this rapidly changing world of technology, the cloud storage market is gaining immense popularity owing to its ability to integrate easily with an enterprise’s existing infrastructure. Cloud storage gateway solutions provide features such as encryption and data reduction technology (compression and data deduplication), adding cost reduction and security to the data. They also allow rapid transfer of data to the cloud, since the data is reduced and network traffic is minimized.

Compared to other regions, the cloud storage market in North America is expected to witness significantly healthy growth and to account for the highest market share throughout the forecast period. The U.S. and Canada are anticipated to drive that growth owing to the presence of a large number of established cloud storage solution providers. In addition, the region has well-established infrastructure and higher internet penetration. Moreover, increasing adoption of cloud storage by small and medium enterprises is expected to be a major factor in the growth of the cloud storage market.

The cloud storage market is growing rapidly at a CAGR of over 25% and is expected to reach approximately USD 104 billion by the end of the forecast period.

Read more


Federal Agencies Optimize Data Centers by Focusing on Storage using Data Reduction

| fedtechmagazine.com

In data centers, like any piece of real estate, every square foot matters.

“Any way we can consolidate, save space and save electricity, it’s a plus,” says the State Department’s Mark Benjapathmongkol, a division chief of the agency’s Enterprise Server Operation Centers.

In searching out those advantages, the State Department has begun investing in solid-state drives (SSDs), which provide improved performance while occupying substantially less space in data centers.

In one case, IT leaders replaced a disk storage system with SSDs and gained almost three racks worth of space, Benjapathmongkol says. Because SSDs are smaller and denser than hard disk drives (HDDs), IT staff don’t need to deploy extra hardware to meet speed requirements, resulting in massive space and energy savings.

Options for Simplifying Storage Management

Agencies can choose from multiple technology options to more effectively and efficiently manage their storage, says Greg Schulz, founder of independent analyst firm Server StorageIO. These options include: SSDs and cloud storage; storage features such as deduplication and compression, which eliminate redundancies and store data using less storage; and thin provisioning, which better utilizes available space, Schulz says.

Consider the Defense Information Systems Agency. During the past year, the combat support agency has modernized its storage environment by investing in SSDs. Across DISA’s nine data centers, about 80 percent of information is stored on SSD arrays and 20 percent is running on HDDs, says Ryan Ashley, DISA’s chief of storage.

SSDs have allowed the agency to replace every four 42U racks with a single 42U rack, resulting in 75 percent savings in floor space as well as reduced power and cooling costs, he says.

Deduplication Creates Efficiencies

Besides space savings and the fact that SSDs are faster than HDDs, SSDs bring additional storage efficiencies. These include new management software that automates tasks, such as provisioning storage when new servers and applications are installed, Ashley says.

The management software also allows DISA to centrally manage storage across every data center. In the past, the agency used between four and eight instances of management software in individual data centers.

“It streamlines and simplifies management,” Ashley says. Automatic provisioning reduces human error and ensures the agency follows best practices, while central management eliminates the need for the storage team to switch from tool to tool, he says.

DISA also has deployed deduplication techniques to eliminate storing redundant copies of data. IT leaders recently upgraded the agency’s backup technology from a tape system to a disk-based virtual tape library. This type of approach can accelerate backup and recovery and reduce the amount of hardware needed for storage.

It also can lead to significant savings because DISA keeps backups for several weeks, meaning it often owns multiple copies of the same data. But thanks to deduplication efforts, the agency can store more than 140 petabytes of backup data with 14PB of hardware.
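
Those figures work out to an effective reduction of roughly 10:1, which is plausible when most blocks repeat from one backup to the next across a multi-week retention window. A minimal sketch of that arithmetic (the full-backup size, retention window and weekly change rate are illustrative assumptions, not DISA figures):

```python
# Hypothetical backup-retention arithmetic; the sizes and change rate are assumptions.
full_backup_pb = 10             # size of one full backup
backups_retained = 14           # weekly backups kept for several weeks
weekly_change_rate = 0.03       # fraction of blocks that are new each week

logical_pb = full_backup_pb * backups_retained
physical_pb = full_backup_pb * (1 + weekly_change_rate * (backups_retained - 1))

print(f"logical: {logical_pb} PB, physical after dedupe: {physical_pb:.1f} PB")
print(f"effective reduction: {logical_pb / physical_pb:.1f}:1")
# logical: 140 PB, physical after dedupe: 13.9 PB, effective reduction: 10.1:1
```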

“It was a huge amount of floor space that we opened up by removing thousands of tapes,” says Jonathan Kuharske, DISA’s deputy of computing ecosystem.

Categorize Data to Go Cloud First

To comply with the government’s “Cloud First” edict, USAID began migrating to cloud services, including infrastructure and software services, about seven years ago.

Previously, USAID managed its own data centers and tiered its storage. But the agency moved its data to cloud storage three years ago, Gowen says, allowing USAID to provide reliable, cost-effective IT services to its 12,000 employees across the world. The agency, which declined to offer specific return on investment data, currently uses a dozen cloud providers.

“We carefully categorize our data and find service providers that can meet those categories,” says Gowen, noting categories include availability and security. “They just take care of things at an affordable cost.”

For its public-facing websites, the agency uses a cloud provider that has a content distribution network and can scale to handle sudden spikes in traffic.

In late 2013, a typhoon lashed the Philippines, killing at least 10,000 people. In the days following the disaster, President Obama announced that USAID was sending supplies, including food and emergency shelter. Because the president mentioned USAID, about 40 million people visited the agency’s website. If USAID had hosted its own site, it would have crashed. But the cloud service provider handled the traffic, Gowen says.

“Our service provider can scale instantaneously to 40 million users, and when visitors drop off, we scale back,” he says. “It’s all handled.”

 

Such transitions are becoming commonplace. Improving storage management is a pillar of the government’s effort to optimize data centers. To meet requirements from the Federal Information Technology Acquisition Reform Act (FITARA), the Data Center Optimization Initiative requires that agencies transition to cost-effective infrastructure.

While agencies are following different paths, the result is nearly identical: simpler and more efficient storage management, consolidation, increased reliability, improved service and cost savings. The U.S. Agency for International Development, for example, has committed to cloud storage.

“Our customers have different needs. The cloud allows us to focus on categorizing our data based on those needs like fast response times, reliability, availability and security,” says Lon Gowen, USAID’s chief strategist and special advisor to the CIO. “We find the service providers that meet those category requirements, and then we let the service providers focus on the details of the technology.”

To read the complete article, click on the link below:

 

Read more


Cloud Economics drive the IT Infrastructure of Tomorrow

| Welcome to Disaster Recovery Journal

The cloud continues to dominate IT as businesses make their infrastructure decisions based on cost and agility. Public cloud, where shared infrastructure is paid for and utilized only when needed, is the most popular model today. However, more and more organizations are addressing security concerns by creating their own private clouds. As businesses deploy private cloud infrastructure, they are adopting techniques used in the public cloud to control costs. Gone are the traditional arrays and network switches of the past, replaced with software-defined data centers running on industry standard servers.

Efficiency features make the cloud model more effective by reducing costs and increasing data transfer speeds. One such feature, which is particularly effective in cloud environments, is inline data reduction. This is a technology that can be used to lower the costs of data in flight and at rest. In fact, data reduction delivers unique benefits to each of the cloud deployment models.

Public Clouds

The public cloud’s raison d’etre is its ability to deliver IT business agility, deployment flexibility and elasticity. As a result, new workloads are increasingly deployed in public clouds. Worldwide public IT cloud service revenue in 2018 is predicted to be $127B.

Data reduction technology minimizes public cloud costs. For example, deduplication and compression typically cut capacity requirements of block storage in enterprise public cloud deployments by up to 6:1.  These savings are realized in reduced storage consumption and operating costs in public cloud deployments.   

Consider AWS costs when employing data reduction:

If you provision 300 TB of EBS General Purpose SSD (gp2) storage for 12 hours per day over a 30-day month in a region that charges $0.10 per GB-month, you would be charged $15,000 for the storage.

With data reduction, that monthly cost of $15,000 would be reduced to $2,500. Over a 12-month period, you would save $150,000. Capacity planning is a simpler problem when the data is one-sixth its former size. Bottom line: data reduction increases agility and reduces the cost of public clouds.
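
As a quick sanity check of those numbers (the prices and the 6:1 ratio are the ones quoted above; the half-month proration reflects the 12-hours-per-day provisioning):

```python
# Reproduce the EBS cost example above: 300 TB gp2, 12 h/day, $0.10 per GB-month.
gb_provisioned = 300 * 1000          # 300 TB expressed in GB
price_per_gb_month = 0.10
fraction_of_month = 12 / 24          # provisioned 12 hours per day

monthly_cost = gb_provisioned * price_per_gb_month * fraction_of_month
reduced_cost = monthly_cost / 6      # 6:1 dedupe + compression
annual_savings = (monthly_cost - reduced_cost) * 12

print(f"monthly: ${monthly_cost:,.0f} -> ${reduced_cost:,.0f} with 6:1 reduction")
print(f"annual savings: ${annual_savings:,.0f}")
# monthly: $15,000 -> $2,500 with 6:1 reduction; annual savings: $150,000
```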

One data reduction application that can readily be applied in the public cloud is Permabit’s Virtual Data Optimizer (VDO), a pre-packaged software solution that installs and deploys in minutes on Red Hat Enterprise Linux and Ubuntu LTS Linux distributions. To deploy VDO in Amazon AWS, the administrator provisions Elastic Block Storage (EBS) volumes, installs the VDO package into their VMs and applies VDO to the block devices representing their EBS volumes. Since VDO is implemented in the Linux device mapper, it is transparent to the applications installed above it.

As data is written out to block storage volumes, VDO applies three reduction techniques:

  1. Zero-block elimination uses pattern matching techniques to eliminate 4 KB zero blocks

  2. Inline Deduplication eliminates 4 KB duplicate blocks

  3. HIOPS Compression™ compresses remaining blocks 


This approach results in remarkable 6:1 data reduction rates across a wide range of data sets. 
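
A toy sketch of that three-stage flow at the 4 KB block level may help make the mechanism concrete (this is an illustration of the concept only, not Permabit's implementation; zlib stands in here for HIOPS Compression):

```python
# Toy model of zero-block elimination -> inline dedupe -> compression on 4 KB blocks.
# Illustrative only: VDO itself runs in the Linux device mapper, not in user space.
import hashlib
import zlib

BLOCK = 4096

def reduce_blocks(blocks):
    """Return the bytes actually stored after the three reduction stages."""
    seen = set()
    stored = 0
    for block in blocks:
        if block == b"\x00" * BLOCK:              # 1. zero-block elimination
            continue
        digest = hashlib.sha256(block).digest()
        if digest in seen:                        # 2. inline deduplication
            continue
        seen.add(digest)
        stored += len(zlib.compress(block))       # 3. compress the remaining blocks
    return stored

# Synthetic, highly redundant data; real workloads reduce far less (the article cites ~6:1).
blocks = [b"\x00" * BLOCK] * 50 + [b"A" * BLOCK] * 40 + [bytes([i + 1]) * BLOCK for i in range(10)]
logical = len(blocks) * BLOCK
physical = reduce_blocks(blocks)
print(f"logical: {logical} bytes, stored after reduction: {physical} bytes")
```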

Private Cloud

Organizations see similar benefits when they deploy data reduction in their private cloud environments. Private cloud deployments are selected over public ones because they offer the increased flexibility of the public cloud model while keeping privacy and security under the organization’s own control. IDC predicts $17.2B in infrastructure spending for private cloud in 2017, including on-premises and hosted private clouds.

One problem that data reduction addresses for the private cloud is that, when implementing private cloud, organizations can get hit with the double whammy of hardware infrastructure costs plus annual software licensing costs. For example, Software Defined Storage (SDS) solutions are typically licensed by capacity and their costs are directly proportional to hardware infrastructure storage expenses. Data reduction decreases storage costs because it reduces storage capacity consumption. For example, deduplication and compression typically cut capacity requirements of block storage in enterprise deployments by up to 6:1 or approximately 85%.

Consider a private cloud configuration with a 1 PB deployment of storage infrastructure and SDS. Assuming a current hardware cost of $500 per TB for commodity server-based storage infrastructure with datacenter-class SSDs and a cost of $56,000 per 512 TB for the SDS component, users would pay $612,000 in the first year. Because software subscriptions are annual, over three years you would spend $836,000 for 1 PB of storage, and over five years, $1,060,000.

By comparison, the same configuration with 6:1 data reduction will cost $176,667 for hardware and software over five years, resulting in $883,333 in savings. And that’s not including the additional substantial savings in power, cooling and space. As businesses develop private cloud deployments, they must be sure those deployments have data reduction capabilities, because the cost savings are compelling.
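
Spelling out that arithmetic (prices are the ones quoted above; 1 PB is treated as 1,000 TB, and the 6:1 case assumes the capacity-based SDS licensing shrinks along with the physical footprint, as the article's figures imply):

```python
# Five-year cost of 1 PB of SDS-based private cloud storage, with and without 6:1 reduction.
import math

HW_PER_TB = 500                 # commodity server storage with datacenter-class SSDs
SDS_PER_512TB_YEAR = 56_000     # SDS licensed per 512 TB, billed annually
YEARS = 5
RAW_TB = 1000                   # 1 PB

hardware = RAW_TB * HW_PER_TB
software = math.ceil(RAW_TB / 512) * SDS_PER_512TB_YEAR * YEARS
full_cost = hardware + software                 # $1,060,000 over five years

reduced_cost = full_cost / 6                    # 6:1 dedupe + compression
print(f"without reduction: ${full_cost:,.0f}")
print(f"with 6:1 reduction: ${reduced_cost:,.0f} (savings ${full_cost - reduced_cost:,.0f})")
# without reduction: $1,060,000; with 6:1 reduction: $176,667 (savings $883,333)
```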

When implementing private cloud on Linux, the easiest way to include data reduction is with Permabit Virtual Data Optimizer (VDO). VDO operates in the Linux kernel as one of many core data management services; it is a device-mapper target driver that is transparent to persistent and ephemeral storage services, whether the storage layers above provide object, block, compute, or file-based access.

VDO – Seamless and Transparent Data Reduction


The same transparency applies to the applications running above the storage service level. Customers using VDO today realize savings up to 6:1 across a wide range of use cases.

Some workflows that benefit heavily from data reduction include:

  • Logging: messaging, events, system and application logs

  • Monitoring: alerting, and tracing systems

  • Database: databases with textual content, NOSQL approaches such as MongoDB and Hadoop

  • User Data: home directories, development build environments

  • Virtualization and containers: virtual server, VDI, and container system image storage

  • Live system backups: used for rapid disaster recovery

With data reduction, cumulative cost savings can be achieved across a wide range of use cases, which is what makes data reduction so attractive for private cloud deployments.

Reducing Hybrid Cloud’s Highly Redundant Data

Storage is at the foundation of cloud services and almost universally data in the cloud must be replicated for data safety. Hybrid cloud architectures that combine on-premise resources (private cloud) with colocation, private and multiple public clouds result in highly redundant data environments. IDC’s FutureScape report finds “Over 80% of enterprise IT organizations will commit to hybrid cloud architectures, encompassing multiple public cloud services, as well as private clouds by the end of 2017.” (IDC 259840)

Depending on a single cloud storage provider for storage services can put SLA targets at risk. Consider the widespread AWS S3 storage errors that occurred on February 28th, 2017, when data was not available to clients for several hours. Because of that loss of data access, businesses may have lost millions of dollars in revenue. As a result, more enterprises today are pursuing a “Cloud of Clouds” approach, where data is redundantly distributed across multiple clouds for data safety and accessibility. Unfortunately, because of the data redundancy, this approach increases storage capacity consumption and cost.

That’s where data reduction comes in. In hybrid cloud deployments where data is replicated to the participating clouds, data reduction multiplies capacity and cost savings. If 3 copies of the data are kept in 3 different clouds, 3 times as much is saved. Take the private cloud example above where data reduction drove down the costs of a 1 PB deployment to $176,667, resulting in $883,333 in savings over five years. If that PB is replicated in 3 different clouds, the savings would be multiplied by 3 for a total savings of $2,649,999.
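
Following that reasoning, the per-cloud savings simply scale with the number of replicas (continuing the 1 PB example above):

```python
# Savings multiply with the number of cloud replicas (five-year figures from the example above).
savings_per_copy = 883_333      # savings on one reduced 1 PB private cloud deployment
replicas = 3                    # the same PB kept in three different clouds
print(f"total five-year savings: ${savings_per_copy * replicas:,.0f}")   # $2,649,999
```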

Permabit’s Virtual Data Optimizer (VDO) provides the perfect solution to address the multi-site storage capacity and bandwidth challenges faced in hybrid cloud environments. Its advanced data reduction capabilities have the same impact on bandwidth consumption as they do on storage, translating to a 6X reduction in network bandwidth consumption and associated cost. Because VDO operates at the device level, it can sit above block-level replication products to optimize data before it is written out and replicated.

Summary

IT professionals are finding that the future of IT infrastructure lies in the cloud. Data reduction technologies enable clouds (public, private and hybrid) to deliver on their promise of safety, agility and elasticity at the lowest possible cost, making cloud the deployment model of choice for IT infrastructure going forward.

Read more


Cloud Economics drive the IT Infrastructure of Tomorrow

| ITBusinessNet.com

Cloud Economics drive the IT Infrastructure of Tomorrow

The cloud continues to dominate IT as businesses make their infrastructure decisions based on cost and agility. Public cloud, where shared infrastructure is paid for and utilized only when needed, is the most popular model today. However, more and more organizations are addressing security concerns by creating their own private clouds. As businesses deploy private cloud infrastructure they are adopting techniques used in the public cloud to control costs. Gone are the traditional arrays and network switches of the past, replaced with software-defined data centers running on industry standard servers.

Efficiency features make the cloud model more effective by reducing costs and increasing data transfer speeds. One such feature, which is particularly effective in cloud environments, is inline data reduction. This is a technology that can be used to lower the costs of data in flight and at rest. In fact, data reduction delivers unique benefits to each of the cloud deployment models.

Public Clouds

The public cloud’s raison d’etre is its ability to deliver IT business agility, deployment flexibility and elasticity. As a result, new workloads are increasingly deployed in public clouds. Worldwide public IT cloud service revenue in 2018 is predicted to be $127B.

Data reduction technology minimizes public cloud costs. For example, deduplication and compression typically cut capacity requirements of block storage in enterprise public cloud deployments by up to 6:1.  These savings are realized in reduced storage consumption and operating costs in public cloud deployments.

Consider AWS costs when employing data reduction:

If you provision 300 TB of EBS General Purpose SSD (gp2) storage for 12 hours per day over a 30-day month in a region that charges $0.10 per GB-month, you would be charged $15,000 for the storage.

With data reduction, that monthly cost of $15,000 would be reduced to $2,500. Over a 12-month period, you would save $150,000. Capacity planning is a simpler problem when the data is one-sixth its former size. Bottom line: data reduction increases agility and reduces the cost of public clouds.

One data reduction application that can readily be applied in the public cloud is Permabit’s Virtual Data Optimizer (VDO), a pre-packaged software solution that installs and deploys in minutes on Red Hat Enterprise Linux and Ubuntu LTS Linux distributions. To deploy VDO in Amazon AWS, the administrator provisions Elastic Block Storage (EBS) volumes, installs the VDO package into their VMs and applies VDO to the block devices representing their EBS volumes. Since VDO is implemented in the Linux device mapper, it is transparent to the applications installed above it.

To read the complete article, click on the link below:

Read more


Using Data Reduction at the OS layer in Enterprise Linux Environments

| Stock Market

Enterprises and cloud service providers that have built their infrastructure around Linux should deploy data reduction in the operating system to drive costs down, say experts at Permabit Technology Corporation, the company behind Permabit Virtual Data Optimizer (VDO). Permabit VDO is the only complete data reduction software for Linux, the world’s most popular server operating system (OS). Permabit’s VDO software fills a gap in the Linux feature set by providing a cost-effective alternative to the data reduction services delivered as part of the two other major OS platforms – Microsoft Windows and VMware. IT architects are driven to cut costs as they build out their next-generation infrastructure with one or more of these OS platforms in public and/or private cloud deployments, and one obvious way to do so is with data reduction.

When employed as a component of the OS, data reduction can be applied universally, without the lock-in of proprietary solutions. By adding compression, deduplication, and thin provisioning to the core OS, data reduction benefits can be leveraged by any application or infrastructure service running on that OS. This ensures that savings accrue across the entire IT infrastructure, delivering TCO advantages no matter where the data resides. This is the future of data reduction – as a ubiquitous service of the OS.

“We’re seeing movement away from proprietary storage solutions, where data reduction was a key differentiated feature, toward OS-based capabilities that are applied across an entire infrastructure,” said Tom Cook, Permabit CEO.  “Early adopters are reaping financial rewards through reduced cost of equipment, space, power and cooling. Today we are also seeing adoption of data reduction in the OS by more conservative IT organizations who are driven to take on more initiatives with tightly constrained IT budgets.”

VDO, with inline data deduplication, HIOPS Compression®, and fine-grained thin provisioning, is deployed as a device-mapper driver for Linux. This approach ensures compatibility with a full complement of direct-attached/ephemeral, block, file and object interfaces. VDO data reduction is available for Red Hat Enterprise Linux and Canonical Ubuntu Linux LTS distributions.

Advantages of in-OS data reduction technology include:

  • Improved density for public/private/hybrid cloud storage, resulting in lower storage and service costs
  • Vendor independent to function across hardware running the target OS
  • Seamless data mobility between on-premise and cloud resources
  • Up to six times lower IT infrastructure OpEx
  • Transparent to end users accessing data
  • Requires no modifications to existing applications, file systems, virtualization features, or data protection capabilities

With VDO, these advantages are being realized on Linux today. VDO deployments have been completed (or are currently in progress) with large telecommunications companies, government agencies, financial services firms and IaaS providers who have standardized on Linux for their data centers. With data reduction in Linux, enterprises achieve vendor independence across all Linux based storage, increased mobility of reduced data and hyper scale economics. What an unbeatable combination!

Read more


Reduce Cloud’s Highly Redundant Data


Storage is the foundation of cloud services. All cloud services – delineated as scalable, elastic, on-demand, and self-service – begin with storage. Almost universally, cloud storage services are virtualized and hybrid cloud architectures that combine on-premise resources with colocation, private and public clouds result in highly redundant data environments.  IDC’s FutureScape report finds “Over 80% of enterprise IT organizations will commit to hybrid cloud architectures, encompassing multiple public cloud services,…

Read more


Hybrid Cloud Gains in Popularity, Survey Finds

| Light Reading

The hybrid model of cloud computing is gaining more popularity in the enterprise, as businesses move more workloads and applications to public cloud infrastructures and away from private deployments.

Those are some of the findings from RightScale’s annual “State of the Cloud” report, which the company released Wednesday. It’s based on interviews with 1,000 IT professionals, with 48% of them working in companies with more than 1,000 employees.

The biggest takeaway from the report is that enterprises and their IT departments are splitting their cloud dollars between public and private deployments, and creating demands for a hybrid approach.

“The 2017 State of the Cloud Survey shows that while hybrid cloud remains the preferred enterprise strategy, public cloud adoption is growing while private cloud adoption flattened and fewer companies are prioritizing building a private cloud,” according to a blog post accompanying the report. “This was a change from last year’s survey, where we saw strong gains in private cloud use.”

Specifically, 85% of respondents reported having a multi-cloud, hybrid strategy, and that’s up from the 82% who reported a similar approach in 2016. At the same time, private cloud adoption dropped from 77% in 2016 to 72% in 2017.

In the survey, 41% of respondents reported running workloads in public clouds, while 38% said they run workloads in private clouds. In large enterprises, those numbers reverse, with 32% of respondents running workloads in public clouds, and 43% running workloads within private infrastructures.

“It’s important to note that the workloads running in private cloud may include workloads running in existing virtualized environments or bare-metal environments that have been ‘cloudified,’ ” according to the report.

When it comes to adopting cloud technologies and services, there are fewer barriers and concerns this year compared to 2016. The lack of resources and expertise to implement a cloud strategy was still the top concern.

In addition, the report notes that at every cloud expertise level, the “Top 5 Challenges” indicate a substantial concern with managing costs. One vehicle that can help manage costs is to apply data reduction technologies to your cloud deployment. Permabit VDO can be applied to public and/or private clouds quickly and easily, enabling cost reductions of 50% or more in on-premise, in-transit and public cloud deployments.

Read more


Why Deduplication Matters for Cloud Storage

| dzone.com

Most people assume cloud storage is cheaper than on-premise storage. After all, why wouldn’t they? You can rent object storage for $276 per TB per year or less, depending on your performance and access requirements. Enterprise storage costs between $2,500 to $4,000 per TB per year, according to analysts at Gartner and ESG.

This comparison makes sense for primary data, but what happens when you make backups or copies of data for other reasons in the cloud? Imagine that an enterprise needs to retain 3 years of monthly backups of a 100 TB data set. In the cloud, this equates to 3.6 PB of raw backup data, or a monthly bill of over $83,000. That’s about $1 million a year before you even factor in any data access or retrieval charges.
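
The arithmetic behind that estimate is straightforward (using the roughly $23 per TB-month object storage price implied by the $276-per-TB-per-year figure above):

```python
# Worked estimate: 3 years of monthly full backups of a 100 TB data set, with no deduplication.
dataset_tb = 100
backups_retained = 36            # 3 years of monthly backups
price_per_tb_month = 23          # ~$276 per TB per year

raw_backup_tb = dataset_tb * backups_retained        # 3,600 TB = 3.6 PB
monthly_bill = raw_backup_tb * price_per_tb_month
print(f"raw backup data: {raw_backup_tb / 1000:.1f} PB")
print(f"monthly bill: ${monthly_bill:,.0f} (about ${monthly_bill * 12:,.0f} per year)")
# 3.6 PB, roughly $82,800 per month, about $1 million per year before access charges
```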

That is precisely why efficient deduplication is hugely important for both on-premise and cloud storage, especially when enterprises want to retain their secondary data (backup, archival, long-term retention) for weeks, months, and years. Cloud storage costs can add up quickly, surprising even astute IT professionals, especially as data sizes get bigger with web-scale architectures, data gets replicated, and they discover it can’t be deduplicated in the cloud.

The Promise of Cloud Storage: Cheap, Scalable, Forever Available

Cloud storage is viewed as cheap, reliable and infinitely scalable – which is generally true. Object storage like AWS S3 is available at just $23/TB per month for the standard tier, or $12.50/TB for the Infrequent Access tier. Many modern applications can take advantage of object storage. Cloud providers offer their own file or block options, such as AWS EBS (Elastic Block Storage) that starts at $100/TB per month, prorated hourly. Third-party solutions also exist that connect traditional file or block storage to object storage as a back-end.

Even AWS EBS, at $1,200/TB per year, compares favorably to on-premise solutions that cost 2-3 times as much and require high upfront capital expenditures. To recap, enterprises are gravitating to the cloud because the OPEX costs are significantly lower, there’s minimal up-front cost, and you pay as you go (vs. traditional storage, where you have to buy far ahead of actual need).

How Cloud Storage Costs Can Get Out of Hand: Copies, Copies Everywhere

The direct cost comparison between cloud storage and traditional on-premise storage can distract from managing storage costs in the cloud, particularly as more and more data and applications move there. There are three components to cloud storage costs to consider:

  • Cost for storing the primary data, either on object or block storage
  • Cost for any copies, snapshots, backups, or archive copies of data
  • Transfer charges for data

We’ve covered the first one. Let’s look at the other two.

Copies of data. It’s not how much data you put into the cloud — uploading data is free, and storing a single copy is cheap. It’s when you start making multiple copies of data — for backups, archives, or any other reason — that costs spiral if you’re not careful. Even if you don’t make actual copies of the data, applications or databases often have built-in data redundancy and replicate data (or in database parlance, a Replication Factor).

In the cloud, each copy you make of an object incurs the same cost as the original. Cloud providers may do some dedupe or compression behind the scenes, but this isn’t generally credited back to the customer. For example, in a consumer cloud storage service like DropBox, if you make a copy or ten copies of a file, each copy counts against your storage quota.

For enterprises, this means data snapshots, backups, and archived data all incur additional costs. As an example, AWS EBS charges $0.05/GB per month for storing snapshots. While the snapshots are compressed and only store incremental data, they’re not deduplicated. Storing a snapshot of that 100 TB dataset could cost $60,000 per year, and that’s assuming it doesn’t grow at all.

Data access. Public cloud providers generally charge for data transfer either between cloud regions or out of the cloud. For example, moving or copying a TB of AWS S3 data between Amazon regions costs $20, and transferring a TB of data out to the internet costs $90. Combined with GET, PUT, POST, LIST and DELETE request charges, data access costs can really add up.
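
Putting the per-unit prices quoted above together for the same 100 TB data set (the copy and transfer volumes here are hypothetical, chosen only to illustrate how the charges stack up):

```python
# Secondary-copy and transfer costs for 100 TB, using the per-unit prices quoted above.
dataset_tb = 100
dataset_gb = dataset_tb * 1000

snapshot_per_month = dataset_gb * 0.05       # EBS snapshots at $0.05 per GB-month
inter_region_copy = dataset_tb * 20          # $20 per TB copied between AWS regions
internet_egress = dataset_tb * 90            # $90 per TB transferred out to the internet

print(f"snapshot storage: ${snapshot_per_month:,.0f}/month (~${snapshot_per_month * 12:,.0f}/year)")
print(f"one inter-region copy: ${inter_region_copy:,.0f}, one full egress: ${internet_egress:,.0f}")
# snapshots ~$5,000/month (~$60,000/year); copy $2,000; egress $9,000
```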

Why Deduplication in the Cloud Matters

Cloud applications are distributed by design and are deployed on non-relational, massively scalable databases as a standard. In non-relational databases, most data is redundant before you even make a copy. There are common blocks and objects, and databases like MongoDB or Cassandra have a replication factor (RF) of 3 to ensure data integrity in a distributed cluster, so you start out with three copies.

Backups or secondary copies are usually created and maintained via snapshots (for example, using EBS snapshots as noted earlier). The database architecture means that when you take a snapshot, you’re really making three copies of the data. Without any deduplication, this gets really expensive.
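
To make that multiplication concrete (a hypothetical example; the replication factor, snapshot count and the with-dedupe footprint are assumptions for illustration):

```python
# Without dedupe, snapshots of a replicated NoSQL cluster store every replica of every copy.
logical_tb = 100
replication_factor = 3          # e.g. Cassandra or MongoDB with RF 3
snapshots_kept = 12             # a year of monthly snapshots

without_dedupe_tb = logical_tb * replication_factor * snapshots_kept
with_dedupe_tb = logical_tb * 1.5    # assumed: unique data plus modest change overhead

print(f"without dedupe: {without_dedupe_tb} TB, with dedupe: {with_dedupe_tb:.0f} TB")
```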

Today there are solutions to the public cloud deduplication and data reduction conundrum. Permabit VDO can be easily deployed in public and/or private cloud solutions. Take a look at the following blog from Tom Cook http://permabit.com/data-efficiency-in-public-clouds/ or, for the technical details, the one from Louis Imershein http://permabit.com/effective-use-of-data-reduction-in-the-public-cloud/. Both provide examples and details on why and how to deploy deduplication and compression solutions in a public cloud.

 

 

Read more


Why 2017 will belong to open source

| CIO News

A few years ago, open source was the less-glamorous and low-cost alternative in the enterprise world, and no one would have taken the trouble to predict what its future could look like. Fast-forward to 2016, many of us will be amazed by how open source has become the de facto standard for nearly everything inside an enterprise. Open source today is the primary engine for innovation and business transformation. Cost is probably the last reason for an organisation to go in for open source.

An exclusive market study conducted by North Bridge and Black Duck brought some fascinating statistics a few months ago. In the study titled “Future of Open Source”, about 90% of surveyed organisations said that open source improves efficiency, interoperability and innovation. What is even more significant is the finding that the adoption of open source for production environments outpaced the proprietary software for the first time – more than 55% leverage OSS for production infrastructure.

OpenStack will rule the cloud world
OpenStack has already made its presence felt as an enterprise-class framework for the cloud. An independent study, commissioned by SUSE, reveals that 81 percent of senior IT professionals are planning to move or are already moving to OpenStack Private Cloud. What is more, the most innovative businesses and many Fortune 100 businesses have already adopted OpenStack for their production environment.

As cloud becomes the foundation on which your future business will run, OpenStack gains the upper hand with its flexibility, agility, performance and efficiency. Significant cost reduction is another major consideration for organisations, especially large enterprises, because a proprietary cloud platform is excessively expensive to build and maintain, while OpenStack operations deliver baseline cost reductions. In addition, data reduction in an OpenStack deployment can further reduce operating costs.

Open source to be at the core of digital transformation
Digital transformation is, in fact, one of the biggest headaches for CIOs because of its sheer heterogeneous and all-pervading nature. With the data at the center of digital transformation, it is often impossible for CIOs to ensure that the information that percolates down is insightful and secure at the same time. They need a platform which is scalable, flexible, allows innovations and is quick enough to turn around. This is exactly what Open Source promises. Not just that, with the current heterogeneous environments that exist in enterprises, interoperability is going to be the most critical factor.

Technologies like Internet of Things (IoT) and SMAC (social, mobile, analytics and cloud) will make data more valuable and voluminous. The diversity of devices and standards that will emerge will make open source a great fit for enterprises to truly leverage these trends. It is surprising to know that almost all ‘digital enterprises’ in the world are already using open source platforms and tools to a great extent. The pace of innovation that open source communities can bring to the table is unprecedented.

Open source-defined data centers
A recent research paper from IDC states that 85 percent of the surveyed enterprises globally consider open source to be the realistic or preferred solution for migrating to software-defined infrastructure (SDI). IDC also recommends avoiding vendor lock-in by deploying open source solutions. Interestingly, many organisations seem to have already understood the benefits of open source clearly, with Linux adoption in data centers growing steadily at a pace of 15-20%.

The key drivers of SDI – efficiency, scalability and reliability at minimal investment – can be achieved only with the adoption of open source platforms. Open source helps enterprises be more agile in building, deploying and maintaining applications. In the coming days, open source adoption is going to be essential for achieving true ‘zero-downtime’ in software-defined infrastructure.

Open source will have an especially large role to play in the software-defined storage (SDS) space. It will help organisations overcome the current challenges associated with SDS. Open SDS solutions can scale infinitely without a need to refresh the entire platform or disrupt the existing functioning environment.

Data reduction can easily be added to SDS or OS environments with Permabit VDO. A simple plug-and-play approach that enables 2X or more storage reduction adds to the already efficient operations of open source deployments.

Open source to be decisive in enterprise DevOps journey
Today, software and applications have a direct impact on business success and performance. As a result, development, testing, delivery, and maintenance of applications have become very crucial. In the customer-driven economy, it is imperative for organisations to have DevOps and containerisation technologies to accelerate release cycles and improve the quality of applications.

Often, enterprises struggle to get the most out of DevOps model. The investment associated with replicating the production environments for testing the apps is not negligible. They also fail to ensure that the existing systems are not disturbed while running a testing environment within containers.

Industry analysts believe that microservices running in Docker-like containers, on an open and scalable cloud infrastructure, are the future of applications. OpenStack-based cloud infrastructures are going to be an absolute necessity for enterprises on a successful DevOps journey. Flexibility and interoperability apart, the open cloud allows the DevOps team to reuse the same infrastructure as and when containers are created.

In 2017, expect to see open source become the first preference for organisations that are at the forefront of innovation.

Read more