
Federal Agencies Optimize Data Centers by Focusing on Storage Using Data Reduction

| fedtechmagazine.com

In data centers, like any piece of real estate, every square foot matters.

“Any way we can consolidate, save space and save electricity, it’s a plus,” says the State Department’s Mark Benjapathmongkol, a division chief of the agency’s Enterprise Server Operation Centers.

In searching out those advantages, the State Department has begun investing in solid-state drives (SSDs), which provide improved performance while occupying substantially less space in data centers.

In one case, IT leaders replaced a disk storage system with SSDs and gained almost three racks worth of space, Benjapathmongkol says. Because SSDs are smaller and denser than hard disk drives (HDDs), IT staff don’t need to deploy extra hardware to meet speed requirements, resulting in massive space and energy savings.

Options for Simplifying Storage Management

Agencies can choose from multiple technology options to manage their storage more effectively and efficiently, says Greg Schulz, founder of independent analyst firm Server StorageIO. These options include SSDs and cloud storage; storage features such as deduplication and compression, which eliminate redundancies and store data in less space; and thin provisioning, which makes better use of available capacity, Schulz says.

Consider the Defense Information Systems Agency. During the past year, the combat support agency has modernized its storage environment by investing in SSDs. Across DISA’s nine data centers, about 80 percent of information is stored on SSD arrays and 20 percent is running on HDDs, says Ryan Ashley, DISA’s chief of storage.

SSDs have allowed the agency to replace every four 42U racks with a single 42U rack, resulting in 75 percent savings in floor space as well as reduced power and cooling costs, he says.

Deduplication Creates Efficiencies

Besides space savings and faster performance than HDDs, SSDs bring additional storage efficiencies. These include new management software that automates tasks, such as provisioning storage when new servers and applications are installed, Ashley says.

The management software also allows DISA to centrally manage storage across every data center. In the past, the agency used between four and eight instances of management software in individual data centers.

“It streamlines and simplifies management,” Ashley says. Automatic provisioning reduces human error and ensures the agency follows best practices, while central management eliminates the need for the storage team to switch from tool to tool, he says.

DISA also has deployed deduplication techniques to eliminate storing redundant copies of data. IT leaders recently upgraded the agency’s backup technology from a tape system to a disk-based virtual tape library. This type of approach can accelerate backup and recovery and reduce the amount of hardware needed for storage.

It also can lead to significant savings because DISA keeps backups for several weeks, meaning it often owns multiple copies of the same data. But thanks to deduplication efforts, the agency can store more than 140 petabytes of backup data with 14PB of hardware.

“It was a huge amount of floor space that we opened up by removing thousands of tapes,” says Jonathan Kuharske, DISA’s deputy of computing ecosystem.
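DISA's figures imply a reduction ratio of ten to one (140 PB of backups held in 14 PB of hardware). A back-of-the-envelope model shows how deduplicating repeated full backups yields ratios of that magnitude; the figures below are hypothetical, not DISA's actual workload:

```python
def dedup_backup_footprint(dataset_tb, weeks_retained, weekly_change_rate):
    """Physical capacity needed when identical blocks across weekly full
    backups are stored only once: one full copy plus each week's churn."""
    logical = dataset_tb * weeks_retained
    physical = dataset_tb + dataset_tb * weekly_change_rate * (weeks_retained - 1)
    return logical, physical, logical / physical

# 12 weekly full backups of a 1 PB (1,000 TB) data set changing 2% per week:
logical, physical, ratio = dedup_backup_footprint(1000, 12, 0.02)
# 12,000 TB logical vs. ~1,220 TB physical -- a reduction of roughly 10:1
```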

Categorize Data to Go Cloud First

To comply with the government’s “Cloud First” edict, USAID began migrating to cloud services, including infrastructure and software services, about seven years ago.

Previously, USAID managed its own data centers and tiered its storage. But the agency moved its data to cloud storage three years ago, Gowen says, allowing USAID to provide reliable, cost-effective IT services to its 12,000 employees across the world. The agency, which declined to offer specific return on investment data, currently uses a dozen cloud providers.

“We carefully categorize our data and find service providers that can meet those categories,” says Gowen, noting categories include availability and security. “They just take care of things at an affordable cost.”

For its public-facing websites, the agency uses a cloud provider that has a content distribution network and can scale to handle sudden spikes in traffic.

In late 2013, a typhoon lashed the Philippines, killing at least 10,000 people. In the days following the disaster, President Obama announced that USAID had sent supplies including food and emergency shelter. Because the president mentioned USAID, about 40 million people visited the agency’s website. If USAID had hosted its own site, it would have crashed. But the cloud service provider handled the traffic, Gowen says.

“Our service provider can scale instantaneously to 40 million users, and when visitors drop off, we scale back,” he says. “It’s all handled.”


Such transitions are becoming commonplace. Improving storage management is a pillar of the government’s effort to optimize data centers. To meet requirements from the Federal Information Technology Acquisition Reform Act (FITARA), the Data Center Optimization Initiative requires that agencies transition to cost-effective infrastructure.

While agencies are following different paths, the result is nearly identical: simpler and more efficient storage management, consolidation, increased reliability, improved service and cost savings. The U.S. Agency for International Development, for example, has committed to cloud storage.

“Our customers have different needs. The cloud allows us to focus on categorizing our data based on those needs like fast response times, reliability, availability and security,” says Lon Gowen, USAID’s chief strategist and special advisor to the CIO. “We find the service providers that meet those category requirements, and then we let the service providers focus on the details of the technology.”

To read the complete article, click on the link below:


Read more


Cloud Economics drive the IT Infrastructure of Tomorrow

| Welcome to Disaster Recovery Journal

The cloud continues to dominate IT as businesses make their infrastructure decisions based on cost and agility. Public cloud, where shared infrastructure is paid for and utilized only when needed, is the most popular model today. However, more and more organizations are addressing security concerns by creating their own private clouds. As businesses deploy private cloud infrastructure, they are adopting techniques used in the public cloud to control costs. Gone are the traditional arrays and network switches of the past, replaced with software-defined data centers running on industry standard servers.

Efficiency features make the cloud model more effective by reducing costs and increasing data transfer speeds. One such feature, which is particularly effective in cloud environments, is inline data reduction. This is a technology that can be used to lower the costs of data in flight and at rest. In fact, data reduction delivers unique benefits to each of the cloud deployment models.

Public Clouds

The public cloud’s raison d’être is its ability to deliver IT business agility, deployment flexibility and elasticity. As a result, new workloads are increasingly deployed in public clouds. Worldwide public IT cloud service revenue in 2018 is predicted to be $127B.

Data reduction technology minimizes public cloud costs. For example, deduplication and compression typically cut capacity requirements of block storage in enterprise public cloud deployments by up to 6:1.  These savings are realized in reduced storage consumption and operating costs in public cloud deployments.   

Consider AWS costs employing data reduction:

If you provision 300 TB of EBS General Purpose SSD (gp2) storage for 12 hours per day over a 30-day month in a region that charges $0.10 per GB-month, you would be charged $15,000 for the storage.

With data reduction, that monthly cost of $15,000 would be reduced to $2,500. Over a 12-month period you will save $150,000. Capacity planning is a simpler problem when the capacity is 1/6th its former size. Bottom line: data reduction increases agility and reduces the costs of public clouds.
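The arithmetic above is easy to verify with a short script. The prices and the 6:1 ratio are the article's figures; the function name is purely illustrative:

```python
def ebs_monthly_cost(tb, price_per_gb_month, hours_per_day=24, days=30):
    """AWS bills provisioned EBS capacity per GB-month, prorated for the
    fraction of the month the volume is provisioned."""
    gb = tb * 1000
    provisioned_fraction = (hours_per_day * days) / (24 * 30)
    return gb * price_per_gb_month * provisioned_fraction

full = ebs_monthly_cost(300, 0.10, hours_per_day=12)  # $15,000/month
reduced = full / 6                                    # 6:1 reduction -> $2,500/month
annual_savings = (full - reduced) * 12                # $150,000/year
```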

One data reduction application that can readily be applied in the public cloud is Permabit’s Virtual Data Optimizer (VDO), a pre-packaged software solution that installs and deploys in minutes on Red Hat Enterprise Linux and Ubuntu LTS Linux distributions. To deploy VDO in Amazon AWS, the administrator provisions Elastic Block Storage (EBS) volumes, installs the VDO package into their VMs and applies VDO to the block devices representing their EBS volumes. Since VDO is implemented in the Linux device mapper, it is transparent to the applications installed above it.

As data is written out to block storage volumes, VDO applies three reduction techniques:

  1. Zero-block elimination uses pattern matching techniques to eliminate 4 KB zero blocks

  2. Inline Deduplication eliminates 4 KB duplicate blocks

  3. HIOPS Compression™ compresses remaining blocks 


This approach results in remarkable 6:1 data reduction rates across a wide range of data sets. 
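As a sketch of how the three stages fit together, consider this toy user-space model. It is only illustrative: real VDO runs inline in the kernel device mapper and uses its own indexing and HIOPS Compression rather than zlib.

```python
import hashlib
import zlib

BLOCK = 4096  # VDO operates on 4 KB blocks

def reduce_blocks(data):
    """Toy three-stage reduction: zero-block elimination, inline
    deduplication, then compression of whatever remains."""
    seen = {}          # block digest -> index into `physical`
    physical = []      # compressed blocks actually stored
    logical_map = []   # how each logical block was handled

    for off in range(0, len(data), BLOCK):
        block = data[off:off + BLOCK].ljust(BLOCK, b"\0")
        if block == b"\0" * BLOCK:
            logical_map.append(("zero", None))            # stage 1
            continue
        digest = hashlib.sha256(block).digest()
        if digest in seen:
            logical_map.append(("dedup", seen[digest]))   # stage 2
            continue
        seen[digest] = len(physical)
        physical.append(zlib.compress(block))             # stage 3
        logical_map.append(("stored", seen[digest]))
    return logical_map, physical

# Five logical blocks (three identical, one all-zero, one unique)
# collapse to just two stored physical blocks:
sample = b"A" * BLOCK * 3 + b"\0" * BLOCK + b"B" * BLOCK
logical_map, physical = reduce_blocks(sample)
```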

Private Cloud

Organizations see similar benefits when they deploy data reduction in their private cloud environments. Private cloud deployments are selected over public because they offer the flexibility of the public cloud model while keeping privacy and security under the organization’s own control. IDC predicts $17.2B in infrastructure spending for private cloud in 2017, including on-premises and hosted private clouds.

One problem that data reduction addresses for the private cloud is the double whammy of hardware infrastructure costs plus annual software licensing costs. For example, Software Defined Storage (SDS) solutions are typically licensed by capacity, so their costs are directly proportional to hardware infrastructure storage expenses. Data reduction decreases storage costs because it reduces storage capacity consumption. For example, deduplication and compression typically cut capacity requirements of block storage in enterprise deployments by up to 6:1, or roughly 83 percent.

Consider a private cloud configuration with a 1 PB deployment of storage infrastructure and SDS. Assuming a current hardware cost of $500 per TB for commodity server-based storage infrastructure with datacenter-class SSDs and a cost of $56,000 per 512 TB for the SDS component, users would pay $612,000 in the first year. Because software subscriptions recur annually, the total reaches $836,000 over three years and $1,060,000 over five years for 1 PB of storage.

With 6:1 data reduction, the same configuration costs $176,667 for hardware and software over five years, a savings of $883,333. And that’s not including the substantial additional savings in power, cooling and space. As businesses develop private cloud deployments, they should make sure those deployments have data reduction capabilities, because the cost savings are compelling.
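Using the article's figures ($500,000 of hardware bought once, $112,000 per year of SDS licensing) and its assumption that total cost scales linearly with the 6:1 reduction ratio, the five-year numbers work out as follows (the function is an illustration, not a pricing tool):

```python
def private_cloud_tco(hw_cost, annual_sds_cost, years, reduction=1.0):
    """One-time hardware plus recurring SDS licensing, scaled down by the
    data reduction ratio (linear scaling, per the article's assumption)."""
    return (hw_cost + annual_sds_cost * years) / reduction

baseline = private_cloud_tco(500_000, 112_000, 5)               # $1,060,000
reduced = private_cloud_tco(500_000, 112_000, 5, reduction=6)   # ~$176,667
savings = baseline - reduced                                    # ~$883,333
```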

When implementing private cloud on Linux, the easiest way to include data reduction is with Permabit Virtual Data Optimizer (VDO). VDO operates in the Linux kernel as one of many core data management services; it is a device mapper target driver that is transparent to persistent and ephemeral storage services, whether the storage layers above provide object, block, compute, or file-based access.

VDO – Seamless and Transparent Data Reduction


The same transparency applies to the applications running above the storage service level. Customers using VDO today realize savings up to 6:1 across a wide range of use cases.

Some workflows that benefit heavily from data reduction are:

  • Logging: messaging, events, system and application logs

  • Monitoring: alerting, and tracing systems

  • Database: databases with textual content, NoSQL approaches such as MongoDB and Hadoop

  • User Data: home directories, development build environments

  • Virtualization and containers: virtual server, VDI, and container system image storage

  • Live system backups: used for rapid disaster recovery

Because cumulative cost savings can be achieved across such a wide range of use cases, data reduction is especially attractive for private cloud deployments.

Reducing Hybrid Cloud’s Highly Redundant Data

Storage is at the foundation of cloud services, and almost universally data in the cloud must be replicated for data safety. Hybrid cloud architectures that combine on-premise resources (private cloud) with colocation and multiple public clouds result in highly redundant data environments. IDC’s FutureScape report finds “Over 80% of enterprise IT organizations will commit to hybrid cloud architectures, encompassing multiple public cloud services, as well as private clouds by the end of 2017.” (IDC 259840)

Depending on a single cloud storage provider for storage services can put SLA targets at risk. Consider the widespread AWS S3 storage errors of February 28, 2017, when data was unavailable to clients for several hours. That loss of data access may have cost businesses millions of dollars in revenue. As a result, more enterprises today are pursuing a “Cloud of Clouds” approach in which data is redundantly distributed across multiple clouds for safety and accessibility. Unfortunately, because of the data redundancy, this approach increases storage capacity consumption and cost.

That’s where data reduction comes in. In hybrid cloud deployments where data is replicated to the participating clouds, data reduction multiplies capacity and cost savings. If 3 copies of the data are kept in 3 different clouds, 3 times as much is saved. Take the private cloud example above where data reduction drove down the costs of a 1 PB deployment to $176,667, resulting in $883,333 in savings over five years. If that PB is replicated in 3 different clouds, the savings would be multiplied by 3 for a total savings of $2,649,999.

Permabit’s Virtual Data Optimizer (VDO) provides the perfect solution to address the multi-site storage capacity and bandwidth challenges faced in hybrid cloud environments. Its advanced data reduction capabilities have the same impact on bandwidth consumption as they do on storage, translating to a 6X reduction in network bandwidth consumption and associated cost. Because VDO operates at the device level, it can sit above block-level replication products to optimize data before it is written out and replicated.

Summary

IT professionals are finding that the future of IT infrastructure lies in the cloud. Data reduction technologies enable clouds – public, private and hybrid – to deliver on their promise of safety, agility and elasticity at the lowest possible cost, making cloud the deployment model of choice for IT infrastructure going forward.

Read more


Cloud Economics drive the IT Infrastructure of Tomorrow

| ITBusinessNet.com


To read the complete article, click on the link below:

Read more


Using Data Reduction at the OS layer in Enterprise Linux Environments

| Stock Market

Enterprises and cloud service providers that have built their infrastructure around Linux should deploy data reduction in the operating system to drive costs down, say experts at Permabit Technology Corporation, the company behind Permabit Virtual Data Optimizer (VDO). Permabit VDO is the only complete data reduction software for Linux, the world’s most popular server operating system (OS). Permabit’s VDO software fills a gap in the Linux feature set by providing a cost-effective alternative to the data reduction services delivered as part of the two other major OS platforms – Microsoft Windows and VMware. IT architects are driven to cut costs as they build out their next-generation infrastructure with one or more of these OS platforms in public and/or private cloud deployments, and one obvious way to do so is with data reduction.

When employed as a component of the OS, data reduction can be applied universally, without the lock-in of proprietary solutions. By adding compression, deduplication, and thin provisioning to the core OS, any application or infrastructure service running on that OS can leverage data reduction. This ensures that savings accrue across the entire IT infrastructure, delivering TCO advantages no matter where the data resides. This is the future of data reduction – as a ubiquitous service of the OS.

“We’re seeing movement away from proprietary storage solutions, where data reduction was a key differentiated feature, toward OS-based capabilities that are applied across an entire infrastructure,” said Tom Cook, Permabit CEO.  “Early adopters are reaping financial rewards through reduced cost of equipment, space, power and cooling. Today we are also seeing adoption of data reduction in the OS by more conservative IT organizations who are driven to take on more initiatives with tightly constrained IT budgets.”

VDO, with inline data deduplication, HIOPS Compression®, and fine-grained thin provisioning, is deployed as a device-mapper driver for Linux. This approach ensures compatibility with a full complement of direct-attached/ephemeral, block, file and object interfaces. VDO data reduction is available for Red Hat Enterprise Linux and Canonical Ubuntu Linux LTS distributions.

Advantages of in-OS data reduction technology include:

  • Improved density for public/private/hybrid cloud storage, resulting in lower storage and service costs
  • Vendor independent to function across hardware running the target OS
  • Seamless data mobility between on-premise and cloud resources
  • Up to six times lower IT infrastructure OpEx
  • Transparent to end users accessing data
  • Requires no modifications to existing applications, file systems, virtualization features, or data protection capabilities

With VDO, these advantages are being realized on Linux today. VDO deployments have been completed (or are currently in progress) with large telecommunications companies, government agencies, financial services firms and IaaS providers who have standardized on Linux for their data centers. With data reduction in Linux, enterprises achieve vendor independence across all Linux based storage, increased mobility of reduced data and hyper scale economics. What an unbeatable combination!

Read more


Reduce Cloud’s Highly Redundant Data


Storage is the foundation of cloud services. All cloud services – delineated as scalable, elastic, on-demand, and self-service – begin with storage. Almost universally, cloud storage services are virtualized and hybrid cloud architectures that combine on-premise resources with colocation, private and public clouds result in highly redundant data environments.  IDC’s FutureScape report finds “Over 80% of enterprise IT organizations will commit to hybrid cloud architectures, encompassing multiple public cloud services,…

Read more


Hybrid Cloud Gains in Popularity, Survey Finds

| Light Reading

The hybrid model of cloud computing is gaining more popularity in the enterprise, as businesses move more workloads and applications to public cloud infrastructures and away from private deployments.

Those are some of the findings from RightScale’s annual “State of the Cloud” report, which the company released Wednesday. It’s based on interviews with 1,000 IT professionals, with 48% of them working in companies with more than 1,000 employees.

The biggest takeaway from the report is that enterprises and their IT departments are splitting their cloud dollars between public and private deployments, and creating demands for a hybrid approach.

“The 2017 State of the Cloud Survey shows that while hybrid cloud remains the preferred enterprise strategy, public cloud adoption is growing while private cloud adoption flattened and fewer companies are prioritizing building a private cloud,” according to a blog post accompanying the report. “This was a change from last year’s survey, where we saw strong gains in private cloud use.”

Specifically, 85% of respondents reported having a multi-cloud, hybrid strategy, and that’s up from the 82% who reported a similar approach in 2016. At the same time, private cloud adoption dropped from 77% in 2016 to 72% in 2017.

In the survey, 41% of respondents reported running workloads in public clouds, while 38% said they run workloads in private clouds. In large enterprises, those numbers reverse, with 32% of respondents running workloads in public clouds, and 43% running workloads within private infrastructures.

“It’s important to note that the workloads running in private cloud may include workloads running in existing virtualized environments or bare-metal environments that have been ‘cloudified,’ ” according to the report.

When it comes to adopting cloud technologies and services, there are fewer barriers and concerns this year compared to 2016. The lack of resources and expertise to implement a cloud strategy was still the top concern.

In addition, the report notes that “managing costs” ranks among the top five challenges at every level of cloud expertise. One vehicle that can help manage costs is applying data reduction technologies to your cloud deployment. Permabit VDO can be applied to public and/or private clouds quickly and easily, enabling cost reductions of 50 percent or more in on-premise, in-transit and public cloud deployments.

Read more


Why Deduplication Matters for Cloud Storage

| dzone.com

Most people assume cloud storage is cheaper than on-premise storage. After all, why wouldn’t they? You can rent object storage for $276 per TB per year or less, depending on your performance and access requirements. Enterprise storage costs between $2,500 and $4,000 per TB per year, according to analysts at Gartner and ESG.

This comparison makes sense for primary data, but what happens when you make backups or copies of data for other reasons in the cloud? Imagine that an enterprise needs to retain 3 years of monthly backups of a 100TB data set. In the cloud, that equates to 3.6 PB of raw backup data, or a monthly bill of roughly $83,000. That’s about $1 million a year before you even factor in data access or retrieval charges.
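The retention math above is easy to reproduce at the roughly $23/TB-month standard object-storage rate ($276 per TB per year); the function name is purely illustrative:

```python
def monthly_backup_bill(dataset_tb, copies_retained, price_per_tb_month):
    """Raw object-storage bill when every retained backup copy is billed
    like the original (no deduplication credited to the customer)."""
    raw_tb = dataset_tb * copies_retained
    return raw_tb, raw_tb * price_per_tb_month

# 36 monthly full backups of a 100 TB data set at $23/TB-month:
raw_tb, bill = monthly_backup_bill(100, 36, 23)
# 3,600 TB (3.6 PB) of raw copies, roughly $83,000 a month
```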

That is precisely why efficient deduplication is hugely important for both on-premise and cloud storage, especially when enterprises want to retain their secondary data (backup, archival, long-term retention) for weeks, months, and years. Cloud storage costs can add up quickly, surprising even astute IT professionals, especially as data sizes get bigger with web-scale architectures, data gets replicated and they discover it can’t be deduplicated in the cloud.

The Promise of Cloud Storage: Cheap, Scalable, Forever Available

Cloud storage is viewed as cheap, reliable and infinitely scalable – which is generally true. Object storage like AWS S3 is available at just $23/TB per month for the standard tier, or $12.50/TB for the Infrequent Access tier. Many modern applications can take advantage of object storage. Cloud providers offer their own file or block options, such as AWS EBS (Elastic Block Storage) that starts at $100/TB per month, prorated hourly. Third-party solutions also exist that connect traditional file or block storage to object storage as a back-end.

Even AWS EBS, at $1,200/TB per year, compares favorably to on-premise solutions that cost 2-3 times as much and require high upfront capital expenditures. To recap, enterprises are gravitating to the cloud because the OPEX costs are significantly lower, there’s minimal up-front cost, and you pay as you go (vs. traditional storage, where you have to buy far ahead of actual need).

How Cloud Storage Costs Can Get Out of Hand: Copies, Copies Everywhere

The direct cost comparison between cloud storage and traditional on-premise storage can distract from managing storage costs in the cloud, particularly as more and more data and applications move there. There are three components to cloud storage costs to consider:

  • Cost for storing the primary data, either on object or block storage
  • Cost for any copies, snapshots, backups, or archive copies of data
  • Transfer charges for data

We’ve covered the first one. Let’s look at the other two.

Copies of data. It’s not how much data you put into the cloud — uploading data is free, and storing a single copy is cheap. It’s when you start making multiple copies of data — for backups, archives, or any other reason — that costs spiral if you’re not careful. Even if you don’t make actual copies of the data, applications or databases often have built-in data redundancy and replicate data (or in database parlance, a Replication Factor).

In the cloud, each copy you make of an object incurs the same cost as the original. Cloud providers may do some dedupe or compression behind the scenes, but this isn’t generally credited back to the customer. For example, in a consumer cloud storage service like Dropbox, if you make one copy or ten copies of a file, each copy counts against your storage quota.

For enterprises, this means data snapshots, backups, and archived data all incur additional costs. As an example, AWS EBS charges $0.05/GB per month for storing snapshots. While the snapshots are compressed and only store incremental data, they’re not deduplicated. Storing a snapshot of that 100 TB dataset could cost $60,000 per year, and that’s assuming it doesn’t grow at all.
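The snapshot figure checks out the same way, assuming (as the article does) that snapshot storage tracks the full data set size because blocks are not deduplicated:

```python
def snapshot_annual_cost(dataset_gb, price_per_gb_month=0.05):
    """EBS snapshot storage at $0.05/GB-month: compressed and incremental,
    but not deduplicated, so cost tracks the full data set size."""
    return dataset_gb * price_per_gb_month * 12

cost = snapshot_annual_cost(100_000)  # 100 TB -> about $60,000 per year
```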

Data access. Public cloud providers generally charge for data transfer either between cloud regions or out of the cloud. For example, moving or copying a TB of AWS S3 data between Amazon regions costs $20, and transferring a TB of data out to the internet costs $90. Combined with GET, PUT, POST, LIST and DELETE request charges, data access costs can really add up.

Why Deduplication in the Cloud Matters

Cloud applications are distributed by design and are deployed on non-relational, massively scalable databases as a standard. In non-relational databases, most data is redundant before you even make a copy: there are common blocks and objects, and databases like MongoDB or Cassandra use a replication factor (RF) of 3 to ensure data integrity in a distributed cluster, so you start out with three copies.

Backups or secondary copies are usually created and maintained via snapshots (for example, using EBS snapshots as noted earlier). The database architecture means that when you take a snapshot, you’re really making three copies of the data. Without any deduplication, this gets really expensive.

Today there are solutions to the public cloud deduplication and data reduction conundrum. Permabit VDO can be easily deployed in public and/or private cloud solutions. Take a look at the following blog from Tom Cook http://permabit.com/data-efficiency-in-public-clouds/ or, for the technical details, one from Louis Imershein http://permabit.com/effective-use-of-data-reduction-in-the-public-cloud/. Both provide examples and details on why and how to drive deduplication and compression solutions in a public cloud.


Read more


Why 2017 will belong to open source

| CIO News

A few years ago, open source was the less-glamorous and low-cost alternative in the enterprise world, and no one would have taken the trouble to predict what its future could look like. Fast-forward to 2016, many of us will be amazed by how open source has become the de facto standard for nearly everything inside an enterprise. Open source today is the primary engine for innovation and business transformation. Cost is probably the last reason for an organisation to go in for open source.

An exclusive market study conducted by North Bridge and Black Duck brought some fascinating statistics a few months ago. In the study titled “Future of Open Source”, about 90% of surveyed organisations said that open source improves efficiency, interoperability and innovation. What is even more significant is the finding that the adoption of open source for production environments outpaced the proprietary software for the first time – more than 55% leverage OSS for production infrastructure.

OpenStack will rule the cloud world
OpenStack has already made its presence felt as an enterprise-class framework for the cloud. An independent study, commissioned by SUSE, reveals that 81 percent of senior IT professionals are planning to move or are already moving to OpenStack Private Cloud. What is more, the most innovative businesses and many Fortune 100 businesses have already adopted OpenStack for their production environment.

As cloud becomes the foundation on which your future business will be run, OpenStack gains the upper hand with its flexibility, agility, performance and efficiency. Significant cost reduction is another major consideration for organisations, especially large enterprises: a proprietary cloud platform is excessively expensive to build and maintain, while OpenStack operations deliver baseline cost reductions. In addition, data reduction in an OpenStack deployment can further reduce operating costs.

Open source to be at the core of digital transformation
Digital transformation is, in fact, one of the biggest headaches for CIOs because of its heterogeneous, all-pervading nature. With data at the center of digital transformation, it is often difficult for CIOs to ensure that the information percolating down is both insightful and secure. They need a platform that is scalable and flexible, allows innovation and can turn around quickly. This is exactly what open source promises. Moreover, given the heterogeneous environments that exist in enterprises today, interoperability is going to be the most critical factor.

Technologies like the Internet of Things (IoT) and SMAC (social, mobile, analytics and cloud) will make data more valuable and voluminous. The diversity of devices and standards that emerge will make open source a great fit for enterprises looking to leverage these trends. Almost all ‘digital enterprises’ in the world already use open source platforms and tools to a great extent, and the pace of innovation that open source communities bring to the table is unprecedented.

Open source-defined data centers
A recent research paper from IDC states that 85 percent of surveyed enterprises globally consider open source to be a realistic or preferred solution for migrating to software-defined infrastructure (SDI). IDC also recommends avoiding vendor lock-in by deploying open source solutions. Interestingly, many organisations seem to have already understood the benefits of open source, with Linux adoption in the data center growing steadily at a pace of 15-20%.

The key drivers of SDI – efficiency, scalability and reliability at minimal investment – can be achieved only with the adoption of open source platforms. Open source helps enterprises be more agile in building, deploying and maintaining applications. In the coming years, open source adoption is going to be essential for achieving true ‘zero downtime’ in software-defined infrastructure.

Open source will have an especially large role to play in the software-defined storage (SDS) space, helping organisations overcome the current challenges associated with SDS. Open SDS solutions can scale out without a need to refresh the entire platform or disrupt the existing environment.

Data reduction can easily be added to SDS or open source environments with Permabit VDO. This simple plug-and-play approach enables 2X or more storage reduction, adding to the already efficient operations of open source deployments.
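To make the mechanics concrete, here is a minimal sketch of what block-level data reduction does – not Permabit's actual implementation, just an illustrative model in Python: split data into fixed-size blocks, skip all-zero blocks, and store each unique block only once.

```python
import hashlib
import os

BLOCK_SIZE = 4096  # 4 KB blocks, the granularity VDO itself works at

def reduced_size(data: bytes) -> int:
    """Physical bytes needed after zero-elimination and deduplication."""
    seen = set()
    physical = 0
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        if not any(block):                  # zero-elimination: all-zero blocks cost nothing
            continue
        digest = hashlib.sha256(block).digest()
        if digest not in seen:              # deduplication: store each unique block once
            seen.add(digest)
            physical += BLOCK_SIZE
    return physical

# Two copies of the same 4 MB payload plus 4 MB of zeros: 12 MB logical,
# ~4 MB physical, i.e. roughly a 3:1 reduction
payload = os.urandom(4 * 1024 * 1024)
data = payload + payload + bytes(4 * 1024 * 1024)
print(len(data) / reduced_size(data))  # → 3.0
```

Duplicate virtual machine images and sparse, zero-filled volumes are exactly the workloads where this kind of reduction pays off, which is why it fits SDS and cloud deployments so well. (Real products also add compression on top of deduplication, which this sketch omits.)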

Open source to be decisive in enterprise DevOps journey
Today, software and applications have a direct impact on business success and performance. As a result, the development, testing, delivery and maintenance of applications have become crucial. In a customer-driven economy, it is imperative for organisations to adopt DevOps and containerisation technologies to accelerate release cycles and improve application quality.

Often, enterprises struggle to get the most out of the DevOps model. The investment associated with replicating production environments for testing applications is not negligible, and they also struggle to ensure that existing systems are not disturbed while running a testing environment within containers.

Industry analysts believe that microservices running in Docker-like containers on an open, scalable cloud infrastructure are the future of applications. OpenStack-based cloud infrastructures are going to be an absolute necessity for a successful enterprise DevOps journey. Flexibility and interoperability apart, the open cloud allows the DevOps team to reuse the same infrastructure as containers are created and destroyed.

In 2017, expect to see open source become the first preference for organisations at the forefront of innovation.

Read more

curata__W1e4Dlq7mDO3w4q.png

Cloud IT Spending to Edge Out Traditional Data Centers by 2020

| Datamation

The IT solutions market for cloud providers has nowhere to go but up.

A new forecast from IDC predicts that cloud IT infrastructure spending on servers, storage and network switches will jump 18.2 percent this year to reach $44.2 billion. Public clouds will generate 61 percent of that amount and off-premises private clouds will account for nearly 15 percent.

IDC research director Natalya Yezhkova said in a statement that over the next few quarters, “growth in spending on cloud IT infrastructure will be driven by investments done by new hyperscale data centers opening across the globe and increasing activity of tier-two and regional service providers.”

Additionally, businesses are also growing more adept at floating their own private clouds, she said. “Another significant boost to overall spending on cloud IT infrastructure will be coming from on-premises private cloud deployments as end users continue gaining knowledge and experience in setting up and managing cloud IT within their own data centers.”

Despite a 3 percent decline in spending on non-cloud IT infrastructure during 2017, the segment will still command the majority (57 percent) of all revenues. By 2020, however, the tables will turn.

Combined, the public and private data center infrastructure segments will reach a major tipping point in 2020, accounting for nearly 53 percent of the market, compared to just over 47 percent for traditional data center gear. Public cloud operators and private cloud environments will drive $48.1 billion in IT infrastructure sales by that year.
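A quick back-of-the-envelope check ties these figures together (assuming the $44.2B and $48.1B forecasts bracket the 2017–2020 span, and that the "nearly 53 percent" share applies to the $48.1B figure):

```python
cloud_2017 = 44.2   # $B: IDC forecast for cloud IT infrastructure, 2017
cloud_2020 = 48.1   # $B: IDC forecast for cloud IT infrastructure, 2020

# Implied compound annual growth rate over the three-year span
cagr = (cloud_2020 / cloud_2017) ** (1 / 3) - 1
print(f"implied CAGR 2017-2020: {cagr:.1%}")             # → roughly 2.9% per year

# If $48.1B is ~53 percent of the 2020 market, the whole market is about:
total_2020 = cloud_2020 / 0.53
print(f"implied total 2020 market: ${total_2020:.1f}B")  # → about $90.8B
```

In other words, the headline 18.2 percent jump is front-loaded; the tipping point in 2020 comes less from runaway cloud growth than from the steady decline on the traditional side.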

Indeed, the cloud computing market is growing by leaps and bounds.

The shifting sands are both predictable and evolutionary. Dominant data center spending has been platform-specific and somewhat captive. As public cloud providers demonstrated, efficient data centers can be built from white box platforms and high-performance open-source software stacks that minimize costs and eliminate software bloat. Corporate IT professionals didn’t miss this evolution and have begun developing similar IT infrastructures: they are sourcing white box platforms, which are much less costly than branded platforms, and combining them with open-source software including operating systems and software-defined storage with data reduction, which drives down storage consumption too. The result is a more efficient data center, with less costly hardware and open-source software driving down acquisition and operating costs.

The shift is occurring, and the equilibrium between public and private clouds will change – not just because of hardware, but increasingly because of open-source software and the economic impact it has on building high-density data centers that run more efficiently than branded platforms.

Read more

curata__0834fd2da8ef75099173106b192b06d7.PNG

VDO in 10 Top Data Storage Applications

| InfoStor

There are so many data storage applications out there that whittling down the list to a handful was quite a challenge. In fact, it proved impossible.

So we are doing two stories on this subject. Even then, there are many good candidates that aren’t included. To narrow things down, we omitted backup, disaster recovery (DR), performance tuning, WAN optimization and similar applications. Otherwise, we’d have to cover just about every storage app around.

We also tried to eliminate cloud-based storage services as there are so many of them. But that wasn’t entirely possible because the lines between on-premise and cloud services are blurring as software defined storage encroaches further on the enterprise. As a result, storage services from the likes of Microsoft Azure, Amazon and one or two others are included.

Storage Spaces Direct

Storage Spaces Direct (S2D) for Windows Server 2016 uses a new software storage bus to turn servers with local-attached drives into highly available and scalable software-defined storage. The Microsoft pitch is that this is done at a tiny fraction of the cost of a traditional SAN or NAS. It can be deployed in a converged or hyper-converged architecture to make deployment relatively simple. S2D also includes caching, storage tiering, erasure coding, RDMA networking and the use of NVMe drives mounted directly on the PCIe bus to boost performance.

“S2D allows software-defined storage to manage direct attached storage (SSD and HDD) including allocation, availability, capacity and performance optimization,” said Greg Schulz, an analyst at StorageIO Group. “It is integrated with the Windows Server operating systems, so it is leveraging familiar tools and expertise to support Windows, Hyper-V, SQL Server and other workloads.”

Red Hat Ceph Storage

Red Hat’s data storage application for OpenStack is Red Hat Ceph Storage. It is an open, scalable, software-defined storage system that runs on industry-standard hardware. Designed to manage petabytes of data as well as cloud and emerging workloads, Ceph is integrated with OpenStack to offer a single platform for all its block, object, and file storage needs. Red Hat Ceph Storage is priced based on the amount of storage capacity under management.

“Ceph is a software answer to the traditional storage appliance, and it brings all the benefits of modern software – it’s scale-out, flexible, tunable, and programmable,” said Daniel Gilfix, product marketing, Red Hat Storage. “New workloads are driving businesses towards an increasingly software-defined datacenter. They need greater cost efficiency, more control of data, less time-consuming maintenance, strong data protection and the agility of the cloud.”

Virtual Data Optimizer

Gilfix is also a fan of Virtual Data Optimizer (VDO) software from Permabit. This data efficiency software applies compression, deduplication and zero-elimination to the data you store, making it take up less space. It runs as a Linux kernel module, sitting underneath almost any software – including Gluster or Ceph. Pricing starts at $199 per node for up to 16 TB of storage; a 256 TB capacity-based license is available for $3,000.

“Just as server virtualization revolutionized the economics of compute, Permabit data reduction software has the potential to transform the economics of storage,” said Gilfix. “VDO software reduces the amount of disk space needed by 2:1 in most scenarios and up to 10:1 in virtual environments (vdisk).”
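Those reduction ratios translate directly into cost per usable terabyte. An illustrative calculation, assuming the full 256 TB capacity license quoted above and the reduction ratios Gilfix cites:

```python
license_cost = 3000.0   # $: 256 TB capacity-based VDO license quoted above
physical_tb = 256

for ratio in (2, 10):   # 2:1 in most scenarios; up to 10:1 for virtual environments
    logical_tb = physical_tb * ratio
    cost_per_tb = license_cost / logical_tb
    print(f"{ratio}:1 reduction -> ${cost_per_tb:.2f} per logical TB")
# → 2:1 reduction -> $5.86 per logical TB
# → 10:1 reduction -> $1.17 per logical TB
```

The software license cost is, of course, only one line item; the larger saving is the physical disk, rack space and power that the reduced footprint no longer consumes.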

VMware vSAN

VMware vSAN is a great way to pool internal disks for vSphere environments. It extends virtualization to storage and is fully integrated with vSphere. Policy-based management is also included, so you can set per-VM policies and automate provisioning. Due to its huge partner ecosystem, it supports a wide range of applications, containers, cloud services and more. When combined with VMware NSX, a vSAN-powered software defined data center can extend on-premise storage and management services across different public clouds to give a more consistent experience.

OpenIO

OpenIO is described as all-in-one object storage and data processing. It is available as a software-only solution or via the OpenIO SLS (ServerLess Storage) platform. The software itself is open source and available online. It allows users to operate petabytes of object storage. It wraps storage, data protection and processing in one package that can run on any hardware. OpenIO’s tiering enables automated load-balancing and establishes large data lakes for such applications as analytics.

The SLS version is a storage appliance that combines high-capacity drives, a 40Gb/s Ethernet backend and Marvell Armada-3700 dual-core 1.2 GHz ARM processors. It can host up to 96 nodes, each with a 3.5″ HDD or SSD, offering a petabyte-scale storage system in a 4U chassis.

StarWind

StarWind Virtual SAN is a virtualization infrastructure targeted at SMBs, remote and branch offices, and cloud providers. It is said to cut the cost of storage virtualization by mirroring internal hard disks and flash between hypervisor servers, and this software-defined storage approach is also designed for ease of use. Getting started requires two licensed nodes, and deployments can be expanded beyond that. It comes with asynchronous replication, in-line and offline deduplication, and multi-tiered RAM and flash caching.

IBM Spectrum Virtualize

IBM Spectrum Virtualize deals with block-oriented virtual storage. It is available as standalone software or can be used to power IBM all-flash products. The software provides data services such as storage virtualization, thin provisioning, snapshots, cloning, replication, data copying and DR. It makes it possible to virtualize all storage on the same Intel hardware without any additional software or appliances.

“Spectrum Virtualize supports common data services such as snapshots and replication in nearly 400 heterogeneous storage arrays,” said David Hill, Mesabi Group. “It simplifies operational storage management and is available for x86 servers.”

Dell EMC ECS

Dell EMC Elastic Cloud Storage (ECS) is available as a software-defined storage appliance or as software that can be deployed on commodity hardware. This object storage platform supports object, file and HDFS. It is said to make app development faster via API-accessible storage, and it also enables organizations to consolidate multiple storage systems and content archives into a single, globally accessible content repository that can host many applications.

NetApp ONTAP Cloud

NetApp ONTAP Cloud is a software-only storage service operating on the NetApp ONTAP storage platform that provides NFS, CIFS and iSCSI data management for the cloud. It includes a single interface to all ONTAP-based storage in the cloud and on premises via its Cloud Manager feature. It is also cloud-agnostic, i.e., it is said to offer enterprise-class data storage management across cloud vendors. Thus it aims to combine cloud flexibility with high availability. Business continuity features are also included.

Quantum StorNext

Quantum’s longstanding StorNext software continues to find new avenues of application in the enterprise. StorNext 5 is targeted at the high-performance shared storage market. It is said to accelerate complex information workflows. The StorNext 5 file system can manage Xcellis workflow storage, extended online storage and tape archives via advanced data management capabilities. Billed as the industry’s fastest streaming file system and policy-based tiering software, it is designed for large sets of large files and complex information workflows.

Read more