
Using Data Reduction at the OS layer in Enterprise Linux Environments


Enterprises and cloud service providers that have built their infrastructure around Linux should deploy data reduction in the operating system to drive costs down, say experts at Permabit Technology Corporation, the company behind Permabit Virtual Data Optimizer (VDO). Permabit VDO is the only complete data reduction software for Linux, the world's most popular server operating system (OS). Permabit's VDO software fills a gap in the Linux feature set by providing a cost-effective alternative to the data reduction services delivered as part of the two other major OS platforms, Microsoft Windows and VMware. IT architects are driven to cut costs as they build out their next-generation infrastructure with one or more of these OS platforms in public and/or private cloud deployments, and one obvious way to do so is with data reduction.

When employed as a component of the OS, data reduction can be applied universally, without the lock-in of proprietary solutions. By adding compression, deduplication, and thin provisioning to the core OS, the benefits of data reduction can be leveraged by any application or infrastructure service running on that OS. This ensures that savings accrue across the entire IT infrastructure, delivering TCO advantages no matter where the data resides. This is the future of data reduction: as a ubiquitous service of the OS.
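To make the idea concrete, here is a toy Python sketch of block-level deduplication plus compression. It is purely illustrative: the fixed 4 KB block size, SHA-256 fingerprints, and zlib compression are assumptions chosen for the example, not a description of VDO's internals, but it shows why redundant data shrinks so dramatically when reduction happens below the application layer.

```python
import hashlib
import zlib

BLOCK_SIZE = 4096  # assumed fixed block size for this illustration


def reduced_size(data: bytes) -> int:
    """Estimate bytes stored after block-level dedup plus compression."""
    seen = set()      # fingerprints of blocks already stored
    stored = 0
    for off in range(0, len(data), BLOCK_SIZE):
        block = data[off:off + BLOCK_SIZE]
        digest = hashlib.sha256(block).digest()
        if digest in seen:
            continue                              # duplicate block: keep only a reference
        seen.add(digest)
        stored += len(zlib.compress(block))       # unique block: store it compressed
    return stored


if __name__ == "__main__":
    # Ten identical 4 MB "images" full of repetitive text stand in for redundant data.
    image = (b"operating system files " * 200000)[:4 * 1024 * 1024]
    data = image * 10
    print(f"logical size : {len(data):,} bytes")
    print(f"stored size  : {reduced_size(data):,} bytes")
```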

“We’re seeing movement away from proprietary storage solutions, where data reduction was a key differentiated feature, toward OS-based capabilities that are applied across an entire infrastructure,” said Tom Cook, Permabit CEO.  “Early adopters are reaping financial rewards through reduced cost of equipment, space, power and cooling. Today we are also seeing adoption of data reduction in the OS by more conservative IT organizations who are driven to take on more initiatives with tightly constrained IT budgets.”

VDO, with inline data deduplication, HIOPS Compression®, and fine-grained thin provisioning, is deployed as a device-mapper driver for Linux. This approach ensures compatibility with a full complement of direct-attached/ephemeral, block, file and object interfaces. VDO data reduction is available for Red Hat Enterprise Linux and Canonical Ubuntu Linux LTS distributions.
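As a rough sketch of what a deployment looks like, the steps below follow the commands documented by Red Hat for VDO (`vdo create`, `mkfs.xfs -K`, `vdostats --human-readable`); the device path, volume name, and logical size are placeholders, and the Python wrapper is only a convenience, so treat this as an outline rather than a turnkey script. It must be run as root on a host with the VDO packages installed.

```python
import subprocess


def run(cmd):
    """Run a command and fail loudly if it exits non-zero."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)


# Placeholder backing device, volume name, and logical size: adjust for your system.
DEVICE = "/dev/sdb"
NAME = "vdo0"

# Create the VDO volume, presenting 10 TB of logical space on the backing device.
run(["vdo", "create", f"--name={NAME}", f"--device={DEVICE}", "--vdoLogicalSize=10T"])

# Put a filesystem on the device-mapper target VDO exposes
# (-K skips discards so the new filesystem is not pre-zeroed).
run(["mkfs.xfs", "-K", f"/dev/mapper/{NAME}"])

# Report space savings once data has been written.
run(["vdostats", "--human-readable"])
```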

Advantages of in-OS data reduction technology include:

  • Improved density for public/private/hybrid cloud storage, resulting in lower storage and service costs
  • Vendor independence, functioning across any hardware running the target OS
  • Seamless data mobility between on-premise and cloud resources
  • Up to six times lower IT infrastructure OpEx
  • Transparent to end users accessing data
  • Requires no modifications to existing applications, file systems, virtualization features, or data protection capabilities

With VDO, these advantages are being realized on Linux today. VDO deployments have been completed (or are currently in progress) with large telecommunications companies, government agencies, financial services firms and IaaS providers who have standardized on Linux for their data centers. With data reduction in Linux, enterprises achieve vendor independence across all Linux-based storage, increased mobility of reduced data and hyperscale economics. What an unbeatable combination!

Read more


Addressing Bandwidth Challenges in the Hybrid Cloud


Any application infrastructure that relies on a single data center is only as safe as that data center's physical resources and the competence of its staff. Witness the recent S3 outage at Amazon. When you choose to deploy in a single public cloud, you are delegating infrastructure management to your provider. When you're exclusively running in-house, private cloud infrastructure, you're entrusting that management to your own organization. Either way, mistakes…

Read more


Reduce Cloud’s Highly Redundant Data


Storage is the foundation of cloud services. All cloud services – delineated as scalable, elastic, on-demand, and self-service – begin with storage. Almost universally, cloud storage services are virtualized, and hybrid cloud architectures that combine on-premise resources with colocation, private and public clouds result in highly redundant data environments. IDC's FutureScape report finds "Over 80% of enterprise IT organizations will commit to hybrid cloud architectures, encompassing multiple public cloud services,…

Read more


Hybrid Cloud Gains in Popularity, Survey Finds

| Light Reading

The hybrid model of cloud computing is gaining more popularity in the enterprise, as businesses move more workloads and applications to public cloud infrastructures and away from private deployments.

Those are some of the findings from RightScale’s annual “State of the Cloud” report, which the company released Wednesday. It’s based on interviews with 1,000 IT professionals, with 48% of them working in companies with more than 1,000 employees.

The biggest takeaway from the report is that enterprises and their IT departments are splitting their cloud dollars between public and private deployments, and creating demands for a hybrid approach.

“The 2017 State of the Cloud Survey shows that while hybrid cloud remains the preferred enterprise strategy, public cloud adoption is growing while private cloud adoption flattened and fewer companies are prioritizing building a private cloud,” according to a blog post accompanying the report. “This was a change from last year’s survey, where we saw strong gains in private cloud use.”

Specifically, 85% of respondents reported having a multi-cloud, hybrid strategy, and that’s up from the 82% who reported a similar approach in 2016. At the same time, private cloud adoption dropped from 77% in 2016 to 72% in 2017.

In the survey, 41% of respondents reported running workloads in public clouds, while 38% said they run workloads in private clouds. In large enterprises, those numbers reverse, with 32% of respondents running workloads in public clouds, and 43% running workloads within private infrastructures.

“It’s important to note that the workloads running in private cloud may include workloads running in existing virtualized environments or bare-metal environments that have been ‘cloudified,’ ” according to the report.

When it comes to adopting cloud technologies and services, there are fewer barriers and concerns this year compared to 2016. The lack of resources and expertise to implement a cloud strategy was still the top concern.

In addition, the report notes that at every cloud expertise level the "Top 5 Challenges" indicate a substantial concern with managing costs. One vehicle that can help manage costs is to apply data reduction technologies to your cloud deployment. Permabit VDO can be applied to public and/or private clouds quickly and easily, enabling cost reductions of 50% or more in on-premise, in-transit and public cloud deployments.
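As a back-of-the-envelope illustration of that claim, the sketch below applies a 2:1 reduction ratio (roughly the 50% savings cited above) to a hypothetical monthly storage bill; the 500 TB figure and $23/TB-month price are assumptions for the example, not survey data.

```python
def monthly_storage_cost(logical_tb, price_per_tb, reduction_ratio=1.0):
    """Cost of storing logical_tb of data when it shrinks by reduction_ratio (2.0 = 50% savings)."""
    return logical_tb / reduction_ratio * price_per_tb


# Hypothetical figures: 500 TB of logical data at $23 per TB-month.
before = monthly_storage_cost(500, 23)
after = monthly_storage_cost(500, 23, reduction_ratio=2.0)
print(f"without data reduction : ${before:,.0f}/month")
print(f"with 2:1 data reduction: ${after:,.0f}/month")
```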

Read more


Why Deduplication Matters for Cloud Storage

| dzone.com

Most people assume cloud storage is cheaper than on-premise storage. After all, why wouldn't they? You can rent object storage for $276 per TB per year or less, depending on your performance and access requirements. Enterprise storage costs between $2,500 and $4,000 per TB per year, according to analysts at Gartner and ESG.

This comparison makes sense for primary data, but what happens when you make backups or copies of data for other reasons in the cloud? Imagine that an enterprise needs to retain 3 years of monthly backups of a 100TB data set. In the cloud, this easily equates to 3.6 PB of raw backup data, or a monthly bill of over $83,000. That's about $1 million a year before you even factor in any data access or retrieval charges.
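The arithmetic behind those figures is straightforward; the short sketch below reproduces it using the roughly $23 per TB-month ($276 per TB-year) object storage price quoted above, with each monthly backup stored as a full, unreduced copy.

```python
# Back-of-the-envelope math: 3 years of monthly backups of a 100 TB data set,
# stored as full copies with no deduplication.
DATASET_TB = 100
MONTHS_RETAINED = 36
S3_PRICE_PER_TB_MONTH = 23            # standard-tier list price cited in this article

raw_backup_tb = DATASET_TB * MONTHS_RETAINED            # 3,600 TB = 3.6 PB
monthly_bill = raw_backup_tb * S3_PRICE_PER_TB_MONTH    # ~$82,800 per month

print(f"raw backup data: {raw_backup_tb / 1000:.1f} PB")
print(f"monthly bill   : ${monthly_bill:,.0f}")
print(f"annual bill    : ${monthly_bill * 12:,.0f}")
```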

That is precisely why efficient deduplication is hugely important for both on-premise and cloud storage, especially when enterprises want to retain their secondary data (backup, archival, long-term retention) for weeks, months, and years. Cloud storage costs can add up quickly, surprising even astute IT professionals, especially as data sizes grow with web-scale architectures, data gets replicated, and they discover it can't be deduplicated in the cloud.

The Promise of Cloud Storage: Cheap, Scalable, Forever Available

Cloud storage is viewed as cheap, reliable and infinitely scalable – which is generally true. Object storage like AWS S3 is available at just $23/TB per month for the standard tier, or $12.50/TB for the Infrequent Access tier. Many modern applications can take advantage of object storage. Cloud providers offer their own file or block options, such as AWS EBS (Elastic Block Store), which starts at $100/TB per month, prorated hourly. Third-party solutions also exist that connect traditional file or block storage to object storage as a back-end.

Even AWS EBS, at $1,200/TB per year, compares favorably to on-premise solutions that cost 2-3 times as much and require high upfront capital expenditures. To recap, enterprises are gravitating to the cloud because the OpEx costs are significantly lower, there's minimal up-front cost, and you pay as you go (vs. traditional storage, where you have to buy far ahead of actual need).
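Put side by side, the list prices quoted above work out to the following rough annual per-TB costs; the on-premise figure uses the low end of the Gartner/ESG range cited earlier.

```python
# Rough annual cost per TB for the storage options discussed in this article.
options = {
    "S3 Infrequent Access":  12.50 * 12,
    "S3 Standard":           23.00 * 12,
    "AWS EBS block storage": 100.00 * 12,
    "On-premise enterprise": 2500.00,    # low end of the Gartner/ESG range
}

for name, annual_per_tb in sorted(options.items(), key=lambda kv: kv[1]):
    print(f"{name:<24} ${annual_per_tb:>8,.0f} per TB-year")
```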

How Cloud Storage Costs Can Get Out of Hand: Copies, Copies Everywhere

The direct cost comparison between cloud storage and traditional on-premise storage can distract from managing storage costs in the cloud, particularly as more and more data and applications move there. There are three components to cloud storage costs to consider:

  • Cost for storing the primary data, either on object or block storage
  • Cost for any copies, snapshots, backups, or archive copies of data
  • Transfer charges for data

We’ve covered the first one. Let’s look at the other two.

Copies of data. It’s not how much data you put into the cloud — uploading data is free, and storing a single copy is cheap. It’s when you start making multiple copies of data — for backups, archives, or any other reason — that costs spiral if you’re not careful. Even if you don’t make actual copies of the data, applications or databases often have built-in data redundancy and replicate data (or in database parlance, a Replication Factor).

In the cloud, each copy you make of an object incurs the same cost as the original. Cloud providers may do some dedupe or compression behind the scenes, but this isn't generally credited back to the customer. For example, in a consumer cloud storage service like Dropbox, if you make one copy or ten copies of a file, each copy counts against your storage quota.

For enterprises, this means data snapshots, backups, and archived data all incur additional costs. As an example, AWS EBS charges $0.05/GB per month for storing snapshots. While the snapshots are compressed and only store incremental data, they’re not deduplicated. Storing a snapshot of that 100 TB dataset could cost $60,000 per year, and that’s assuming it doesn’t grow at all.

Data access. Public cloud providers generally charge for data transfer either between cloud regions or out of the cloud. For example, moving or copying a TB of AWS S3 data between Amazon regions costs $20, and transferring a TB of data out to the internet costs $90. Combined with GET, PUT, POST, LIST and DELETE request charges, data access costs can really add up.
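For a sense of scale, here is that same 100 TB data set priced at the per-TB transfer rates quoted above; request charges are left out for simplicity.

```python
# Transfer charges for moving a 100 TB data set at the quoted per-TB rates.
DATASET_TB = 100
INTER_REGION_PER_TB = 20   # copy between AWS regions
EGRESS_PER_TB = 90         # transfer out to the internet

print(f"cross-region copy : ${DATASET_TB * INTER_REGION_PER_TB:,.0f}")
print(f"internet egress   : ${DATASET_TB * EGRESS_PER_TB:,.0f}")
```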

Why Deduplication in the Cloud Matters

Cloud applications are distributed by design and are typically deployed on massively scalable non-relational databases. In non-relational databases, most data is redundant before you even make a copy: common blocks and objects recur across the data set, and databases like MongoDB or Cassandra use a replication factor (RF) of 3 to ensure data integrity in a distributed cluster, so you start out with three copies.

Backups or secondary copies are usually created and maintained via snapshots (for example, using EBS snapshots as noted earlier). The database architecture means that when you take a snapshot, you’re really making three copies of the data. Without any deduplication, this gets really expensive.
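A quick sketch of that math, using the EBS snapshot price from earlier: the "deduplicated" figure assumes the three replicas collapse to a single stored copy, which is an illustrative assumption rather than a guaranteed reduction ratio.

```python
# Snapshots of a 100 TB data set held in a cluster with replication factor 3,
# priced at the EBS snapshot rate of $0.05 per GB-month.
DATASET_GB = 100 * 1000
REPLICATION_FACTOR = 3
SNAPSHOT_PRICE_PER_GB_MONTH = 0.05

without_dedup = DATASET_GB * REPLICATION_FACTOR * SNAPSHOT_PRICE_PER_GB_MONTH * 12
with_dedup = DATASET_GB * SNAPSHOT_PRICE_PER_GB_MONTH * 12   # assumes replicas dedupe to one copy

print(f"snapshot cost, RF=3, no dedup : ${without_dedup:,.0f} per year")
print(f"snapshot cost, deduplicated   : ${with_dedup:,.0f} per year")
```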

Today there are solutions to the public cloud deduplication and data reduction conundrum. Permabit VDO can be easily deployed in public and/or private cloud solutions. Take a look at the following blog from Tom Cook, http://permabit.com/data-efficiency-in-public-clouds/, or, for the technical details, the one from Louis Imershein, http://permabit.com/effective-use-of-data-reduction-in-the-public-cloud/. Both provide examples and details on why and how to deploy deduplication and compression in a public cloud.

Read more


Effective use of data reduction in the Public Cloud


Permabit CEO Tom Cook recently wrote about how data reduction technology can simplify the problems associated with provisioning adequate storage resources in the public cloud while balancing performance and efficiency. The good news is that taking advantage of data reduction software in the public cloud is easier than ever. For example, Permabit's Virtual Data Optimizer (VDO) is a pre-packaged software solution that installs and deploys in minutes on Red Hat Enterprise…

Read more


Data Efficiency in Public Clouds


Public cloud deployments deliver agility, flexibility and elasticity. This is why new workloads are increasingly deployed in public clouds. Worldwide public IT cloud service revenue in 2018 is predicted to be $127B. It's powerful to spin up a data instance instantaneously; however, managing workloads and storage still requires analysis, planning and monthly provisioning. It would be extremely advantageous if public cloud storage capacity could automatically grow and condense to optimize…

Read more


Future Software-Defined Datacenters Defined by Abstraction and Hardware Commoditization

| informationweek.com

The emergence of agile digital business has changed the way we interact with technology and services, and defined new ways of building datacenters and converged infrastructures. The “as-a-service” concept has also been implemented in virtualized infrastructures to boost automation and flexibility without hampering performance or adding to costs.

Software-defined datacenters (SDDC) are the newest model for building, managing and operating large pools of physical resources without worrying about interoperability between hardware vendors or even hypervisors. Abstraction is key to hyperconverged infrastructures as it allows software to simplify operations and manage complex infrastructures.

Converged vs. Hyperconverged Infrastructures

Converged infrastructures (CI) allow computing, storage, networking and virtualization to be built into a single chassis, and hyperconverged infrastructures (HCI) build on top of that by tightening the interaction between all these components with an extra software-defined layer. However, converged infrastructures don't usually allow much flexibility in configuration, as the purchased hardware is usually vendor-dependent and additional components are normally managed separately.

Hyperconverged infrastructures (HCI) are built to be hardware-agnostic and focused more on building on top of converged infrastructures by adding more components, such as data deduplication, WAN optimization and inline compression. The ability to manage the entire infrastructure through a single system and common toolset enables infrastructure expansion through simple point-and-click actions and checkboxes.

Separating physical hardware from infrastructure operations means that workloads and applications can work together more tightly than in legacy or converged infrastructures. At the same time, having a storage controller that acts as a service running on a node means that directly attaching data storage to physical machines is no longer necessary — any new storage will be part of a cluster and configured as part of a single storage pool.

Software-Defined Datacenters

While most of today's organizations are probably not ready to adopt software-defined datacenters – and those that do probably fit into the visionary category – IT decision makers need to understand the business cases, use cases and risks associated with SDDCs. Because hyperconvergence is the actual definition of a software-defined datacenter, IT decision makers should proceed with caution when implementing it, making sure that it delivers the best results for their business.

Gartner predicted that SDDCs will be the future of digital business, with 75 percent of top enterprises considering it mandatory by 2020. We’ve already seen hybrid cloud adoption increase through the integration of software and commodity datacenter hardware offered by public cloud vendors. The rise of SDDCs will probably also be fueled by the need for businesses to become more agile in terms of IT solutions that satisfy business growth and continuity.

Read more


Why 2017 will belong to open source

| CIO News

A few years ago, open source was the less glamorous, low-cost alternative in the enterprise world, and no one would have taken the trouble to predict what its future could look like. Fast-forward to 2016, and many of us are amazed by how open source has become the de facto standard for nearly everything inside an enterprise. Open source today is the primary engine for innovation and business transformation. Cost is probably the last reason for an organisation to go in for open source.

An exclusive market study conducted by North Bridge and Black Duck brought some fascinating statistics a few months ago. In the study titled “Future of Open Source”, about 90% of surveyed organisations said that open source improves efficiency, interoperability and innovation. What is even more significant is the finding that the adoption of open source for production environments outpaced the proprietary software for the first time – more than 55% leverage OSS for production infrastructure.

OpenStack will rule the cloud world
OpenStack has already made its presence felt as an enterprise-class framework for the cloud. An independent study, commissioned by SUSE, reveals that 81 percent of senior IT professionals are planning to move or are already moving to OpenStack Private Cloud. What is more, the most innovative businesses and many Fortune 100 businesses have already adopted OpenStack for their production environment.

As cloud becomes the foundation on which your future business will be run, OpenStack gains the upper hand with its flexibility, agility, performance and efficiency. Significant cost reduction is another major consideration for organisations, especially large enterprises: a proprietary cloud platform is excessively expensive to build and maintain, while OpenStack operations deliver baseline cost reductions. In addition, data reduction in an OpenStack deployment can further reduce operating costs.

Open source to be at the core of digital transformation
Digital transformation is, in fact, one of the biggest headaches for CIOs because of its sheer heterogeneous and all-pervading nature. With data at the center of digital transformation, it is often impossible for CIOs to ensure that the information that percolates down is insightful and secure at the same time. They need a platform that is scalable, flexible, allows innovation and is quick to turn around. This is exactly what open source promises. Not just that: with the heterogeneous environments that exist in enterprises today, interoperability is going to be the most critical factor.

Technologies like Internet of Things (IoT) and SMAC (social, mobile, analytics and cloud) will make data more valuable and voluminous. The diversity of devices and standards that will emerge will make open source a great fit for enterprises to truly leverage these trends. It is surprising to know that almost all ‘digital enterprises’ in the world are already using open source platforms and tools to a great extent. The pace of innovation that open source communities can bring to the table is unprecedented.

Open source-defined data centers
A recent research paper from IDC states that 85 percent of the surveyed enterprises globally consider open source to be the realistic or preferred solution for migrating to software-defined infrastructure (SDI). IDC also recommends avoiding vendor lock-in by deploying open source solutions. Interestingly, many organisations seem to have already understood the benefits of open source clearly, with Linux adoption in the data centers growing steadily at a pace of 15-20%.

The key drivers of SDI – efficiency, scalability and reliability at minimal investment – can be achieved only with the adoption of open source platforms. Open source helps enterprises be more agile in building, deploying and maintaining applications. Going forward, open source adoption is going to be essential for achieving true 'zero downtime' in software-defined infrastructure.

Open source will have an especially large role to play in the software-defined storage (SDS) space. It will help organisations overcome the current challenges associated with SDS. Open SDS solutions can scale infinitely without a need to refresh the entire platform or disrupt the existing environment.

Data reduction can easily be added to SDS or OS environments with Permabit VDO. This simple plug-and-play approach, enabling 2X or more storage reduction, adds to the already efficient operations of open source deployments.

Open source to be decisive in enterprise DevOps journey
Today, software and applications have a direct impact on business success and performance. For that reason, development, testing, delivery and maintenance of applications have become crucial. In the customer-driven economy, it is imperative for organisations to adopt DevOps and containerisation technologies to accelerate release cycles and improve application quality.

Often, enterprises struggle to get the most out of the DevOps model. The investment associated with replicating production environments for testing applications is not negligible. They also struggle to ensure that existing systems are not disturbed while running a testing environment within containers.

Industry analysts believe that microservices running in Docker-like containers, on an open and scalable cloud infrastructure, are the future of applications. OpenStack-based cloud infrastructures are going to be an absolute necessity for a successful enterprise DevOps journey. Flexibility and interoperability apart, the open cloud allows the DevOps team to reuse the same infrastructure as and when containers are created.

In 2017, expect to see open source become the first preference for organisations at the forefront of innovation.

Read more