Data Efficiency in the News

Permabit Debuts Only Complete Data Reduction for the Linux Storage Stack

VDO for hybrid cloud enables enterprises to maximize density, lower TCO

CAMBRIDGE, Mass. – June 28, 2016 – Permabit Technology Corporation, the leader in data reduction technology, today announced the latest release of its Virtual Data Optimizer (VDO) software, VDO 6. The newest release of VDO delivers the company’s patented deduplication, HIOPS Compression™ and thin provisioning in a commercial software package for Linux, expanding availability beyond the OEM marketplace to include the leading Professional Services organizations that are enabling today’s Hybrid Cloud data centers.

New to this release is the VDO for Hybrid Cloud package, which simplifies the installation and configuration of VDO in data centers running Red Hat Enterprise Linux. Also new is the VDO Optimizer™ file system, which provides up to a 20x improvement in data reduction rates when used with existing archive and backup applications.

“Technology professionals are shifting their priorities. Containers, simplified management, security, cloud enablement and IoT are top of mind for organizations and bring new complexities to datacenters. As data growth continues unabated, businesses are confronted with the challenge of scaling new capabilities while remaining competitive,” said David Vellante, co-founder and Chief Analyst, Wikibon. “To do this, they are adopting hybrid cloud strategies that are driving high density computing beyond current physical and electrical limits. In order to cope with the expense associated with ongoing operations and datacenter footprint growth, data reduction software has become table stakes for optimizing storage density and network utilization. Solutions such as Permabit’s VDO fit nicely into hybrid cloud strategies and enable the funding of transformative business technology initiatives.”   

VDO is the only modular data reduction solution available for the Linux block storage stack that works with a broad range of open source and commercial software solutions. As a ready-to-run kernel module for Linux, VDO works directly with Linux block devices and file systems across all types of cloud storage. This unique block-level approach allows Permabit customers to leverage existing file systems, volume management and data protection while delivering 4K inline, highly scalable data reduction in their Linux storage environments. Out of the box, VDO supports block, file and object storage on Red Hat Enterprise Linux and is compatible with Red Hat OpenStack, Ceph and Gluster.
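
The release doesn’t describe VDO’s internals, but the mechanics of inline block-level deduplication are easy to sketch. Below is a toy Python model, an illustration only and not Permabit’s implementation: every 4 KB logical write is hashed, and logical block addresses carrying identical content are mapped to a single shared physical block.

```python
import hashlib

BLOCK_SIZE = 4096  # 4 KB blocks, matching the granularity cited above

class DedupBlockDevice:
    """Toy model: logical block addresses share physical blocks by content hash."""

    def __init__(self):
        self.hash_to_phys = {}  # content hash -> physical block index
        self.logical_map = {}   # logical block address -> physical block index
        self.phys_store = []    # actual stored blocks

    def write(self, lba, data):
        assert len(data) == BLOCK_SIZE
        digest = hashlib.sha256(data).digest()
        phys = self.hash_to_phys.get(digest)
        if phys is None:                 # first time we see this content: store it
            phys = len(self.phys_store)
            self.phys_store.append(data)
            self.hash_to_phys[digest] = phys
        self.logical_map[lba] = phys     # duplicates just point at the same block

    def read(self, lba):
        return self.phys_store[self.logical_map[lba]]

dev = DedupBlockDevice()
for lba in range(100):                   # 100 identical 4 KB writes...
    dev.write(lba, b"x" * BLOCK_SIZE)
print(len(dev.phys_store))               # ...occupy exactly 1 physical block
```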

“Widespread adoption of public, private and hybrid cloud computing is ushering in a new age of efficiency in IT,” said Tom Cook, Permabit CEO. “Data reduction increases data center density and maximizes cloud efficiency.  By introducing data reduction technologies, like Permabit VDO, on a global basis we could see $1.5 trillion saved in data center build-out, $10 billion saved in power costs and the prevention of 20 million metric tons of carbon emissions by 2020. This not only saves businesses precious capital outlays, it will help save the planet! Embracing efficiency is the only way to get more into your cloud.”

VDO for Hybrid Cloud is currently being evaluated by the world’s largest financial and communications companies as well as large government agencies.  It is available immediately to Permabit storage OEMs and Hybrid Cloud Professional Services partners.

For additional information on VDO 6 visit us at http://www.permabit.com.

About Permabit

Permabit pioneers the development of data reduction software that provides data deduplication, compression, and thin provisioning. Our innovative products enable customers to get to market quickly with solutions that cut effective cost, accelerate performance, and deliver a competitive advantage. Just as server virtualization revolutionized the economics of compute, Permabit software is transforming the economics of storage today.

Permabit is headquartered in Cambridge, Massachusetts with operations in California, Korea and Japan. For more information, visit www.permabit.com.

###

Read more

Red Hat Launches OpenShift Primed to Accelerate Container Platform Value with New Partner Technologies

| Yahoo Finance

Red Hat, Inc. (RHT), the world’s leading provider of open source solutions, today introduced OpenShift Primed, a new partner designation to recognize a growing ecosystem of independent software vendors (ISVs) who are integrating solutions with OpenShift, Red Hat’s container application platform. As global enterprise customers explore container adoption, OpenShift Primed is designed not only to recognize partners in active development with OpenShift, but also to give customers better visibility into solutions being built around OpenShift.

Red Hat OpenShift Container Platform (formerly known as OpenShift Enterprise) is the industry’s first web-scale container application platform based on Docker-format Linux containers, Kubernetes orchestration, and Red Hat Enterprise Linux 7. As interest in containers moves from development and testing into production, Red Hat is committed to fostering a strong ecosystem of container-ready solutions that can run across the datacenter’s four footprints while maintaining the strong application consistency ISVs require.

Customers can also view OpenShift Primed solutions on the OpenShift Hub, offering insight into an extensive ecosystem of ISV solutions that can complement their Red Hat deployments. Existing OpenShift customers can access and review the catalog of OpenShift Primed solutions and test them in their own environments according to specific technology needs.

At launch, more than 15 ISVs have earned the OpenShift Primed designation, including: 3scale; 6fusion; CloudBees; CloudMunch; Couchbase; Crunchy Data; Diamanti; Dynatrace; GitLab; Iron.io; NGINX; Nuage Networks; Pachyderm; Roambee; and Sysdig.

Read more

Sometimes Cloud Bursting is Bad – Permabit Briefing Note

| Storage Swiss

IT professionals use the phrase “cloud bursting” to describe a process where they move compute and storage to the cloud when the local data center runs out of those resources. But there is another type of bursting that happens to cloud providers running out of data center floor space. In other words, they are bursting at the seams and face the need to build a brand-new data center facility.

Cloud service providers (CSPs) face a unique challenge. They have to provide IT with services more efficiently and cost effectively than IT can on its own. CSPs count on three technologies to meet those challenges: virtualization and increased virtual machine density to reduce server footprint; open source software to massively reduce software acquisition costs; and white box servers to drive down the cost of physical hardware acquisition.

Most CSPs build their infrastructures from a Linux core running on those white box servers. From there they add storage software like Ceph, Gluster or OpenStack. These solutions all scale out: as they add nodes, they add compute and capacity resources at the same time. But most CSPs consume storage capacity at a faster rate than they consume CPU resources, so they add nodes to get more capacity and end up with CPU resources going to waste.

What CSPs have been missing is universal data efficiency, so they can squeeze more data into the same physical space. They need data efficiency not to lower the price per GB of storage (although that never hurts) but to shrink the physical footprint of the storage infrastructure, so that CPU and capacity grow at a similar rate while the data center footprint, and the operating costs (power and cooling) that come with it, stay as small as possible.
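
A bit of arithmetic makes the imbalance concrete. The node sizes, demand figures, and 2.5:1 reduction ratio below are illustrative assumptions, not numbers from the article; the point is only that without data reduction the node count is set by capacity while CPUs idle, and with reduction the two resources come back into balance.

```python
import math

# All numbers below are illustrative assumptions, not figures from the article.
CORES_PER_NODE = 32
RAW_TB_PER_NODE = 100
cpu_demand_cores = 320      # assumed compute the CSP's workloads actually need
capacity_demand_tb = 2000   # assumed logical data the CSP must store

def nodes_needed(reduction_ratio):
    effective_tb_per_node = RAW_TB_PER_NODE * reduction_ratio
    return max(math.ceil(cpu_demand_cores / CORES_PER_NODE),
               math.ceil(capacity_demand_tb / effective_tb_per_node))

print(nodes_needed(1.0))  # 20 nodes: capacity-bound, half the CPUs sit idle
print(nodes_needed(2.5))  # 10 nodes: CPU and capacity demand are back in balance
```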

There are only a few options available to CSPs wanting to add data efficiency to their environments, and most of them are specific to a single open source product. They also typically raise performance and reliability concerns. What CSPs need is a single solution that runs on multiple open source software solutions, both in the cloud and on-premises (hybrid cloud). That is the goal of Permabit’s new product, VDO 6.0 for Hybrid Cloud.

VDO 6.0 for Hybrid Cloud

The latest version of VDO is designed specifically for the CSP market, which cares more about the physical footprint savings that increase data density than about dollar-per-GB savings, because data center expansion is very costly. And unlike Permabit’s previous beachheads in primary and flash storage, these environments run largely on hard drives, so the data efficiency solution has to be efficient enough not to make a slow storage medium even slower. From a feature perspective, VDO provides deduplication and deduplication-aware data compression. The 6.0 release is Linux based, supports Red Hat Enterprise Linux (RHEL) and supports the open source storage solutions mentioned above.
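
The briefing doesn’t define “deduplication-aware data compression.” One plausible reading, sketched below as an assumption rather than a description of VDO, is that blocks are deduplicated first and only the unique survivors are compressed, so no CPU or disk bandwidth is spent compressing copies that deduplication already removed.

```python
import hashlib
import zlib

def reduce_stream(blocks):
    """Deduplicate first, then compress only the unique survivors, so no
    effort is spent compressing copies that dedupe already removed."""
    seen = set()
    stored_bytes = 0
    for block in blocks:
        digest = hashlib.sha256(block).digest()
        if digest in seen:
            continue                        # duplicate: store a reference only
        seen.add(digest)
        stored_bytes += len(zlib.compress(block))
    return stored_bytes

# 50 copies of one block plus 50 distinct (but compressible) blocks:
blocks = [b"A" * 4096] * 50 + [bytes([i]) * 4096 for i in range(50)]
print(reduce_stream(blocks), "bytes stored vs", len(blocks) * 4096, "raw")
```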

StorageSwiss Take

VDO 6.0 for Hybrid Cloud is an important next step for Permabit. It puts the company squarely in the fastest growing part of the storage market and provides real value. It’s a different type of customer for Permabit: instead of being primarily concerned with performance, CSPs are concerned with physical footprint. Permabit can address that concern as well as deliver cost reduction, and do so across a variety of storage software solutions supporting flash and/or HDD.

Read more

Permabit Debuts Only Complete Data Reduction for the Linux Storage Stack

| PR Newswire

“As the volume of data they store continues to grow and at ever-increasing rates, IT infrastructure teams find themselves between this irresistible force and the relatively inelastic walls and power distribution systems of their data centers,” said Howard Marks, Chief Scientist at DeepStorage, LLC.  “A solution like Permabit’s VDO will not only optimize data on local storage but also in a hybrid cloud, significantly reducing the cost of cloud storage as well as the network load and storage ingest charges, since data is reduced before it’s transferred.”

Read more

Panzura’s cloud integrated storage is a two-way bridge between data centers and the cloud

| Network World

Panzura offers what Harr calls Cloud Integrated Storage. It’s a software platform that combines cloud-based storage with on-premises appliances that intelligently place the bulk of a company’s unused data in the cloud and keep “hot” data that needs to be accessed frequently on premises. Doing so allows organizations to cut down on their on-premises storage footprint – or re-use their existing storage for other purposes – and offload storage capacity to the cloud.
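
Panzura’s actual placement policy isn’t published here, so the sketch below is only a minimal illustration of the hot/cold split the article describes: files touched recently stay on the local appliance, everything else becomes a candidate for cloud storage. The one-week threshold is an invented placeholder.

```python
import time

HOT_WINDOW_SECS = 7 * 24 * 3600  # invented one-week "hot" threshold

def place(files, now=None):
    """Split files into a local (hot) set and a cloud (cold) set by last access."""
    now = now if now is not None else time.time()
    local, cloud = [], []
    for path, last_access in files:
        (local if now - last_access < HOT_WINDOW_SECS else cloud).append(path)
    return local, cloud

files = [
    ("/projects/report.xlsx", time.time() - 3600),             # touched an hour ago
    ("/projects/archive-2014.tar", time.time() - 90 * 86400),  # idle for months
]
hot, cold = place(files)
print("keep on appliance:", hot)
print("offload to cloud:", cold)
```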

“It’s a cloud or die world,” says Harr, who’s been on the job as CEO of Panzura for two months now. Businesses have an imperative to at the very least explore the economic and cultural advantages of using public cloud storage. But to use the cloud, Harr says enterprises need “on-ramps” and “two-way bridges” to get data to and from the cloud. That’s Panzura.

Read more

The milestone of Red Hat Ceph Storage 2

| redhatstorage.redhat.com

This week’s announcement of Red Hat Ceph Storage 2 marks the most significant release of the cloud-native, software-defined storage product since the acquisition of Inktank by Red Hat in 2014. It represents an important milestone, not only in terms of the company’s steadfast commitment to storage but also from the perspective of preparing open source customers for the highly coveted software-defined datacenter.

Why storage matters

We work in an era where storage is often taken for granted, under-glamorized for the role it serves, and yet increasingly essential, often indispensable, to solutions spanning physical, virtual, private cloud, container, and public cloud environments. Nevertheless, in recent studies commissioned by Red Hat, Vanson Bourne Ltd reports that 71% of IT decision makers fear their organizations’ storage solutions won’t be able to handle next-generation workloads, while 451 Research indicates that 57% already have or are moving to software-defined datacenters this year. As most loyal Ceph followers know, this is precisely why folks are so excited about the launch of our new product.

What’s new in 2

While Red Hat Ceph Storage has distinguished itself as a unified storage platform that’s overwhelmingly preferred for OpenStack, for years Ceph has actually fulfilled this role with folks like service providers and the telco community as an object store proven at scale. Red Hat Ceph Storage 2 adds innovation resulting in a far more robust object storage solution for a wide variety of use cases aimed at the enterprise, like active archive, big data, and media content hosting. Customers also receive an easier-to-use product and life-cycle management by virtue of the integrated Red Hat Storage Console 2.

Bottom line

All these features are instrumental to empowering Ceph to keep pace with the ever-growing demands of its spirited user base and to handle multi-petabyte workloads with the grace and efficiency that enterprise customers need for software-defined datacenters. On the surface, they might not appear especially sexy, but for cloud builders and IT decision makers of all sorts, many of whom are already in the loyal Ceph community, they are a breath of fresh air to an otherwise stifled march toward storage infrastructure agility.

Read more

Nutanix Looks to Move Beyond Hyperconverged Space

| eweek.com

Nutanix, which has made its name in providing software for the growing hyperconverged infrastructure market, is looking to take a larger role in the data center.

At the company’s .NEXT Conference 2016, June 21 in Las Vegas, Nutanix officials announced additions to its offerings designed to enable enterprises to build out data center environments with the tools, flexibility, automation and ease of use they find in public clouds like Amazon Web Services (AWS), while keeping hold of the security and control of their businesses.

Nutanix is looking to grow from a vendor that sells software into the hyperconverged infrastructure space to one that can enable businesses to run all of their applications and that can connect to other technology platforms, according to Greg Smith, senior director of product and technical marketing at the company.

“Our ambition has long been to move beyond hyperconverged and become a platform for the data center,” Smith told eWEEK. “We’re not offering solutions, but a platform. We want to run all the workloads in the data center.”

Read more

Embracing the Increased Diversity of Storage

| itbusinessedge.com

Of the three pillars of enterprise infrastructure – compute, storage and networking – storage remains the most complex.

I know, processors are still gaining in strength and flexibility and networking is, well, networking, but in terms of options, storage is the most diverse. Do you go all-cloud, all-local, or hybrid? Do you opt for all-Flash or hybrid disk, or even tape, solutions? And then there is the rising cadre of in-memory and server-side solutions that do away with independent storage infrastructure altogether.

One thing is certain: The enterprise will need access to vast amounts of untapped storage in the coming years if it is to have any chance of realizing the benefits of Big Data and the Internet of Things. This may fly in the face of recent market data that has both the price and capacity of storage deployments on the wane, but as IDC noted in its latest quarterly assessment, this has more to do with changing buying patterns than diminishing demand. Sales of large external arrays, which represent the largest market segment, dropped by 3.7 percent, while ODM sales to hyperscale enterprises tumbled nearly 40 percent, which sounds like a lot but is largely in keeping with what has so far been a highly volatile market.

On the upside, however, both Flash-based solutions and server-side deployments are on the rise. According to new data from 451 Research, Flash is now present in 90 percent of enterprises, with more than half having already deployed hybrid SANs and another 30 percent looking to make the move within two years. Perhaps even more significant, 27 percent are running all-Flash arrays and an equal portion is planning to do the same in two years. The biggest barrier, of course, is cost, which is why many organizations are pairing their Flash systems with dedupe and compression to stretch capacity as much as five-fold.
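
The economics of pairing flash with dedupe and compression reduce to a single ratio. A short worked example of the five-fold stretch cited above; the raw capacity and $/GB figures are assumed placeholders, not numbers from the research:

```python
raw_flash_gb = 10_000       # assumed raw flash purchase
reduction_ratio = 5.0       # the five-fold dedupe + compression stretch cited above
raw_cost_per_gb = 0.50      # assumed flash price, dollars per raw GB

effective_gb = raw_flash_gb * reduction_ratio
effective_cost_per_gb = raw_cost_per_gb / reduction_ratio
print(f"{effective_gb:,.0f} GB effective capacity")      # 50,000 GB
print(f"${effective_cost_per_gb:.2f} per effective GB")  # $0.10
```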

But since storage is at heart a commodity, many enterprises make the mistake of basing their deployment decisions on technology rather than operational criteria like cost and performance. As HPCwire’s Frank Merritt notes, the primary goal for most organizations should be to build storage infrastructure with a low TCO by taking into account not just upfront costs but lifecycle factors as well. Some best practices to abide by are extensive leveraging of legacy infrastructure and deployment of new systems that stress flexibility, ease of use and, most importantly, scalability. Increased modularity is also a key attribute as it improves the value of physical data center space.

The modern storage environment, then, will be vastly different from the monolithic arrays of the past, and even the criteria for evaluating successful storage operations are shifting away from raw capacity to high degrees of flexibility and performance.

The underlying function is still the same – to keep data readily available – but the scale and scope of that challenge is changing dramatically as the enterprise transitions to the digital economy. Traditional storage architectures still have a role to play, but they are no longer the only game in town.

Read more

NetApp SolidFire Officially Announces Docker Volume Plug-In

| Information Technology

NetApp SolidFire today at DockerCon unveiled the NetApp Docker Volume Plug-In, a universal driver for Docker that allows customers to save data across containers so that data continues to be available (even if the container with which it was associated isn’t) for other services that might need it, and for compliance purposes. The plug-in is now available on GitHub.
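
The plugin’s exact options live in its GitHub repository; as a hedged illustration of the general Docker volume-plugin workflow, here is a minimal sketch using the Docker SDK for Python. The driver name "netapp" is an assumption for the example; the point is simply that a named volume, and the data in it, outlives any single container.

```python
import docker

client = docker.from_env()

# Create a named volume through the vendor's volume driver. The driver
# name "netapp" is an assumption for illustration; the real name is in
# the plugin's documentation on GitHub.
vol = client.volumes.create(name="appdata", driver="netapp")

# One container writes into the volume, then exits and is removed...
client.containers.run(
    "alpine", "sh -c 'echo hello > /data/greeting.txt'",
    volumes={"appdata": {"bind": "/data", "mode": "rw"}}, remove=True)

# ...and a completely different container can still read the data.
out = client.containers.run(
    "alpine", "cat /data/greeting.txt",
    volumes={"appdata": {"bind": "/data", "mode": "ro"}}, remove=True)
print(out)  # b'hello\n'
```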

“The number of use cases for persistent storage alongside containers is ever growing,” says NetApp SolidFire. “Our development work around Docker volume plug-ins makes deploying and managing storage more simple and intuitive. NetApp’s participation in several technology partner programs, including Docker’s Ecosystem Technology Partner program and Mesosphere’s Open DC/OS program, showcase our commitment to the broader container ecosystem.”

Val Bercovici, NetApp SolidFire’s CTO, says the company’s technology makes it easier for cloud providers and enterprises to deliver cloud-like elastic storage without the complexity, overhead, and staff that typically requires. The target buyer of NetApp SolidFire solutions, he adds, tends not to be the storage admin, but is more likely to be VMware teams; OpenStack teams; and, increasingly, developers working with containers at scale.

There’s an interesting evolution happening related to the IT operations-led trend of virtualizing apps for increased resource efficiency on servers, adds Bercovici. With OpenStack, newer software defines the hardware it runs on, he says. Now with the container ecosystem (not limited just to Docker), a continuation of that trend is happening in which enterprise teams developing tablet and other apps are just writing code, looking at compute, the storage tier, CPU resources, pools of memory, etc. NetApp SolidFire addresses that trend, enabling them to quickly and efficiently spin up containers.

Read more

Docker Container Usage Growing

| eweek.com

As the DockerCon 16 conference gets underway June 20 in Seattle, users and advocates of the open-source container technology are being bolstered by multiple reports that imply adoption is growing, although there are some challenges to adoption.

Container storage vendor ClusterHQ issued a study on June 16 that included responses from 310 IT decision-makers as part of its “Container Market Adoption Survey 2016” report. Not surprisingly for a study conducted by a container backer, the report found that 79 percent of organizations are running containers. What is somewhat surprising, however, is that 76 percent of respondents indicated that their organizations are now using containers in production, up from only 38 percent in 2015.

Also on June 16, CloudFoundry issued its “Hope Versus Reality: Containers in 2016” report, based on responses from 711 respondents. In contrast to ClusterHQ’s adoption figures, only 22 percent of the CloudFoundry survey respondents said that they are currently using containers. As to why adoption isn’t higher, 45 percent said their biggest deployment worry is that Docker is too complex to integrate into their environments. In addition, 50 percent indicated that container management is a top challenge.

Among the other key highlights of the CloudFoundry report is that 42 percent of respondents see security isolation as a key benefit of container usage.

Docker is a technology that enables organizations to run applications, so what are the actual applications that organizations are running on Docker today? According to Datadog, 21 percent of companies are running Docker Registry, which is an application that enables organizations to host and deliver containerized apps. Twenty percent of Datadog’s users are running the Nginx Web server.

Docker isn’t just a technology used by small companies either. In fact, most adoptions observed by Datadog are by organizations that are monitoring 500 or more server hosts.

“Larger companies are adopting Docker more rapidly than smaller companies,” K Young, director of Strategic Initiatives at Datadog, told eWEEK.

Also of note in the Datadog study is the finding that of the organizations that run Docker, 25 percent will run an average of 10 or more containers simultaneously. In addition, Young said once Datadog classifies a company as using Docker, it has observed that the organization tends to rapidly scale up usage. Datadog found that companies on average will increase their container count fivefold within nine months of first deploying Docker.

“I expect that when we publish new research on Docker adoption again at some point in the future, we’ll see an even larger container count,” Young said. “There is no sign of any slowing down for Docker adoption.”

Read more