
Busting the handcuffs of traditional data storage

| SiliconANGLE

Premise

The largest and most successful Web companies in the world have proven a new model for managing and scaling a combined architecture of compute and storage. If you’ve heard it once, you’ve heard it a hundred times: “The hyperscale guys don’t use traditional disk arrays.”

Giants such as Facebook Inc. and Google Inc. use a design based on local, distributed storage to solve massive data problems. The key differentiation of this new architecture is extreme scalability and simplicity of management, enabled by automation. Over the years, Wikibon has referred to this approach as “Software-led Infrastructure,” which is analogous to so-called Software-Defined Storage.

Excluding the most mission-critical online transaction processing markets served by the likes of Oracle Corp. and IBM Corp.’s DB2, it’s becoming clear this software-led approach is poised to penetrate mainstream enterprises because it is more cost-effective and agile than traditional infrastructure. Up until recently, however, such systems have lacked the inherent capabilities needed to service core enterprise apps.

This dynamic is changing rapidly. In particular, Microsoft Corp. with Azure Stack and VMware Inc. with its vSAN architecture are demonstrating momentum with tightly integrated and automated storage services. Linux, with its open source ecosystem, is the remaining contender to challenge VMware and Microsoft for mainstream adoption of on-premises and hybrid information technology infrastructure, including data storage.

Upending the ‘iron triangle’ of arrays

Peter Burris, Wikibon’s head of research, recently conducted research that found IT organizations suffer from an infrastructure “iron triangle” that is constraining IT progress. According to Burris, the triangle comprises entrenched IT administrative functions, legacy vendors and technology-led process automation.

In his research, Burris identified three factors IT organizations must consider to break the triangle:

  • Move from a technology to a service administration model;
  • Adopt True Private Cloud to enhance real automation and protect intellectual property that doesn’t belong in the cloud; and
  • Elevate vendors that don’t force false “platform” decisions; that is, be wary of technology vendors’ long history of “adding value” by renaming and repositioning legacy products under vogue technology marketing umbrellas.

The storage industry suffers from entrenched behaviors as much as any other market segment. Traditional array vendors are trying to leverage the iron triangle to slow the decline of legacy businesses while at the same time ramping up investments in newer technologies, both organically and through acquisition. The Linux ecosystem, the lone force that slowed down Microsoft in the 1990s, continues to challenge these entrenched IT norms and is positioned for continued growth in the enterprise.

But there are headwinds.

In a recent research note published on Wikibon (login required), analyst David Floyer argued there are two main factors contributing to the inertia of traditional storage arrays:

  • The lack of equivalent functionality for storage services in this new software-led world; and
  • The cost of migration of existing enterprise storage arrays – aka the iron triangle.

Linux, Floyer argues, is now ready to grab its fair share of mainstream, on-premises enterprise adoption directly as a result of newer, integrated functionality that is hitting the market. As these software-led models emerge in an attempt to replicate cloud, they inevitably will disrupt traditional approaches just as the public cloud has challenged the dominant networked storage models such as Storage Area Network and Network-Attached Storage that have led the industry for two decades.

Linux is becoming increasingly competitive in this race because it is allowing practitioners to follow the game plan Burris laid out in his research, namely:

1) Building momentum on a services model (i.e., delivering robust enterprise storage management services that are integrated into the OS);

2) Enabling these services to be invoked by an orchestration/automation framework (e.g., OpenStack, OpenShift) or directly by an application leveraging microservices (i.e., True Private Cloud); and

3) Relying on vendors that deliver these capabilities through an open ecosystem approach (i.e., they’re not forcing false platform decisions; rather, they’re innovating and integrating into an existing open platform). A scan of the OpenStack Web site gives a glimpse of some of the customers attempting to leverage this approach.

Floyer’s research explores some of the key services required by Linux to challenge for market leadership, with a deeper look at the importance of data reduction as a driver of efficiency and cost reduction for IT organizations.

Types of services

In his research, Floyer cited six classes of storage service that enterprise buyers have expected, which have traditionally been available only within standalone arrays. He posited that these services are changing rapidly, some with the introduction of replacement technologies and others that will increasingly be integrated into the Linux operating system, which will speed adoption. A summary of Floyer’s list of storage services follows:

  • Cache management to overcome slow hard disk drives, which are being replaced by flash (combined with data reduction techniques) to improve performance and facilitate better data sharing
  • Snapshot Management for improved recovery
  • Storage-level Replication, which is changing due to the effects of flash and high-speed interconnects such as 40Gb or 100Gb links. Floyer cited WANdisco’s Paxos technology and the SimpliVity (acquired by HPE) advanced file system as technologies supporting this transformation.
  • Encryption, which has traditionally been confined to disk drives, is overhead-intensive and leaves data in motion exposed. Encryption has been a fundamental capability within the Linux stack for years, and ideally all data would be encrypted; historically, however, encryption overheads have been too cumbersome. With the advent of graphics processing units and field-programmable gate arrays from firms such as Nvidia Corp., encryption overheads are minimized, enabling end-to-end encryption with the application and database as the focal point for both encryption and decryption, not the disk drive.
  • Quality of Service, which is available in virtually all Linux-based arrays but typically only sets a floor under which performance may not dip. Traditional approaches to QoS lack the granularity to set ceilings (for example) or to allow bursting programmatically through a complete, well-defined REST API that serves the needs of individual applications rather than taking a one-size-fits-all approach. NetApp Inc.’s SolidFire has differentiated in this manner from its early days and is a good example of a true software-defined approach that allows both capacity and performance to be provisioned dynamically through software. Capabilities like this are important for automating the provisioning and management of storage services at scale, a key criterion for replicating the public cloud on-prem.
  • Data Reduction – Floyer points out in his research that there are four areas of data reduction that practitioners should understand, including zero suppression, thin provisioning, compression and data de-duplication. Data sharing is a fifth and more nuanced capability that will become important in the future. According to Floyer:

To date… “The most significant shortfall in the Linux stack has been the lack of an integrated data reduction capability, including zero suppression, thin provisioning, de-duplication and compression.”

According to Floyer, “This void has been filled by the recent support of Permabit’s VDO data reduction stack (which includes all the data reduction components) by Red Hat.”

VDO stands for Virtual Data Optimizer. In a recent conversation with Wikibon, Permabit Chief Executive Tom Cook explained that as a Red Hat Technology partner, Permabit obtains early access to Red Hat software, which allows VDO testing and deep integration into the operating system, underscoring Floyer’s argument.
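To make the integration concrete, here is a minimal sketch of standing up a VDO-backed volume. It assumes the vdo management CLI as delivered in RHEL 7.5-era releases (later releases fold VDO into LVM instead, e.g. lvcreate --type vdo); the device name and sizes are placeholders, not recommendations, and this is illustrative rather than Permabit’s or Red Hat’s documented procedure.

    # Minimal sketch, assuming the "vdo" CLI from RHEL 7.5-era VDO packages.
    # Device name and sizes are placeholders; run only as root on a test machine.
    import subprocess

    def run(cmd):
        """Echo and execute a command, raising on failure."""
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    def create_vdo_volume(name, device, logical_size):
        # Create the VDO device. The logical size may exceed the physical size
        # because deduplication, compression and thin provisioning reclaim space.
        run(["vdo", "create", "--name=" + name, "--device=" + device,
             "--vdoLogicalSize=" + logical_size])
        # -K skips discards at mkfs time so the thinly provisioned volume is not
        # written out in full before any data lands on it.
        run(["mkfs.xfs", "-K", "/dev/mapper/" + name])

    if __name__ == "__main__":
        create_vdo_volume("vdo_data", "/dev/sdb", "10T")   # placeholder device/size
        run(["vdostats", "--human-readable"])              # report space savings

Because the logical size can be declared larger than the physical device, applications see an ordinary block device while the data reduction services do their work underneath.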

Why is this relevant? The answer is cost.

The cost challenge

Data reduction is a wonky topic for chief information officers, but it is important because, despite the falling cost per bit, storage remains a huge expense for buyers, often accounting for between 15 and 50 percent of IT infrastructure capital expenditures. As organizations build open hybrid cloud architectures and attempt to compete with public cloud offerings, Linux storage must not only be functionally robust, it must keep getting dramatically cheaper.

The storage growth curve, which for decades marched to the cadence of Moore’s Law, is reshaping as data volumes grow at ever-steeper exponential rates. IoT, M2M communications and 5G will only accelerate this trend.

Data reduction services have been a huge tailwind for more expensive flash devices and are fundamental to reducing costs going forward. Traditionally, the common way Linux customers have achieved efficiencies is to acquire data reduction services (e.g., compression and de-dupe) through an array, which may help lower the cost of the array but perpetuates the iron triangle and, longer term, hurts the overall cost model.
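A back-of-the-envelope sketch shows why this matters to the cost model. The raw flash price below is an assumption for illustration only; the 2.5:1 ratio matches the typical unstructured-data reduction figure cited in the Permabit coverage later in this digest.

    # Illustration only: the raw $/TB figure is an assumption, not a quoted price.
    RAW_COST_PER_TB = 400.0   # assumed cost of raw flash capacity, $/TB
    REDUCTION_RATIO = 2.5     # typical ratio cited for unstructured data

    effective_cost_per_tb = RAW_COST_PER_TB / REDUCTION_RATIO
    print(f"Raw flash:             ${RAW_COST_PER_TB:.0f}/TB")
    print(f"After 2.5:1 reduction: ${effective_cost_per_tb:.0f}/TB of usable capacity")
    # Raw flash:             $400/TB
    # After 2.5:1 reduction: $160/TB of usable capacity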

As underscored in Floyer’s research, the modern approach is to access sets of services that are integrated into the OS and delivered via Linux within an orchestration/automation framework that can manage the workflow. Some cloud service providers (outside of the hyperscale crowd) are sophisticated and have leveraged open-source services to achieve hyperscale-like benefits. Increasingly, these capabilities are coming to established enterprises via the Linux ecosystem and are achieving tighter integration, as discussed earlier.

More work to be done

Wikibon community data center practitioners typically cite three primary areas that observers should watch as indicators of Linux maturity generally and software-defined storage specifically:

1. The importance of orchestration and automation

To truly leverage these services, a management framework is necessary to understand what services have been invoked, to ensure recovery is in place (if needed) and give confidence that software-defined storage and associated services can deliver consistently in a production environment.

Take encryption together with data reduction as an example. Data must be reduced before it is encrypted, because encryption deliberately removes the very patterns that, for example, data de-duplication is trying to find. This example illustrates the benefits of integrated services: if something goes wrong during the process, the system must have deep knowledge of exactly what happened and how to recover. The ideal solution in this example is to have encryption, de-dupe and compression integrated as a set of services embedded in the OS and invoked programmatically by the application where needed and where appropriate.
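A small sketch of the ordering point: reduce first and the payload still encrypts cleanly; encrypt first and there are no patterns left for compression or de-dupe to find. It uses the standard-library zlib module and the third-party cryptography package purely for illustration; it is not the VDO or kernel crypto code path.

    # Illustration only (zlib plus the third-party "cryptography" package).
    import zlib
    from cryptography.fernet import Fernet

    data = b"All work and no play makes Jack a dull boy. " * 1000  # highly redundant
    fernet = Fernet(Fernet.generate_key())

    # Right order: reduce first, then encrypt the already-small payload.
    reduce_then_encrypt = fernet.encrypt(zlib.compress(data))

    # Wrong order: encrypt first; the ciphertext has no patterns left to exploit.
    encrypt_then_reduce = zlib.compress(fernet.encrypt(data))

    print(f"original:            {len(data):>7} bytes")
    print(f"reduce then encrypt: {len(reduce_then_encrypt):>7} bytes")
    print(f"encrypt then reduce: {len(encrypt_then_reduce):>7} bytes (no net savings)")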

2. Application performance

Wikibon believes that replicating hyperscaler-like models on-prem will increasingly require integrating data management features into the OS. Technologists in the Wikibon community indicate that the really high-performance workloads will move to a software-led environment leveraging emerging non-volatile memory technologies and protocols such as NVMe and NVMe over Fabrics (NVMf). Many believe the highest-performance workloads will go into these emerging systems and, over time, eliminate what some call the “horrible storage stack,” meaning the overly cumbersome storage protocols that have been forged into the iron triangle for years. This will take time, but the business value effects could be overwhelming, with game-changing performance and low latencies as disruptive to storage as high-frequency trading has been to Wall Street, ideally without the downside.

3. Organizational issues

As Global 2000 organizations adopt this new software-led approach, there are non-technology-related issues that must be overcome. “People, process and technology” is a bit of a bromide, but we hear it all the time: “Technology is the easy part…. People and process are the difficult ones.” The storage iron triangle will not be easily disassembled. The question remains: Will the economics of open source and business model integrations such as those discussed here overwhelm entrenched processes and the people who own them?

On the surface, open source services are the most likely candidates to replicate hyperscale environments because of the collective pace of innovation and economic advantages. To date, however, a company such as VMware has demonstrated that it can deliver more robust enterprise services faster than the open-source alternatives, though not at hyperscale.

History is on the side of open source. If the ecosystem can deliver on its cost, scalability and functionality promises, it’s a good bet that the tech gap will close rapidly and economic momentum will follow. Process change and people skills will likely be more challenging.

(Disclosure: Wikibon is a division of SiliconANGLE Media Inc., the publisher of Siliconangle.com. Many of the companies referenced in this post are clients of Wikibon. Please read my Ethics Statement.)

 

Read more


Hybrid Cloud Gains in Popularity, Survey Finds

| Light Reading

The hybrid model of cloud computing is gaining more popularity in the enterprise, as businesses move more workloads and applications to public cloud infrastructures and away from private deployments.

Those are some of the findings from RightScale’s annual “State of the Cloud” report, which the company released Wednesday. It’s based on interviews with 1,000 IT professionals, with 48% of them working in companies with more than 1,000 employees.

The biggest takeaway from the report is that enterprises and their IT departments are splitting their cloud dollars between public and private deployments, and creating demands for a hybrid approach.

“The 2017 State of the Cloud Survey shows that while hybrid cloud remains the preferred enterprise strategy, public cloud adoption is growing while private cloud adoption flattened and fewer companies are prioritizing building a private cloud,” according to a blog post accompanying the report. “This was a change from last year’s survey, where we saw strong gains in private cloud use.”

Specifically, 85% of respondents reported having a multi-cloud, hybrid strategy, and that’s up from the 82% who reported a similar approach in 2016. At the same time, private cloud adoption dropped from 77% in 2016 to 72% in 2017.

In the survey, 41% of respondents reported running workloads in public clouds, while 38% said they run workloads in private clouds. In large enterprises, those numbers reverse, with 32% of respondents running workloads in public clouds, and 43% running workloads within private infrastructures.

“It’s important to note that the workloads running in private cloud may include workloads running in existing virtualized environments or bare-metal environments that have been ‘cloudified,’ ” according to the report.

When it comes to adopting cloud technologies and services, there are fewer barriers and concerns this year compared to 2016. The lack of resources and expertise to implement a cloud strategy was still the top concern.

In addition, the report notes that at every cloud expertise level, the “Top 5 Challenges” indicate substantial concern with managing costs. One vehicle that can help manage costs is to apply data reduction technologies to your cloud deployment. Permabit VDO can be applied to public and/or private clouds quickly and easily, enabling cost reductions of 50 percent or more in on-premises, in-transit and public cloud deployments.

Read more


Why 2017 will belong to open source

| CIO News

A few years ago, open source was the less-glamorous and low-cost alternative in the enterprise world, and no one would have taken the trouble to predict what its future could look like. Fast-forward to 2016, and many of us are amazed by how open source has become the de facto standard for nearly everything inside an enterprise. Open source today is the primary engine for innovation and business transformation. Cost is probably the last reason for an organisation to go in for open source.

An exclusive market study conducted by North Bridge and Black Duck surfaced some fascinating statistics a few months ago. In the study, titled “Future of Open Source,” about 90% of surveyed organisations said that open source improves efficiency, interoperability and innovation. Even more significant is the finding that adoption of open source for production environments outpaced proprietary software for the first time: more than 55% leverage OSS for production infrastructure.

OpenStack will rule the cloud world
OpenStack has already made its presence felt as an enterprise-class framework for the cloud. An independent study, commissioned by SUSE, reveals that 81 percent of senior IT professionals are planning to move or are already moving to OpenStack Private Cloud. What is more, the most innovative businesses and many Fortune 100 businesses have already adopted OpenStack for their production environment.

As cloud becomes the foundation on which your future business will be run, OpenStack gains the upper hand with its flexibility, agility, performance and efficiency. Significant cost reduction is another major consideration for organisations, especially large enterprises: a proprietary cloud platform is excessively expensive to build and maintain, while OpenStack operations deliver baseline cost reductions. In addition, data reduction in an OpenStack deployment can further reduce operating costs.

Open source to be at the core of digital transformation
Digital transformation is, in fact, one of the biggest headaches for CIOs because of its sheer heterogeneous and all-pervading nature. With the data at the center of digital transformation, it is often impossible for CIOs to ensure that the information that percolates down is insightful and secure at the same time. They need a platform which is scalable, flexible, allows innovations and is quick enough to turn around. This is exactly what Open Source promises. Not just that, with the current heterogeneous environments that exist in enterprises, interoperability is going to be the most critical factor.

Technologies like Internet of Things (IoT) and SMAC (social, mobile, analytics and cloud) will make data more valuable and voluminous. The diversity of devices and standards that will emerge will make open source a great fit for enterprises to truly leverage these trends. It is surprising to know that almost all ‘digital enterprises’ in the world are already using open source platforms and tools to a great extent. The pace of innovation that open source communities can bring to the table is unprecedented.

Open source-defined data centers
A recent research paper from IDC states that 85 percent of the surveyed enterprises globally consider open source to be the realistic or preferred solution for migrating to software-defined infrastructure (SDI). IDC also recommends avoiding vendor lock-in by deploying open source solutions. Interestingly, many organisations seem to have already understood the benefits of open source clearly, with Linux adoption in data centers growing steadily at a pace of 15-20%.

The key drivers of SDI – efficiency, scalability and reliability at minimal investment – can be achieved only with the adoption of open source platforms. Open source helps enterprises be more agile in building, deploying and maintaining applications. In the coming days, open source adoption is going to be essential for achieving true ‘zero downtime’ in software-defined infrastructure.

Open source will have an especially large role to play in the software-defined storage (SDS) space. It will help organisations overcome the current challenges associated with SDS. Open SDS solutions can scale infinitely without the need to refresh the entire platform or disrupt the existing functioning environment.

Data reduction can easily be added to SDS or OS environments with Permabit VDO. This simple plug-and-play approach enables 2X or more storage reduction, adding to the already efficient operations of open source deployments.

Open source to be decisive in enterprise DevOps journey
Today, software and applications have a direct impact on business success and performance. As a result, the development, testing, delivery and maintenance of applications have become crucial. In the customer-driven economy, it is imperative for organisations to adopt DevOps and containerisation technologies to accelerate release cycles and improve the quality of applications.

Often, enterprises struggle to get the most out of the DevOps model. The investment associated with replicating production environments for testing apps is not negligible. They also fail to ensure that existing systems are not disturbed while running a testing environment within containers.

Industry analysts believe that microservices running in Docker-like containers on an open and scalable cloud infrastructure are the future of applications. OpenStack-based cloud infrastructures are going to be an absolute necessity for enterprises on a successful DevOps journey. Flexibility and interoperability apart, the open cloud allows the DevOps team to reuse the same infrastructure as and when containers are created.

In 2017, expect to see open source become the first preference for organisations that are at the forefront of innovation.

Read more


Enterprise storage in 2017: trends and challenges

| Information Age

Information Age previews the storage landscape in 2017 – from the technologies that businesses will implement to the new challenges they will face.

The enthusiastic outsourcing to the cloud by enterprise CIOs in 2016 will start to tail off in 2017, as finance directors discover that the high costs are not viable long-term. Board-level management will try to reconcile the alluring simplicity they bought into against the lack of visibility into hardware and operations.

As enterprises attempt to solve the issue of maximising a return for using the cloud, many will realise that the arrangement they are in may not be suitable across the board and seek to bring some of their data back in-house.

It will sink in that using cloud for small data sets can work really well in the enterprise, but as soon as the volume of data grows to a sizeable amount, the outsourced model becomes extremely costly.

Enterprises will extract the most value from their IT infrastructures through hybrid cloud in 2017, keeping a large amount of data on-premise using private cloud and leveraging key aspects of public cloud for distribution, crunching numbers and cloud compute, for example.

‘The combined cost of managing all storage from people, software and full infrastructure is getting very expensive as retention rates on varying storage systems differ,’ says Matt Starr, CTO at Spectra Logic. ‘There is also the added pressure of legislation and compliance as more people want or need to keep everything forever.

‘We predict no significant uptick on storage spend in 2017, and certainly no drastic doubling of spend,’ says Starr. ‘You will see the transition from rotational to flash. Budgets aren’t keeping up with the rates that data is increasing.’

The prospect of a hybrid data centre will, however, trigger more investment eventually. The model is a more efficient capacity tier based on pure object storage at the drive level, with a combination of high-performance HDDs (hard disk drives) and SSDs (solid-state drives) above it.

Hybrid technology has been used successfully in laptops and desktop computers for years, but it’s only just beginning to be considered for enterprise-scale data centres.

While the industry is in the very early stages of implementing this new method for enterprise, Fagan expects 70% of new data centres to be hybrid by 2020.

‘This is a trend that I expect to briskly pick up pace,’ he says. ‘As the need for faster and more efficient storage becomes more pressing, we must all look to make smart plans for the inevitable data.’

One “must have” is data reduction technology. By applying data reduction to the software stack, data density, costs and efficiency will all improve. If Red Hat Linux is part of your strategy, deploying Permabit VDO data reduction is as easy as plug in and go. Storage consumption, data center footprint and operating costs can drop by 50% or more.

 

Read more


Filling the Linux Data Reduction Gap

| Storage Swiss

Most data centers consider data reduction a “must have” feature for storage. The data reduction software should be able to deliver its capabilities without compromising performance. While many storage systems and a few operating systems now include data reduction as a default, Linux has been curiously absent. While some data reduction solutions exist in the open source community, they typically either don’t perform well or don’t provide the level of reduction of proprietary solutions. In short, Linux has a data reduction gap. Permabit is moving rapidly to fill this gap by bringing its data reduction solutions to Linux.

About Permabit

Permabit is a leader in the data reduction market. Its Virtual Data Optimizer (VDO) software is used by many traditional storage vendors to speed their time to market with a data reduction capability. VDO provides software-based deduplication, compression and thin provisioning. It has been used by a variety of vendors ranging from all-flash array suppliers to disk backup solutions. The primary focus of the Permabit solution has always been to provide data reduction with no noticeable impact on performance.

Widespread Linux Support

The first step into Linux for Permabit was to release VDO with support for Red Hat Enterprise Linux. That release was followed by additional support for Ubuntu Linux from Canonical. Recently, as Storage Switzerland covered in its briefing note “Open Software Defined Storage needs Data Reduction,” the VDO solution was certified for use with Red Hat Ceph and Gluster. In addition, LINBIT, the Linux open source high availability and geo-clustering replication vendor, announced that the two companies worked together to ensure that VDO works with LINBIT’s DRBD to minimize bandwidth requirements for HA and DR.

Enhanced Business Model

Permabit’s traditional go-to-market strategy for VDO was through Original Equipment Manufacturers (OEMs). Before standardizing on VDO, these OEMs heavily tested the product, making it potentially the most well-vetted data reduction solution on the market today. But the solution was only available to OEMs, not directly to end-user data centers and cloud providers. Linux customers wanting Permabit’s data reduction needed an easier way to get the software. As a result, Permabit’s most recent announcement is an expansion of its go-to-market strategy to include direct access to the solution.

StorageSwiss Take

Data reduction is table stakes for the modern storage infrastructure. At the same time, Linux-based software-defined storage solutions are seeing a rapid increase in adoption, even in non-Linux data centers. Permabit’s VDO fills the data reduction gap and now gives these storage solutions a competitive advantage.

Read more


Why the operating system matters even more in 2017

| Homepage

Operating systems don’t quite date back to the beginning of computing, but they go back far enough. Mainframe customers wrote the first ones in the late 1950s, with operating systems that we’d more clearly recognize as such today—including OS/360 from IBM and Unix from Bell Labs—following over the next couple of decades.

An operating system performs a wide variety of useful functions in a system, but it’s helpful to think of those as falling into three general categories.

First, the operating system sits on top of a physical system and talks to the hardware. This insulates application software from many hardware implementation details. Among other benefits, this provides more freedom to innovate in hardware because it’s the operating system that shoulders most of the burden of supporting new processors and other aspects of the server design—not the application developer. Arguably, hardware innovation will become even more important as machine learning and other key software trends can no longer depend on CMOS process scaling for reliable year-over-year performance increases. With the increasingly widespread adoption of hybrid cloud architectures, the portability provided by this abstraction layer is only becoming more important.

Second, the operating system—specifically the kernel—performs common tasks that applications require. It manages process scheduling, power management, root access permissions, memory allocation, and all the other low-level housekeeping and operational details needed to keep a system running efficiently and securely.

Finally, the operating system serves as the interface to both its own “userland” programs—think system utilities such as logging, performance profiling, and so forth—and applications that a user has written. The operating system should provide a consistent interface for apps through APIs (application programming interfaces) based on open standards. Furthermore, commercially supported operating systems also bring with them business and technical relationships with third-party application providers, as well as content channels to add other trusted content to the platform.

The computing technology landscape has changed considerably over the past couple of years. This has had the effect of shifting how we think about operating systems and what they do, even as they remain as central as ever. Consider changes in how applications are packaged, the rapid growth of computing infrastructures, and the threat and vulnerability landscape.

Containerization

Applications running in Linux containers are isolated within a single copy of the operating system running on a physical server. This approach stands in contrast to hypervisor-based virtualization in which each application is bound to a complete copy of a guest operating system and communicates with the hardware through the intervening hypervisor. In short, hypervisors virtualize the hardware resources, whereas containers virtualize the operating system resources. As a result, containers consume few system resources, such as memory, and impose essentially no performance overhead on the application.

Scale

Another significant shift is that we increasingly think in terms of computing resources at the scale point of the datacenter rather than the individual server. This transition has been going on since the early days of the web, of course. However, today we’re seeing the reimagining of high-performance computing “grid” technologies both for traditional batch workloads as well as for newer services-oriented styles.

Dovetailing neatly with containers, applications based on loosely coupled “microservices” (running in containers)—with or without persistent storage—are becoming a popular cloud-native approach. This approach, although reminiscent of Service Oriented Architecture (SOA), has demonstrated a more practical and open way to build composite applications. Microservices, through a fine-grained, loosely coupled architecture, allow an application architecture to reflect the needs of a single well-defined application function. Rapid updates, scalability and fault tolerance can all be individually addressed in a composite application, whereas in traditional monolithic apps it’s much more difficult to keep changes to one component from having unintended effects elsewhere.

Security

All the security hardening, performance tuning, reliability engineering, and certifications that apply to the virtualized world still apply in the containerized one. And, in fact, the operating system shoulders a greater responsibility for providing security and resource isolation in a containerized and software-defined infrastructure world than in the case in which dedicated hardware or other software may be handling some of those tasks. Linux has been the beneficiary of a comprehensive toolbox of security-enforcing functionality built using the open source model, including SELinux for mandatory access controls, a wide range of userspace and kernel-hardening features, identity management and access control, and encryption.

Some things change, some don’t

Priorities associated with operating system development and operation have certainly shifted. The focus today is far more about automating deployments at scale than it is about customizing, tuning, and optimizing single servers. At the same time, there’s an increase in both the pace and pervasiveness of threats to a no longer clearly-defined security perimeter—requiring a systematic understanding of the risks and how to mitigate breaches quickly.

Add it all together and applications become much more adaptable, much more mobile, much more distributed, much more robust, and much more lightweight. Their placement, provisioning, and securing must become more automated. But they still need to run on something. Something solid. Something open. Something that’s capable of evolving for new requirements and new types of workloads. And that something is a (Linux) operating system.

Read more


Permabit Hits New Milestone in 2016 by Delivering the First Complete Data Reduction for Linux

| PR Newswire

Permabit Technology Corporation, the data reduction experts, brought complete storage efficiency to Linux in 2016.  The company’s Virtual Data Optimizer (VDO) software now delivers advanced deduplication, HIOPS Compression® and fine-grained thin provisioning directly to data centers as they struggle to address the storage density challenges driven by the growth of Big Data and widespread cloud adoption. VDO’s modular, operating system-centric approach addresses the widest possible range of use cases from OLTP to backup workloads, in single system or distributed environments.

Chronologically in 2016, Permabit announced:

  • Availability of VDO for Data Centers on Red Hat Enterprise Linux
  • A partnership with LINBIT delivering bandwidth optimized offsite replication
  • Support for VDO for Ubuntu Linux from Canonical
  • A partnership with AHA Products Group to support development of advanced data compression solutions for Linux
  • A partnership with Red Hat and qualification of VDO with Red Hat Ceph Storage and Red Hat Gluster Storage
  • A reseller agreement with Permabit Integration Partner CalSoft Pvt. Ltd.

“General purpose data reduction has vast potential to control data center costs in the Linux ecosystem,” noted Lynn Renshaw of Peerless Business Intelligence. “In 2016, a single square foot of high density data center space costs up to $3,000 to build and consumes up to 11.4kW hours of power per year at an average cost of $1,357 per square foot in the USA.  Roughly 1/3rd of all data center space is consumed by storage. Left unchecked, IDC estimates worldwide data center space will total 1.94 billion square feet by 2020.  Data reduction (such as Permabit VDO) can reduce data center storage footprints as much as 75%.  This technology alone could save $1.5 Trillion dollars in future data center build out costs.”
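Renshaw’s trillion-dollar figure appears to follow directly from the numbers quoted; a quick arithmetic check, using the upper-bound build cost and leaving the quoted power costs aside:

    # Arithmetic check of the estimate quoted above.
    TOTAL_SQFT_2020 = 1.94e9      # IDC: worldwide data center space by 2020 (sq ft)
    STORAGE_SHARE = 1 / 3         # roughly one-third of space is consumed by storage
    FOOTPRINT_REDUCTION = 0.75    # data reduction can shrink that footprint by up to 75%
    BUILD_COST_PER_SQFT = 3000    # up to $3,000 per square foot to build

    avoided_buildout = (TOTAL_SQFT_2020 * STORAGE_SHARE *
                        FOOTPRINT_REDUCTION * BUILD_COST_PER_SQFT)
    print(f"Avoided future build-out cost: ${avoided_buildout / 1e12:.1f} trillion")
    # Prints roughly $1.5 trillion, matching the figure cited above.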

Permabit generated great excitement in 2016 with VDO’s ability to lower the TCO of software-defined storage (SDS) solutions. After witnessing the success of public cloud open source storage, many private data center operations teams began implementing software-based storage in 2016. Following the public cloud providers, private data centers embraced the huge economic advantages of vendor neutrality, hardware independence and increased utilization that come from SDS, while still customizing for their own unique business requirements. VDO’s modular, operating system-centric approach deploys seamlessly, which is why major private data centers wrapped up successful evaluations of VDO within SDS solutions.

According to Permabit CEO and President Tom Cook, “Dramatic changes in the storage industry over the past year have resulted in Permabit expanding from our traditional OEM-only business model, to one positioned to address today’s software-defined data center requirements.  As we looked at worldwide deployments of software defined infrastructure, we realized that Linux is at the center of innovation in the space.  Because of this, as the only complete Linux data reduction solution, VDO is uniquely positioned to radically alter storage economics, drastically reducing TCO.  With our expanded business model, immediate benefits can be realized across today’s Linux-based software-defined environments.”

Read more


More than 50 Percent of Businesses Not Leveraging Public Cloud

| tmcnet.com

While more than 50 percent of respondents are not currently leveraging public cloud, 80 percent plan on migrating more within the next year, according to a new study conducted by TriCore Solutions, the application management experts. As new streams of data are continuing to appear, from mobile apps to artificial intelligence, companies in the future will rely heavily on cloud and digital transformation to minimize complexity.

Here are some key results from the survey:

  • Public Cloud Considerations: Cloud initiatives are underway for companies in the mid-market up through the Fortune 500, though IT leaders continue to struggle with what to migrate, when to migrate, and how best to execute the process. More than half of those surveyed plan to migrate web applications and development systems to the public cloud in the next year, prioritizing these over other migrations. More than two thirds have 25 percent or less of their infrastructure in the public cloud, showing that public cloud still has far to go before it becomes the prevailing environment that IT leaders must manage. With increasingly complex hybrid environments, managed service providers will become a more important resource to help facilitate the process.
  • Running Smarter Still on Prem: Whether running Oracle EBS, PeopleSoft or other platforms, companies rely on ERP systems to run their businesses. Only 20 percent of respondents expect to migrate ERP systems to public cloud in the next year, indicating the importance of hybrid cloud environments for companies to manage business-critical operations on premise alongside other applications and platforms in the public cloud.
  • Prepping for Digital Transformation: With the increased amount of data in today’s IT environment – from machine data to social media data to transactional data and everything in between – the need for managed service providers to make sense of it all has never been more important. 53 percent of respondents plan on outsourcing their IT infrastructure in the future, and respondents anticipate a nearly 20 percent increase in applications being outsourced in the future, as well.

As worldwide spending on cloud continues to grow, and with the increased amount of data in today’s IT environment, IT leaders need to carefully consider the keys to IT success when migrating to a cloud-based environment. Understanding how to help businesses unlock and leverage the endless data available to them will drive IT success for managed service providers in 2017 and beyond.

 

Read more


Digitally Advanced Traditional Enterprises Are Eight Times More Likely to Grow Share

| Stock Market

Bain & Company and Red Hat (NYSE: RHT), the world’s leading provider of open source solutions, today released the results of joint research aimed at determining how deeply enterprises are committed to digital transformation and the benefits these enterprises are seeing. The research report, For Traditional Enterprises, the Path to Digital and the Role of Containers, surveyed nearly 450 U.S. executives, IT leaders and IT personnel across industries and found that businesses that recognize the potential for digital disruption are looking to new digital technologies – such as cloud computing and modern app development – to increase agility and deliver new services to customers while reducing costs. Yet, strategies and investments in digital transformation are still in their earliest stages.

For those survey respondents that have invested in digital, the technology and business results are compelling. Bain and Red Hat’s research demonstrates that those using new technologies to digitally transform their business experienced:

  • Increased market share. These enterprises are eight times more likely to have grown their market share, compared to those in the earliest stages of digital transformation.
  • Delivery of better products in a more timely fashion through increased adoption of emerging technologies – as much as three times faster than those in the earlier stages of digital transformation.
  • More streamlined development processes, more flexible infrastructure, faster time to market and reduced costs by using containers for application development.

Despite the hype, however, even the most advanced traditional enterprises surveyed still score well below start-ups and emerging enterprises that have embraced new technologies from inception (digital natives). According to the survey results, nearly 80 percent of traditional enterprises score below 65 on a 100-point scale that assesses how these organizations believe they are aligning digital technologies to achieve business outcomes. Ultimately, the report reveals that the degree of progress among respondents moving towards digital transformation varies widely, driven in part by business contexts, actual IT needs and overall attitudes towards technology. It also uncovers some common themes in the research.

As companies progress on their digital adoption journey, they typically invest in increasingly more sophisticated capabilities in support of their technology and business goals. The use of modern application and deployment platforms represents the next wave of digital maturity and is proving to be key in helping companies address their legacy applications and infrastructure.

Containers are one of the most high-profile of these development platforms and a technology that is helping to drive digital transformation within the enterprise. Containers are self-contained environments that allow users to package and isolate applications with their entire runtime dependencies – all of the files necessary to run on clustered, scale-out infrastructure. These capabilities make containers portable across many different environments, including public and private clouds.

While the opportunities created by these emerging technologies are compelling, the speed and path of adoption for containers is somewhat less apparent, according to the Bain and Red Hat report. The biggest hurdles standing in the way of widespread container use according to respondents are common among early stage technologies – lack of familiarity, talent gaps, hesitation to move from existing technology and immature ecosystems – and can often be overcome in time. Vendors are making progress to address more container-specific challenges, such as management tools, applicability across workloads, security and persistent storage, indicating decreasing barriers to adoption.

Read more


Permabit pulls on Red Hat, opens arms for a Linux cuddle

| The Register

Crimson headcover kernel gets dedupe and compression

The Mad Hatter of Linux is getting Alice in Wonderland style physical space virtualisation with thin provisioning, compression and deduplication courtesy of Permabit.

Building on its June 2016 Linux availability announcement, Permabit and Red Hat have a go-to-market partnership based on the former’s Albireo deduplication and HIOPS compression technology being added as a kernel module to Red Hat Linux. Up until now, dedupe and compression have largely been storage array features, only later appearing in software-only storage running on servers and direct-attached disks, SSDs or JBODs.

Against that background, Permabit has had somewhat limited success as a supplier to OEMs of its Virtual Data Optimizer (VDO) dedupe and compression technology, with potential customers largely preferring to build their own dedupe tech. Its most prominent OEM is probably HDS for file storage, via its BlueArc acquisition. Now that RHEL, via Permabit’s VDO, has its own kernel-level dedupe and compression, any attached storage can get the benefit of it.

Permabit CEO Tom Cook is especially keen on the colo angle here. Take a cloud service provider or general colocation operator fitting out their facility with racks of Linux-running servers and storage trays. If they can somehow reduce their storage capacity requirements by, say, 25 per cent for a year, and then 25 per cent the next year and so on, that removes a significant slug of cost from their annual budgets; that’s the way Cook sees it, and he has spreadsheet models and charts to back up his case.

Here’s a chart for a Linux Ceph storage setup, assuming a 2.5:1 data reduction rate and suggesting savings of $370,000 over 5 years with Permabit data reduction installed:

[Chart: Permabit_Linux_Ceph_savings, projected five-year savings for a Linux Ceph deployment with Permabit data reduction]

Permabit’s VDO runs anywhere RHEL runs – in physical servers, in virtual ones and in the public cloud – and enables Red Hat to compete against suppliers of deduping server operating systems, virtual server/storage systems, OpenStack and deduping storage arrays, according to Permabit. It typically provides 2.5:1 data reduction for unstructured data and up to 10:1 reduction for VM images.

VDO works with Ceph and Gluster and it’s payable via a subscription license starting at $199/year for 16TB. It’s available through Permabit resellers and system integrators. ®
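Putting the quoted pricing and reduction ratios together gives a feel for the effective cost per usable terabyte; the sketch below uses only the figures from the article above.

    # Uses only the figures quoted above: $199/year per 16TB, 2.5:1 for
    # unstructured data and up to 10:1 for VM images.
    LICENSED_RAW_TB = 16
    ANNUAL_SUBSCRIPTION = 199.0

    for workload, ratio in [("unstructured data", 2.5), ("VM images (up to)", 10.0)]:
        effective_tb = LICENSED_RAW_TB * ratio
        dollars_per_tb = ANNUAL_SUBSCRIPTION / effective_tb
        print(f"{workload:18s}: {LICENSED_RAW_TB} TB raw -> {effective_tb:.0f} TB effective, "
              f"~${dollars_per_tb:.2f} per effective TB per year")
    # Roughly $5 per effective TB per year for unstructured data, and about
    # $1.24 per effective TB per year for VM images at the 10:1 upper bound.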

Read more