Data Efficiency in the News


2016 Review Shows $148 billion Cloud Market Growing at 25% Annually

| News articles, headlines, videos

New data from Synergy Research Group shows that across six key cloud services and infrastructure market segments, operator and vendor revenues for the four quarters ending September 2016 reached $148 billion, having grown by 25% on an annualized basis. IaaS & PaaS services had the highest growth rate at 53%, followed by hosted private cloud infrastructure services at 35% and enterprise SaaS at 34%. 2016 was notable as the year in which spend on cloud services overtook spend on cloud infrastructure hardware and software. In aggregate, cloud service markets are now growing three times more quickly than cloud infrastructure hardware and software. The companies that featured most prominently among the 2016 market segment leaders were Amazon/AWS, Microsoft, HPE, Cisco, IBM, Salesforce and Dell EMC.

Over the period Q4 2015 to Q3 2016 total spend on hardware and software to build cloud infrastructure exceeded $65 billion, with spend on private clouds accounting for over half of the total but spend on public cloud growing much more rapidly. Investments in infrastructure by cloud service providers helped them to generate almost $30 billion in revenues from cloud infrastructure services (IaaS, PaaS, hosted private cloud services) and over $40 billion from enterprise SaaS, in addition to supporting internet services such as search, social networking, email and e-commerce. UCaaS, while in many ways a different type of market, is also growing steadily and driving some radical changes in business communications.

“We tagged 2015 as the year when cloud became mainstream and I’d say that 2016 is the year that cloud started to dominate many IT market segments,” said Synergy Research Group’s founder and Chief Analyst Jeremy Duke. “Major barriers to cloud adoption are now almost a thing of the past, especially on the public cloud side. Cloud technologies are now generating massive revenues for technology vendors and cloud service providers and yet there are still many years of strong growth ahead.”

One way to improve the density and cost-effectiveness of cloud deployments is to include scalable, high-performance data reduction technologies. If you are using Red Hat Enterprise Linux, including Permabit Virtual Data Optimizer (VDO) can drop costs by 50% or more and improve data density too!
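To put that 50% figure in concrete terms, here is a minimal Python sketch that converts a data reduction ratio into an effective cost per usable terabyte. The raw $/TB price and the 2:1 ratio are illustrative assumptions standing in for the savings cited above, not measured VDO results.

    # Rough cost-per-usable-TB sketch -- hypothetical figures, not measured VDO results.
    def effective_cost_per_tb(raw_cost_per_tb: float, reduction_ratio: float) -> float:
        """Cost of one usable TB once data reduction is applied."""
        return raw_cost_per_tb / reduction_ratio

    raw_cost = 200.0               # assumed $/TB of raw capacity (illustrative)
    for ratio in (1.0, 2.0, 3.0):  # 2:1 corresponds to the ~50% savings cited above
        print(f"{ratio:.0f}:1 reduction -> ${effective_cost_per_tb(raw_cost, ratio):.0f} per usable TB")

At 2:1, each usable terabyte costs half as much as the raw capacity behind it, which is where the "50% or more" framing comes from.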

Read more


Permabit VDO Delivering Performance on Samsung NVMe PCIe SSDs

| storagenewsletter.com

Permabit Technology Corporation announced that its Virtual Data Optimizer (VDO) software for Linux has exceeded the 8GB/s performance throughput barrier for inline compression.

This was accomplished running on a single Samsung Electronics Co., Ltd. NVMe all-flash reference design node.

The latest version of VDO’s HIOPS compression has been optimized to take advantage of today’s multi-core, multi-processor, scale-out architectures to deliver maximum performance in enterprise storage.

To demonstrate this level of performance, the company combined VDO with Red Hat Ceph Storage software and 24 480GB Samsung PM953 U.2 NVMe PCIe SSDs running on the Samsung NVMe reference design platform. Samsung Electronics is offering U.2 Gen 3 X4 NVMe PCIe SSDs. The PM953 that was used in the testing also features 9W TDP (Total Dissipated Power) and a z-height of 7mm.

The resulting reference architecture delivered single-node performance of over 8GB/s read and 3.6GB/s write performance under workloads generated by Ceph RADOS bench. These results are more than twice as fast as published compression performance numbers by proprietary single node storage arrays and were achieved without the use of hardware acceleration boards.

Today’s data center managers are increasingly turning to architectures built around software-defined storage (SDS) to provide highly scalable solutions that control costs. SDS solutions (such as Red Hat Ceph Storage and Red Hat Gluster Storage) must be able to handle enterprise workloads such as databases, virtual servers and virtual desktops as well as, or better than, the proprietary systems that they are meant to replace. While data compression reduces storage costs, one challenge up until now has been finding a compression approach that could run at high-end enterprise speeds, on standard hardware, in an open Linux environment. HIOPS compression technology, incorporated into VDO, addresses these requirements because it serves as a core service of the OS. Any SDS solution that runs on that OS can then scale out to support petabyte-sized deployments.
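For readers who want a rough sense of what a gigabytes-per-second inline compression figure means, the short Python probe below times a general-purpose codec (zlib) on a single core. It is only an illustrative stand-in under stated assumptions: it does not use HIOPS compression, Ceph, or the NVMe hardware described above, and its numbers will fall far short of the 8GB/s result in the article.

    # Minimal single-core compression throughput probe using Python's built-in zlib.
    # Illustrative stand-in only -- not HIOPS compression or the benchmark above.
    import time
    import zlib

    payload = b"some moderately compressible log line 1234567890\n" * 20_000  # ~1 MB sample

    def measure_throughput(data: bytes, rounds: int = 50, level: int = 1) -> float:
        start = time.perf_counter()
        for _ in range(rounds):
            zlib.compress(data, level)
        elapsed = time.perf_counter() - start
        return (len(data) * rounds) / elapsed / 1e9  # GB/s of input processed

    print(f"zlib level-1 throughput: {measure_throughput(payload):.2f} GB/s on one core")

Scaling a single-core result like this across the many cores of a modern server is exactly the multi-core, scale-out optimization the article attributes to HIOPS compression.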

 

Read more


Permabit VDO Delivers Record-setting Performance on Samsung’s NVMe Reference Design Platform

| Stock Market

CAMBRIDGE, Mass., Dec. 21, 2016 /PRNewswire/ — Permabit Technology Corporation, the data reduction experts, announced today that its Virtual Data Optimizer (VDO) software for Linux has exceeded the 8GB/s performance throughput barrier for inline compression. This was accomplished running on a single Samsung NVMe All-Flash Reference Design node.

The latest version of VDO’s HIOPS Compression has been optimized to take advantage of today’s multi-core, multi-processor, scale-out architectures to deliver maximum performance in enterprise storage. To demonstrate this level of performance, Permabit combined VDO with Red Hat Ceph Storage software and 24 480GB Samsung PM953 U.2 NVMe PCIe SSDs (solid state drives) running on the Samsung NVMe Reference Design platform. Samsung Electronics is one of the first companies to offer U.2 Gen 3 X4 NVMe PCIe SSDs. The PM953 that was used in the testing also features nine watts TDP (Total Dissipated Power) and a Z-height of 7mm.

The resulting reference architecture delivered single-node performance of over 8 GB/s read and 3.6GB/s write performance under workloads generated by Ceph RADOS bench. These results are more than twice as fast as published compression performance numbers by proprietary single node storage arrays and were achieved without the use of hardware acceleration boards.

Today’s data center managers are increasingly turning to architectures built around Software-Defined Storage (SDS) to provide highly scalable solutions that control costs. SDS solutions (such as Red Hat Ceph Storage and Red Hat Gluster Storage) must be able to handle enterprise workloads such as databases, virtual servers and virtual desktops as well as, or better than, the proprietary systems that they are meant to replace. While data compression greatly reduces storage costs, one challenge up until now has been finding a compression approach that could run at high-end enterprise speeds, on standard hardware, in an open Linux environment. HIOPS compression technology, incorporated into VDO, addresses all of these requirements because it serves as a core service of the OS. Any SDS solution that runs on that OS can then scale out to support petabyte-sized deployments.

“Previous systems relied on proprietary hardware acceleration based on ASICs or FPGAs to deliver a similar level of performance. Permabit Labs has demonstrated for the first time that HIOPS compression can be achieved with industry-standard processors and platforms,” said Louis Imershein, VP Product for Permabit Technology Corporation. “We’re looking forward to also leveraging the full multi-node, scale-out capabilities of the Red Hat Ceph storage platform as we test further in 2017.”

 

 

Read more


Filling the Linux Data Reduction Gap

| Storage Swiss

Most data centers consider data reduction a “must have” feature for storage. The data reduction software should be able to deliver its capabilities without compromising performance. While many storage systems and a few operating systems now include data reduction as a default, Linux has been curiously absent. While some data reduction solutions exist in the open source community, they typically either don’t perform well or don’t provide the level of reduction of proprietary solutions. In short, Linux has a data reduction gap. Permabit is moving rapidly to fill this gap by bringing its data reduction solutions to Linux.

About Permabit

Permabit is a leader in the data reduction market. Its Virtual Data Optimizer (VDO) software is used by many traditional storage vendors to speed their time to market with a data reduction capability. VDO provides software-based deduplication, compression and thin provisioning, and has been used by a variety of vendors ranging from all-flash array suppliers to disk backup solutions. The primary focus of the Permabit solution has always been to provide data reduction with no noticeable impact on performance.
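As a conceptual illustration of the deduplication idea (fingerprint each block and store identical blocks only once), here is a toy Python sketch. It is not VDO’s implementation: the fixed 4 KB block size, SHA-256 fingerprints and in-memory index are assumptions chosen for clarity.

    # Toy block-level deduplication index -- conceptual only, not VDO's algorithm.
    import hashlib

    BLOCK_SIZE = 4096  # assumed fixed block size

    def dedupe(data: bytes):
        """Return (unique_blocks, refs): each duplicate block is stored only once."""
        store = {}   # fingerprint -> block payload (the single physical copy)
        refs = []    # logical layout expressed as a list of fingerprints
        for i in range(0, len(data), BLOCK_SIZE):
            block = data[i:i + BLOCK_SIZE]
            fp = hashlib.sha256(block).hexdigest()
            store.setdefault(fp, block)
            refs.append(fp)
        return store, refs

    sample = b"A" * BLOCK_SIZE * 3 + b"B" * BLOCK_SIZE  # three identical blocks plus one unique
    store, refs = dedupe(sample)
    print(f"{len(refs)} logical blocks stored as {len(store)} physical blocks")

A production implementation does this inline, keeps the index persistent and layers compression and thin provisioning on top, but the space saving rests on the same observation: identical blocks need only one physical copy.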

Widespread Linux Support

The first step into Linux for Permabit was to release VDO with support for Red Hat Enterprise Linux. That release was followed by additional support for Ubuntu Linux from Canonical. Recently, as Storage Switzerland covered in its briefing note “Open Software Defined Storage needs Data Reduction”, the VDO solution was certified for use with Red Hat Ceph and Gluster. In addition, LINBIT, the Linux open source high availability and geo-clustering replication vendor, announced that the two companies worked together to ensure that VDO works with LINBIT’s DRBD to minimize bandwidth requirements for HA and DR.

Enhanced Business Model

Permabit’s traditional go-to-market strategy with VDO was through Original Equipment Manufacturers (OEMs). Before standardizing on VDO, these OEMs heavily tested the product, making it potentially the most well-vetted data reduction solution on the market today. But the solution was only available to OEMs, not directly to end-user data centers and cloud providers. Linux customers wanting Permabit’s data reduction needed an easy way to get to the software. As a result, Permabit’s most recent announcement is an expansion of its go-to-market strategy to include direct access to the solution.

StorageSwiss Take

Data reduction is table stakes for the modern storage infrastructure. At the same time, Linux-based software-defined storage solutions are seeing a rapid increase in adoption, even in non-Linux data centers. Permabit’s VDO fills the data reduction gap and now gives these storage solutions a competitive advantage.

Read more

Software Defined Storage Market CAGR of 31.62% during 2016-2020

| Press releases

The Software Defined Storage Market report focuses on the major drivers and restraints for the key players. It also provides granular analysis of the market share, segmentation, revenue forecasts and geographic regions of the market. The report is a professional and in-depth study of the current state of the Software Defined Storage industry.

Software-defined storage (SDS) is a central part of the software-defined data center (SDDC), an emerging information technology (IT) model. It has been designed for organizations to store business data and to enable the speedy delivery of IT services. In an SDS framework, storage services are delivered as a software layer that is abstracted from the underlying hardware. SDS solutions eliminate the need for manual configuration of the operational process in the SDDC; they accomplish this by giving control to the software layer, separating it from the underlying hardware.

Key Vendors of Software Defined Storage Market:

• EMC

• HP

• IBM

• VMware

Market Drivers of Software Defined Storage:

• Effective management of unstructured data

• For a full, detailed list, view our report

Market Challenge of Software Defined Storage:

• Risk of data privacy and security breach

• For a full, detailed list, view our report

Market Trend of Software Defined Storage:

• Rise of OpenStack

• Emergence of hyper-convergence technology

• Upsurge of open-source SDS for containers

• Increase in cloud computing use

• Emergence of VSAs

SDS continues to evolve and will be further enabled by the adoption of OpenStack and of open-source SDS for containers. These will broaden the reach of SDS as well as improve its economics.

 

Read more


IT Priorities 2017: Hybrid cloud set to dominate datacentre infrastructure buying decisions

| ComputerWeekly.com

The 2017 Computer Weekly/TechTarget IT Priorities poll suggests the next 12 months will see enterprise IT buyers move to increase the hybrid-readiness of their datacentre facilities.

Connecting on-premise datacentre assets to public cloud resources will be a top investment priority for UK and European IT decision makers in 2017, research suggests.

According to the findings of the 2017 Computer Weekly/TechTarget IT Priorities survey, readying their on-premise infrastructure for hybrid cloud has been voted the number one datacentre investment priority by IT decision makers across the continent.

With enterprises increasingly looking to tap into off-premise resources, the hybrid cloud is often seen as a delivery model that will enable them to do that while making the most of their existing datacentre investments.

All of the major cloud providers – including Microsoft, Google and Amazon Web Services (AWS) – have spent a large portion of 2016 setting out their enterprise hybrid cloud strategies for this reason.

According to analyst Gartner, 2017 is also likely to see an uptick in enterprises looking to manage public, private and hybrid cloud resources from a multitude of providers, as their digital transformation efforts in this area continue to mature and evolve.

“While public cloud usage will continue to increase, the use of private cloud and hosted private cloud services is also expected to increase at least through 2017,” said Gartner.

“The increased use of multiple public cloud providers, plus growth in various types of private cloud services, will create a multicloud environment in most enterprises and a need to coordinate cloud usage using hybrid scenarios.”

The annual Computer Weekly/TechTarget poll shines a light on the investment plans of European IT managers over the coming year, and more than 1,000 of them (including 322 from the UK) took part in the 2017 survey, with cloud – overall – set to be a keen area of focus for the majority over the course of the next 12 months.

Just over a quarter (27.8%) of respondents said they anticipate their IT budget will remain the same in 2017, while 38.1% expect it to increase, and cloud will be responsible for consuming a growing proportion of their annual spend.

Indeed, cloud services were namechecked in both the UK and European versions of the poll as the number one area in which IT decision makers expect to see a budget increase during 2017.

The results are in line with analyst predictions for the year ahead, with market watcher 451 Research’s latest Voice of the enterprise report stating that enterprises will increase their IT budget spend on cloud from 28% this year to 34% in 2017.

Outside of the infrastructure space, the 2017 IT Priorities survey also revealed that software-as-a-service (SaaS) looks set to be the most highly favoured application deployment model in 2017, with just over half (57%) of UK respondents voting for it. The same was true in the European version, where 46.9% of respondents voted for SaaS.

Hybrid cloud environments were voted the second most popular location for application deployment in both versions of the poll, reinforcing earlier findings from the report that suggest building out capabilities in this area will be a top priority for enterprise IT leaders in 2017.

 

Read more


Who needs traditional storage anymore?

| Gigaom

The traditional enterprise storage market is declining and there are several reasons why. Some of them are easier to identify than others, but one of the most interesting aspects is that there’s a radicalization in workloads, and hence in storage requirements.

Storage as we know it, SAN or NAS, will become less relevant in the future. We’ve already had a glimpse of this from hyperconvergence, but that kind of infrastructure tries to balance all the resources – sometimes at the expense of overall efficiency – and is more compute-driven than data-driven. Data-intensive workloads have different requirements and need different storage solutions.

The Rise of Flash

All-flash systems are gaining in popularity, and are more efficient than hybrid and all-disk counterparts. Inline compression and deduplication, for example, are much more viable on a Flash based system than on others, making it easier to achieve better performance even from the smallest of configurations. This means doing more with less.

At the same time, all-flash allows for better performance and lower latency and, even more importantly, the latter is much more consistent and predictable over time.

The Rise of Objects

At the same time, what I’ve always described as “Flash & Trash” is actually happening. Enterprises are implementing large scale capacity-driven storage infrastructures to store all the secondary data. I’m quite fond of object storage, but there are several ways of tackling it and the common denominators are scale-out, software-defined and commodity hardware to get the best $/GB.

Sometimes, your capacity tier could be the cloud (especially for smaller organizations with small amounts of inactive data to store) but the concept is the same, as are the benefits. At the moment the best $/GB is still obtained by Hard Disks (or tapes) but with the rate of advancement in Flash manufacturing, before you know it we’ll be seeing the large SSDs replacing disks in these systems too.

The Next Step

Traditional workloads are served well by this type of two-tier storage infrastructure but it’s not always enough.

The concept of memory-class storage is surfacing more and more often in conversations with end users, and other CPU-driven techniques are taking the stage as well. Once again, the goal is getting results faster, before others do, if you want to improve your competitiveness.

Servers are Storage

Software-defined scale-out storage usually means commodity x86 servers; the same is true for HCI, and very low latency solutions are heading towards a similar approach. Proprietary hardware can’t compete: it’s too expensive and evolves too slowly compared to the rest of the infrastructure. Yes, niches where proprietary systems remain a good fit will persist for a long time, but this is not where the market is going.

Software is what makes the difference… everywhere now. Innovation and high performance at low cost is what end users want. Solutions like Permabit’s do exactly that, making it possible to do more with less, but also to do much more, and faster, with the same resources, particularly when data reduction is embedded in the storage or in the OS kernel!

Closing the circle

Storage requirements are continuing to diversify and “one-size-fits-all” no longer works (I’ve been saying that for a long time now). Fortunately, commodity x86 servers, flash memory and software are helping to build tailored solutions for everyone at reasonable costs, making high performance infrastructures accessible to a much wider audience.

Most modern solutions are built out of servers. Storage, as we traditionally know it, is becoming less of a discrete component and more blended with the rest of the distributed infrastructures with software acting as the glue and making things happen. Examples can be found everywhere – large object storage systems have started implementing “serverless” or analytics features for massive data sets, while CPU intensive and real-time applications can leverage CPU-data vicinity and internal parallelism through a storage layer which can be ephemeral at times… but screaming fast!

 

 

Read more


Why the operating system matters even more in 2017

| Homepage

Operating systems don’t quite date back to the beginning of computing, but they go back far enough. Mainframe customers wrote the first ones in the late 1950s, with operating systems that we’d more clearly recognize as such today—including OS/360 from IBM and Unix from Bell Labs—following over the next couple of decades.

An operating system performs a wide variety of useful functions in a system, but it’s helpful to think of those as falling into three general categories.

First, the operating system sits on top of a physical system and talks to the hardware. This insulates application software from many hardware implementation details. Among other benefits, this provides more freedom to innovate in hardware because it’s the operating system that shoulders most of the burden of supporting new processors and other aspects of the server design—not the application developer. Arguably, hardware innovation will become even more important as machine learning and other key software trends can no longer depend on CMOS process scaling for reliable year-over-year performance increases. With the increasingly widespread adoption of hybrid cloud architectures, the portability provided by this abstraction layer is only becoming more important.

Second, the operating system—specifically the kernel—performs common tasks that applications require. It manages process scheduling, power management, root access permissions, memory allocation, and all the other low-level housekeeping and operational details needed to keep a system running efficiently and securely.

Finally, the operating system serves as the interface to both its own “userland” programs—think system utilities such as logging, performance profiling, and so forth—and applications that a user has written. The operating system should provide a consistent interface for apps through APIs (application programming interfaces) based on open standards. Furthermore, commercially supported operating systems also bring with them business and technical relationships with third-party application providers, as well as content channels to add other trusted content to the platform.

The computing technology landscape has changed considerably over the past couple of years. This has had the effect of shifting how we think about operating systems and what they do, even as they remain as central as ever. Consider changes in how applications are packaged, the rapid growth of computing infrastructures, and the threat and vulnerability landscape.

Containerization

Applications running in Linux containers are isolated within a single copy of the operating system running on a physical server. This approach stands in contrast to hypervisor-based virtualization in which each application is bound to a complete copy of a guest operating system and communicates with the hardware through the intervening hypervisor. In short, hypervisors virtualize the hardware resources, whereas containers virtualize the operating system resources. As a result, containers consume few system resources, such as memory, and impose essentially no performance overhead on the application.

Scale

Another significant shift is that we increasingly think in terms of computing resources at the scale of the datacenter rather than the individual server. This transition has been going on since the early days of the web, of course. However, today we’re seeing the reimagining of high-performance computing “grid” technologies for both traditional batch workloads and newer services-oriented styles.

Dovetailing neatly with containers, applications based on loosely coupled “microservices” (running in containers)—with or without persistent storage—are becoming a popular cloud-native approach. This approach, although reminiscent of Service Oriented Architecture (SOA), has demonstrated a more practical and open way to build composite applications. Microservices, through a fine-grained, loosely coupled architecture, allow an application architecture to reflect the needs of a single well-defined application function. Rapid updates, scalability and fault tolerance can all be individually addressed in a composite application, whereas in traditional monolithic apps it’s much more difficult to keep changes to one component from having unintended effects elsewhere.

Security

All the security hardening, performance tuning, reliability engineering, and certifications that apply to the virtualized world still apply in the containerized one. And, in fact, the operating system shoulders a greater responsibility for providing security and resource isolation in a containerized and software-defined infrastructure world than in the case in which dedicated hardware or other software may be handling some of those tasks. Linux has been the beneficiary of a comprehensive toolbox of security-enforcing functionality built using the open source model, including SELinux for mandatory access controls, a wide range of userspace and kernel-hardening features, identity management and access control, and encryption.

Some things change, some don’t

Priorities associated with operating system development and operation have certainly shifted. The focus today is far more about automating deployments at scale than it is about customizing, tuning, and optimizing single servers. At the same time, there’s an increase in both the pace and pervasiveness of threats to a no longer clearly-defined security perimeter—requiring a systematic understanding of the risks and how to mitigate breaches quickly.

Add it all together and applications become much more adaptable, much more mobile, much more distributed, much more robust, and much more lightweight. Their placement, provisioning, and securing must become more automated. But they still need to run on something. Something solid. Something open. Something that’s capable of evolving for new requirements and new types of workloads. And that something is a (Linux) operating system.

Read more


Permabit Hits New Milestone in 2016 by Delivering the First Complete Data Reduction for Linux

| PR Newswire

Permabit Technology Corporation, the data reduction experts, brought complete storage efficiency to Linux in 2016.  The company’s Virtual Data Optimizer (VDO) software now delivers advanced deduplication, HIOPS Compression® and fine-grained thin provisioning directly to data centers as they struggle to address the storage density challenges driven by the growth of Big Data and widespread cloud adoption. VDO’s modular, operating system-centric approach addresses the widest possible range of use cases from OLTP to backup workloads, in single system or distributed environments.

Chronologically in 2016, Permabit announced:

  • Availability of VDO for Data Centers on Red Hat Enterprise Linux
  • A partnership with LINBIT delivering bandwidth optimized offsite replication
  • Support for VDO for Ubuntu Linux from Canonical
  • A partnership with AHA Products Group to support development of advanced data compression solutions for Linux
  • A partnership with Red Hat and qualification of VDO with Red Hat Ceph Storage and Red Hat Gluster Storage
  • A reseller agreement with Permabit Integration Partner CalSoft Pvt. Ltd.

“General purpose data reduction has vast potential to control data center costs in the Linux ecosystem,” noted Lynn Renshaw of Peerless Business Intelligence. “In 2016, a single square foot of high-density data center space costs up to $3,000 to build and consumes up to 11.4kW hours of power per year at an average cost of $1,357 per square foot in the USA. Roughly one-third of all data center space is consumed by storage. Left unchecked, IDC estimates worldwide data center space will total 1.94 billion square feet by 2020. Data reduction (such as Permabit VDO) can reduce data center storage footprints by as much as 75%. This technology alone could save $1.5 trillion in future data center build-out costs.”
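The $1.5 trillion figure can be sanity-checked from the numbers quoted above; the short Python calculation below simply reproduces that arithmetic using IDC’s 1.94 billion square feet, a one-third storage share, a 75% footprint reduction and the $3,000-per-square-foot build cost.

    # Back-of-the-envelope check of the build-out savings quoted above.
    total_sqft       = 1.94e9  # IDC estimate of worldwide data center space by 2020
    storage_fraction = 1 / 3   # share of floor space consumed by storage
    reduction        = 0.75    # storage footprint reduction attributed to data reduction
    cost_per_sqft    = 3000    # quoted high-density build-out cost, $/sq ft

    savings = total_sqft * storage_fraction * reduction * cost_per_sqft
    print(f"Estimated avoided build-out cost: ${savings / 1e12:.2f} trillion")
    # -> about $1.5 trillion, consistent with the figure quoted in the release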

Permabit generated great excitement in 2016 with VDO’s ability to lower the TCO of software-defined storage (SDS) solutions. After witnessing the success of public cloud open source storage, many private data center operations teams began implementing software-based storage in 2016. Following the public cloud providers, private data centers embraced the huge economic advantages of vendor neutrality, hardware independence and increased utilization that come from SDS, while still customizing for their own unique business requirements. VDO’s modular, operating system-centric approach deploys seamlessly, which is why major private data centers wrapped up successful evaluations of VDO within SDS solutions.

According to Permabit CEO and President Tom Cook, “Dramatic changes in the storage industry over the past year have resulted in Permabit expanding from our traditional OEM-only business model, to one positioned to address today’s software-defined data center requirements.  As we looked at worldwide deployments of software defined infrastructure, we realized that Linux is at the center of innovation in the space.  Because of this, as the only complete Linux data reduction solution, VDO is uniquely positioned to radically alter storage economics, drastically reducing TCO.  With our expanded business model, immediate benefits can be realized across today’s Linux-based software-defined environments.”

Read more


Hyper-convergence meets private cloud platform requirements

| searchcloudstorage.techtarget.com

Infrastructure choice and integration are fundamental to capitalizing on all that a private cloud environment has to offer your organization. Enterprises looking to benefit from the cloud are often reluctant to deploy business-critical apps and data in the public cloud due to concerns about availability, security and performance. Most IT managers consider a private cloud platform a more comfortable choice, given the superior visibility into and control over IT infrastructure and peace of mind that comes from housing critical assets on the inside.

Application owners are often skeptical about whether a private cloud platform will really provide the increases in business agility promised by vendors, however. In a similar vein, they’re also wary about whether, and over what timeframe, they’ll realize the ROI required to make deploying a fully functional and expensive private cloud platform worthwhile. Meanwhile, most companies aren’t willing or able to build their own private cloud infrastructure due to a lack of skilled resources and the perceived risk involved. So they turn to vendors. Unfortunately, until recently, most vendor offerings provided some but not all the pieces and capabilities required to deploy a fully functional private cloud platform.

For example, basic open source software stacks deliver a private cloud framework that generally includes virtualization, compute, storage and networking components, along with security (identity management and so on), management and orchestration functionality. These layers are loosely integrated at best, however, which means the heavy lifting of integrating and testing components to make them work together is left to the customer (or third-party consultant). Similarly, most vendor-specific products have taken a mix-and-match approach, enabling customers to choose from among different modules or capabilities — again, necessitating integration on the back end.

Consequently, enterprises that want to avoid the large investment of time and money required to build or integrate private cloud stacks are now looking to adopt preintegrated products based on infrastructure platforms designed to support cloud-enabled apps and data. And, as our recent research reveals, these organizations prefer converged and hyper-converged infrastructures (HCIs) to traditional three-tier architectures to host their private cloud environments.

 

Read more