Deploying VDO Data Reduction on Red Hat Atomic Host


All of the buzz about containers is a bit surprising to many people who’ve watched operating system technology evolve over the years. After all, many of the core concepts behind running isolated applications on a shared OS have been around on UNIX for over 20 years. So what’s so exciting? Well, to understand the container revolution you first have to look at Virtual Machines (VMs) and their impact on the…

Read more


Data Center Optimization: How to Do More Without More Money

| Data Center Knowledge

Data centers are pushing the boundaries of the possible, using new paradigms to operate efficiently in an environment that continually demands more power, more storage, more compute capacity… more everything. Operating efficiently and effectively in the land of “more” without more money requires increased data center optimization at all levels, including hardware and software, and even policies and procedures.

Although cloud computing, virtualization and hosted data centers are popular, most organizations still have at least part of their compute capacity in-house. According to a 451 Research survey of 1,200 IT professionals, 83 percent of North American enterprises maintain their own data centers. Only 17 percent have moved all IT operations to the cloud, and 49 percent use a hybrid model that integrates cloud or colocation hosts into their data center operations.

The same study says most data center budgets have remained stable, although the heavily regulated healthcare and finance sectors are increasing funding throughout data center operations. Among enterprises with growing budgets, most are investing in upgrades or retrofits to enable data center optimization and to support increased density.

As server density increases and the data center footprint shrinks, any gains may be taken up by the additional air handling and power equipment, including uninterruptible power supplies and power generators. In fact, data center energy usage is expected to increase by 81 percent by 2020, according to CIO magazine.

Often, identifying and decommissioning unused servers during a data center optimization project is a challenge, along with right-sizing provisioning.

Virtualization makes it easy to spin up resources as needed, but it also makes tracking those resources harder. The result is that unused servers may be running because no one is certain they’re not being used. A study by the Natural Resources Defense Council and Anthesis reports that up to 30 percent of servers are unused, but still running.

A similar principle extends to storage. While data deduplication (removing duplicate copies of data) is widely used, over-crowded storage remains an issue for small to medium-sized enterprises (SMEs). Deduplication can free much-needed storage space. For example, data deduplication combined with compression can shrink data storage consumption by up to 85%. This not only addresses the budget issues mentioned above but also helps with data density, much like the server density mentioned earlier. Imagine saving money on storage while increasing your data density at the same time. Looks like a win-win!
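To put that 85% figure in concrete terms, here is a minimal back-of-the-envelope sketch in Python; the 100 TB starting point is purely an illustrative assumption, not a number from the article:

```python
# Back-of-the-envelope: what an 85% data reduction means for provisioned capacity.
logical_data_tb = 100          # illustrative workload size (assumption)
reduction = 0.85               # "up to 85%" combined dedup + compression savings

physical_needed_tb = logical_data_tb * (1 - reduction)
reduction_ratio = logical_data_tb / physical_needed_tb

print(f"Physical capacity required: {physical_needed_tb:.1f} TB")  # 15.0 TB
print(f"Effective reduction ratio:  {reduction_ratio:.1f}:1")       # ~6.7:1
```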

If data center optimization is concerned with saving money, managers also should examine their purchasing programs. NaviSite, for example, looked for cost efficiencies in volume projects, focused on large commodity items like cabinets, racks, cabling and plug strips, and eliminated middlemen whenever possible. For big purchases it went directly to manufacturers in China and sought out innovative young technology vendors, working with them to design specifications that significantly lower the price.

Data center optimization, clearly, extends beyond hardware to become a system-wide activity. It is the key to providing more power, more capacity and more storage without requiring more money.

* This article is quite long; you may want to read the full source article, which can be found by clicking on the link below:

 

Read more


Enterprise Storage Extensive Growth Opportunities by 2026

| openPR.com

With the increased focus on virtualization and cost of operations; simplicity and convergence; and the cloud, enterprises are moving from traditional enterprise storage systems to software-defined storage and cloud storage to provide cost-effective, real-time storage services. As a result, the traditional enterprise storage systems market has declined over the past few years.

Most enterprises are implementing cloud-based storage systems because of their lower cost and greater agility, and many companies follow a hybrid cloud strategy in which traditional and cloud storage are used together. This approach fuels demand both for traditional enterprise storage systems and for cloud storage systems where critical workloads can be managed securely.

Enterprises are seeking more efficient storage systems, as the increasing focus on digitization creates huge amounts of data and fuels demand for innovative storage solutions. It has been observed that smaller enterprises drive the cloud storage market, while large enterprises drive the hybrid storage approach.

A significant tool for containing storage costs in any cloud or hybrid cloud is data reduction technology, which can be deployed easily in any cloud environment. Permabit VDO delivers data reduction of up to 85% in public, private or hybrid clouds.

The rise in the volume of structured and unstructured data, along with the need to back up and archive files at reduced cost, also propels market growth for enterprise storage systems.

By offering better prices, reducing infrastructure and management costs, and providing enhanced security features, the enterprise storage systems market is positioned for growth in the future. The market is segmented by type of storage and by region.

Read more


Global Cloud Storage to Grow at a CAGR of 25% by 2023

| biotech.einnews.com

In this rapidly changing world of technology, the cloud storage market is gaining immense popularity owing to its ability to integrate easily with an enterprise’s existing infrastructure. Cloud storage gateway solutions provide features like encryption and data reduction technology; compression and data deduplication add cost reduction and security to the data. They also allow rapid transfer of data to the cloud, since the data is reduced and network traffic is minimized.

Compared to other regions, the cloud storage market in North America is expected to see significantly healthy growth and to account for the highest market share throughout the forecast period. The U.S. and Canada are anticipated to drive that growth owing to the large number of established cloud storage players in the region. In addition, the region has well-established infrastructure and higher internet penetration. Moreover, increasing adoption of cloud storage by small and medium enterprises is expected to be a major factor in the growth of the cloud storage market.

The cloud storage market is growing rapidly at a CAGR of over 25% and is expected to reach approximately USD 104 billion by the end of the forecast period.
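As a quick sanity check on those figures, the sketch below works out the compounding they imply; the six-year forecast window and a flat 25% rate are assumptions made only for illustration:

```python
# Sanity-check the compounding implied by "25% CAGR ... ~USD 104 billion by 2023".
target_2023_usd_bn = 104
cagr = 0.25
years = 6                      # assumed forecast window (assumption)

implied_base = target_2023_usd_bn / (1 + cagr) ** years
print(f"Implied base-year market size: ~USD {implied_base:.0f} billion")  # ~27

# Growing that base forward at 25% per year reproduces the forecast figure.
size = implied_base
for _ in range(years):
    size *= 1 + cagr
print(f"Projected 2023 market size:    ~USD {size:.0f} billion")          # ~104
```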

Read more


Application consistency for enterprise multi-cloud and data reduction

| Networking information, news and tips

 

The cloud era has arrived in a big way as businesses of all sizes are looking to increase their level of IT agility. But when it comes to cloud, one size certainly does not fit all. Businesses have a wide range of options, including the use of a private cloud and a wide range of public cloud providers. My research finds that 82% of businesses will operate a hybrid cloud environment in the next five years.

This is consistent with the findings of F5’s State of Application Delivery in 2017 report, which the company released the results of last week at its annual EMEA Agility Conference in Barcelona. The study polled approximately 2,220 customers across the globe about their plans for the cloud and the challenges they face. Some interesting statistics from the survey that will affect a multi-cloud strategy are as follows:

  • 80% are committed to multi-cloud architectures
  • 20% will have more than half of their applications running in a public and/or private cloud this year
  • 34% of organizations lack the skills necessary to secure the cloud
  • 23% lack other skills specific to cloud
  • Organizations will deploy an average of 14 application services necessary to optimize and secure cloud services, with the top five being network firewall, antivirus, SSL VPN, load balancing and spam mitigation

In addition to the above challenges, the cost of multi-cloud deployment when using it for data replication can become excessive as can the network costs. The addition of data reduction technology to the IT stack can mitigate these costs by as much as 85%. See Data Reduction Reduces the Cost of Cloud Deployment for more specifics.
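As a rough illustration of how an up-to-85% reduction flows through to multi-cloud replication bills, here is a hedged cost model in Python; the data volume and the per-TB storage and egress prices are hypothetical placeholders, not quotes from any provider or from the linked article:

```python
# Rough model of how an 85% data reduction changes multi-cloud replication costs.
# Every number here is a hypothetical placeholder, not a quote from any provider.
replicated_tb = 500                  # logical data replicated to a second cloud
storage_usd_per_tb_month = 23.0      # placeholder object-storage price
egress_usd_per_tb = 90.0             # placeholder cross-cloud transfer price
reduction = 0.85                     # "as much as 85%" from data reduction

def monthly_cost(tb: float) -> float:
    """Storage charge plus the transfer charge for re-replicating that data."""
    return tb * storage_usd_per_tb_month + tb * egress_usd_per_tb

before = monthly_cost(replicated_tb)
after = monthly_cost(replicated_tb * (1 - reduction))
print(f"Without reduction:  ${before:,.0f} per month")
print(f"With 85% reduction: ${after:,.0f} per month ({before / after:.1f}x lower)")
```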

Read more


Federal Agencies Optimize Data Centers by Focusing on Storage using Data Reduction

| fedtechmagazine.com

In data centers, like any piece of real estate, every square foot matters.

“Any way we can consolidate, save space and save electricity, it’s a plus,” says the State Department’s Mark Benjapathmongkol, a division chief of the agency’s Enterprise Server Operation Centers.

In searching out those advantages, the State Department has begun investing in solid-state drives (SSDs), which provide improved performance while occupying substantially less space in data centers.

In one case, IT leaders replaced a disk storage system with SSDs and gained almost three racks worth of space, Benjapathmongkol says. Because SSDs are smaller and denser than hard disk drives (HDDs), IT staff don’t need to deploy extra hardware to meet speed requirements, resulting in massive space and energy savings.

Options for Simplifying Storage Management

Agencies can choose from multiple technology options to more effectively and efficiently manage their storage, says Greg Schulz, founder of independent analyst firm Server StorageIO. These options include: SSDs and cloud storage; storage features such as deduplication and compression, which eliminate redundancies and store data using less storage; and thin provisioning, which better utilizes available space, Schulz says.
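For readers unfamiliar with how deduplication and compression “eliminate redundancies and store data using less storage,” here is a minimal conceptual sketch of block-level data reduction, assuming fixed-size 4 KB blocks keyed by a SHA-256 content hash; production inline engines are far more sophisticated, so treat this only as an illustration of the idea:

```python
import hashlib
import os
import zlib

BLOCK_SIZE = 4096  # fixed-size blocks, as used by many block-level dedup engines

def store(data: bytes):
    """Deduplicate fixed-size blocks by content hash, then compress unique blocks."""
    unique = {}                                    # content hash -> compressed block
    recipe = []                                    # ordered hashes to rebuild the data
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        if digest not in unique:                   # store only the first copy of a block
            unique[digest] = zlib.compress(block)
        recipe.append(digest)
    stored_bytes = sum(len(b) for b in unique.values())
    return recipe, unique, stored_bytes

# Example: 100 identical 4 KB "golden image" blocks plus one incompressible block.
data = b"golden image block".ljust(BLOCK_SIZE, b"\0") * 100 + os.urandom(BLOCK_SIZE)
recipe, unique, stored_bytes = store(data)
print(f"Logical size: {len(data)} bytes")
print(f"Stored size:  {stored_bytes} bytes across {len(unique)} unique blocks")
```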

Consider the Defense Information Systems Agency. During the past year, the combat support agency has modernized its storage environment by investing in SSDs. Across DISA’s nine data centers, about 80 percent of information is stored on SSD arrays and 20 percent is running on HDDs, says Ryan Ashley, DISA’s chief of storage.

SSDs have allowed the agency to replace every four 42U racks with a single 42U rack, resulting in 75 percent savings in floor space as well as reduced power and cooling costs, he says.

Deduplication Creates Efficiencies

Besides space savings and the fact that SSDs are faster than HDDs, SSDs bring additional storage efficiencies. This includes new management software that automates tasks, such as the provisioning of storage when new servers and applications are installed, Ashley says.

The management software also allows DISA to centrally manage storage across every data center. In the past, the agency used between four and eight instances of management software in individual data centers.

“It streamlines and simplifies management,” Ashley says. Automatic provisioning reduces human error and ensures the agency follows best practices, while central management eliminates the need for the storage team to switch from tool to tool, he says.

DISA also has deployed deduplication techniques to eliminate storing redundant copies of data. IT leaders recently upgraded the agency’s backup technology from a tape system to a disk-based virtual tape library. This type of approach can accelerate backup and recovery and reduce the amount of hardware needed for storage.

It also can lead to significant savings because DISA keeps backups for several weeks, meaning it often owns multiple copies of the same data. But thanks to deduplication efforts, the agency can store more than 140 petabytes of backup data with 14PB of hardware.

“It was a huge amount of floor space that we opened up by removing thousands of tapes,” says Jonathan Kuharske, DISA’s deputy of computing ecosystem.
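A quick bit of arithmetic, based only on the figures quoted above, shows what that level of deduplication implies:

```python
# The DISA figures above imply roughly a 10:1 backup data reduction ratio.
logical_pb = 140     # petabytes of backup data retained
physical_pb = 14     # petabytes of hardware actually needed

ratio = logical_pb / physical_pb
savings = 1 - physical_pb / logical_pb
print(f"Reduction ratio: {ratio:.0f}:1, i.e. {savings:.0%} less backup hardware")
```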

Categorize Data to Go Cloud First

To comply with the government’s “Cloud First” edict, USAID began migrating to cloud services, including infrastructure and software services, about seven years ago.

Previously, USAID managed its own data centers and tiered its storage. But the agency moved its data to cloud storage three years ago, Gowen says, allowing USAID to provide reliable, cost-effective IT services to its 12,000 employees across the world. The agency, which declined to offer specific return on investment data, currently uses a dozen cloud providers.

“We carefully categorize our data and find service providers that can meet those categories,” says Gowen, noting categories include availability and security. “They just take care of things at an affordable cost.”

For its public-facing websites, the agency uses a cloud provider that has a content distribution network and can scale to handle sudden spikes in traffic.

In late 2013, a typhoon lashed the Philippines, killing at least 10,000 people. In the days following the disaster, President Obama announced that USAID had sent supplies, including food and emergency shelter. Because the president mentioned USAID, about 40 million people visited the agency’s website. If USAID had hosted its own site, it would have crashed. But the cloud service provider handled the traffic, Gowen says.

“Our service provider can scale instantaneously to 40 million users, and when visitors drop off, we scale back,” he says. “It’s all handled.”

 

Such transitions are becoming commonplace. Improving storage management is a pillar of the government’s effort to optimize data centers. To meet requirements from the Federal Information Technology Acquisition Reform Act (FITARA), the Data Center Optimization Initiative requires agencies to transition to cost-effective infrastructure.

While agencies are following different paths, the result is nearly identical: simpler and more efficient storage management, consolidation, increased reliability, improved service and cost savings. The U.S. Agency for International Development, for example, has committed to cloud storage.

“Our customers have different needs. The cloud allows us to focus on categorizing our data based on those needs like fast response times, reliability, availability and security,” says Lon Gowen, USAID’s chief strategist and special advisor to the CIO. “We find the service providers that meet those category requirements, and then we let the service providers focus on the details of the technology.”

To read the complete article, click on the link below:

 

Read more


Cloud Economics Drive the IT Infrastructure of Tomorrow

| CloudPost

The cloud continues to dominate IT as businesses make their infrastructure decisions based on cost and agility. Public cloud, where shared infrastructure is paid for and utilized only when needed, is the most popular model today. However, more and more organizations are addressing security concerns by creating their own private clouds. As businesses deploy private cloud infrastructure, they are adopting techniques used in the public cloud to control costs. Gone are the traditional arrays and network switches of the past, replaced with software-defined data centers running on industry standard servers.

Features which improve efficiency make the cloud model more effective by reducing costs and increasing data transfer speeds. One such feature which is particularly effective in cloud environments is inline data reduction. This is a technology that can be used to lower the costs of data in transit and at rest. In fact, data reduction delivers unique benefits to each model of cloud deployment.

For the entire article, please click on the link below:

Read more


Using Data Reduction at the OS layer in Enterprise Linux Environments

| Stock Market

Enterprises and cloud service providers that have built their infrastructure around Linux should deploy data reduction in the operating system to drive costs down, say experts at Permabit Technology Corporation, the company behind Permabit Virtual Data Optimizer (VDO). Permabit VDO is the only complete data reduction software for Linux, the world’s most popular server operating system (OS). Permabit’s VDO software fills a gap in the Linux feature set by providing a cost-effective alternative to the data reduction services delivered as part of the two other major OS platforms – Microsoft Windows and VMware. IT architects are driven to cut costs as they build out their next-generation infrastructure with one or more of these OS platforms in public and/or private cloud deployments, and one obvious way to do so is with data reduction.

When employed as a component of the OS, data reduction can be applied universally, without the lock-in of proprietary solutions. By adding compression, deduplication, and thin provisioning to the core OS, data reduction benefits can be leveraged by any application or infrastructure service running on that OS. This ensures that savings accrue across the entire IT infrastructure, delivering TCO advantages no matter where the data resides. This is the future of data reduction – as a ubiquitous service of the OS.

“We’re seeing movement away from proprietary storage solutions, where data reduction was a key differentiated feature, toward OS-based capabilities that are applied across an entire infrastructure,” said Tom Cook, Permabit CEO.  “Early adopters are reaping financial rewards through reduced cost of equipment, space, power and cooling. Today we are also seeing adoption of data reduction in the OS by more conservative IT organizations who are driven to take on more initiatives with tightly constrained IT budgets.”

VDO, with inline data deduplication, HIOPS Compression®, and fine-grained thin provisioning, is deployed as a device-mapper driver for Linux. This approach ensures compatibility with a full complement of direct-attached/ephemeral, block, file and object interfaces. VDO data reduction is available for Red Hat Enterprise Linux and Canonical Ubuntu Linux LTS distributions.
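The thin-provisioning side of that feature set is easiest to see with a small sizing sketch; the 10 TB physical volume and the conservative 3:1 planning ratio below are illustrative assumptions, not Permabit or Red Hat guidance:

```python
# Sizing sketch for a thinly provisioned, data-reducing block device (VDO-style).
# A conservative expected reduction ratio picks the logical size advertised to the
# filesystems and applications above the device. All figures are illustrative.
physical_tb = 10           # raw capacity backing the volume (assumption)
planning_ratio = 3.0       # conservative expected reduction for mixed data (assumption)

logical_tb = physical_tb * planning_ratio   # size exposed to the layers above
print(f"Expose a {logical_tb:.0f} TB logical volume backed by {physical_tb} TB of disk")

# Because the volume is thin provisioned, physical blocks are consumed only as
# reduced data is actually written; monitoring free physical space is what keeps
# an optimistic planning ratio from turning into an out-of-space condition.
```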

Advantages of in-OS data reduction technology include:

  • Improved density for public/private/hybrid cloud storage, resulting in lower storage and service costs
  • Vendor independence, functioning across any hardware running the target OS
  • Seamless data mobility between on-premise and cloud resources
  • Up to six times lower IT infrastructure OpEx
  • Transparent to end users accessing data
  • Requires no modifications to existing applications, file systems, virtualization features, or data protection capabilities

With VDO, these advantages are being realized on Linux today. VDO deployments have been completed (or are currently in progress) with large telecommunications companies, government agencies, financial services firms and IaaS providers who have standardized on Linux for their data centers. With data reduction in Linux, enterprises achieve vendor independence across all Linux based storage, increased mobility of reduced data and hyper scale economics. What an unbeatable combination!

Read more


Busting the handcuffs of traditional data storage

| SiliconANGLE

Premise

The largest and most successful Web companies in the world have proven a new model for managing and scaling a combined architecture of compute and storage. If you’ve heard it once, you’ve heard it a hundred times: “The hyperscale guys don’t use traditional disk arrays.”

Giants such as Facebook Inc. and Google Inc. use a design of local distributed storage to solve massive data problems. The key differentiation of this new architecture is extreme scalability and simplicity of management, enabled by automation. Over the years, Wikibon has referred to this approach as “Software-led Infrastructure,” which is analogous to so-called Software-Defined Storage.

Excluding the most mission-critical online transaction processing markets served by the likes of Oracle Corp. and IBM Corp.’s DB2, it’s becoming clear this software-led approach is poised to penetrate mainstream enterprises because it is more cost-effective and agile than traditional infrastructure. Up until recently, however, such systems have lacked the inherent capabilities needed to service core enterprise apps.

This dynamic is changing rapidly. In particular, Microsoft Corp. with Azure Stack and VMware Inc. with its vSAN architecture are demonstrating momentum with tightly integrated and automated storage services. Linux, with its open source ecosystem, is the remaining contender to challenge VMware and Microsoft for mainstream adoption of on-premises and hybrid information technology infrastructure, including data storage.

Upending the ‘iron triangle’ of arrays

Peter Burris, Wikibon’s head of research, recently conducted research that found IT organizations suffer from an infrastructure “iron triangle” that is constraining IT progress. According to Burris, the triangle comprises entrenched IT administrative functions, legacy vendors and technology-led process automation.

In his research, Burris identified three factors IT organizations must consider to break the triangle:

  • Move from a technology to a service administration model;
  • Adopt True Private Cloud to enhance real automation and protect intellectual property that doesn’t belong in the cloud; and
  • Elevate vendors that don’t force false “platform” decisions, meaning technology vendors have a long history of  “adding value” by renaming and repositioning legacy products under vogue technology marketing umbrellas.

The storage industry suffers from entrenched behaviors as much as any other market segment. Traditional array vendors are trying to leverage the iron triangle to slow the decline of legacy businesses while at the same time ramping up investments in newer technologies, both organically and through acquisition. The Linux ecosystem – the lone force that slowed down Microsoft in the 1990s – continues to challenge these entrenched IT norms and is positioned for continued growth in the enterprise.

But there are headwinds.

In a recent research note published on Wikibon (login required), analyst David Floyer argued there are two main factors contributing to the inertia of traditional storage arrays:

  • The lack of equivalent functionality for storage services in this new software-led world; and
  • The cost of migration of existing enterprise storage arrays – aka the iron triangle.

Linux, Floyer argues, is now ready to grab its fair share of mainstream, on-premises enterprise adoption directly as a result of newer, integrated functionality that is hitting the market. As these software-led models emerge in an attempt to replicate cloud, they inevitably will disrupt traditional approaches just as the public cloud has challenged the dominant networked storage models such as Storage Area Network and Network-Attached Storage that have led the industry for two decades.

Linux is becoming increasingly competitive in this race because it is allowing practitioners to follow the game plan Burris laid out in his research, namely:

1) Building momentum on a services model – (i.e. delivering robust enterprise storage management services that are integrated into the OS);

2) Enabling these services to be invoked by an orchestration/automation framework (e.g., OpenStack, OpenShift) or directly by an application leveraging microservices (i.e., True Private Cloud); and

3) The vendors delivering these capabilities have adopted an open ecosystem approach (i.e. they’re not forcing false platform decisions, rather they’re innovating and integrating into an existing open platform). A scan of the OpenStack Web site gives a glimpse of some of the customers attempting to leverage this approach.

Floyer’s research explores some of the key services required by Linux to challenge for market leadership, with a deeper look at the importance of data reduction as a driver of efficiency and cost reduction for IT organizations.

Types of services

In his research, Floyer cited six classes of storage service that enterprise buyers have expected, which have traditionally been available only within standalone arrays. He posited that these services are changing rapidly, some with the introduction of replacement technologies and others that will increasingly be integrated into the Linux operating system, which will speed adoption. A summary of Floyer’s list of storage services follows:

  • Cache management to overcome slow hard disk drives which are being replaced by flash (with data reduction techniques) to improve performance and facilitate better data sharing
  • Snapshot Management for improved recovery
  • Storage-level Replication is changing due to the effects of flash and high speed interconnects such as 40Gb or 100Gb links. Floyer cited WANdisco’s Paxos technology and the Simplivity (acquired by HPE) advanced file system as technologies supporting this transformation.
  • Encryption, which has traditionally been confined to disk drives, is overhead-intensive and leaves data in motion exposed. Encryption has been a fundamental capability within the Linux stack for years, and ideally all data would be encrypted; however, encryption overheads have historically been too cumbersome. With the advent of graphics processing units and field-programmable gate arrays from firms such as Nvidia Corp., encryption overheads are minimized, enabling end-to-end encryption with the application and database as the focal point for both encryption and decryption, not the disk drive.
  • Quality of Service, which is available in virtually all Linux arrays but typically only sets a floor under which performance may not dip. Traditional approaches for QoS lack granularity to set ceilings (for example) and allow bursting programmatically through a complete and well-defined REST API (to better service the needs of individual applications – versus a one-size-fits all approach). NetApp Inc.’s Solidfire has, from its early days, differentiated in this manner and is a good example of a true software-defined approach that allows provisioning both capacity and performance dynamically through software. Capabilities like this are important to automate the provisioning and management of storage services at scale, a key criterion to replicate public cloud on-prem.
  • Data Reduction – Floyer points out in his research that there are four areas of data reduction that practitioners should understand, including zero suppression, thin provisioning, compression and data de-duplication. Data sharing is a fifth and more nuanced capability that will become important in the future. According to Floyer:

To date… “The most significant shortfall in the Linux stack has been the lack of an integrated data reduction capability, including zero suppression, thin provisioning, de-duplication and compression.”

According to Floyer, “This void has been filled by the recent support of Permabit’s VDO data reduction stack (which includes all the data reduction components) by Red Hat.”
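Two of the components Floyer lists, zero suppression and thin provisioning, are easy to picture with a toy sketch; the block map below is a drastic simplification of what a driver such as VDO maintains, and is offered only to show the idea:

```python
# Toy sketch of zero suppression plus thin provisioning: all-zero blocks are noted
# in the block map but never consume physical space, and physical blocks are
# allocated only when non-zero data is actually written.
BLOCK = 4096
ZERO = b"\0" * BLOCK

class ThinVolume:
    def __init__(self):
        self.block_map = {}     # logical block number -> physical slot or "ZERO"
        self.physical = []      # physical blocks actually allocated

    def write(self, lbn: int, data: bytes) -> None:
        if data == ZERO:
            self.block_map[lbn] = "ZERO"        # zero suppression: nothing stored
        else:
            self.block_map[lbn] = len(self.physical)
            self.physical.append(data)          # allocate on first real write

    def read(self, lbn: int) -> bytes:
        slot = self.block_map.get(lbn, "ZERO")  # unwritten blocks read back as zeros
        return ZERO if slot == "ZERO" else self.physical[slot]

vol = ThinVolume()
vol.write(0, b"application data".ljust(BLOCK, b"\0"))
vol.write(1, ZERO)                               # costs a map entry, not a block
print(f"Logical blocks mapped: {len(vol.block_map)}, physical blocks used: {len(vol.physical)}")
```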

VDO stands for Virtual Data Optimizer. In a recent conversation with Wikibon, Permabit Chief Executive Tom Cook explained that as a Red Hat Technology partner, Permabit obtains early access to Red Hat software, which allows VDO testing and deep integration into the operating system, underscoring Floyer’s argument.

Why is this relevant? The answer is cost.

The cost challenge

Data reduction is a wonky topic to chief information officers, but the reason it’s so important is that despite the falling cost per bit, storage remains a huge expense for buyers, often accounting for between 15 and 50 percent of IT infrastructure capital expenditures. As organizations build open hybrid cloud architectures and attempt to compete with public cloud offerings, Linux storage must not only be functionally robust, it must keep getting dramatically cheaper.

The storage growth curve, which for decades has marched to the cadence of Moore’s Law, is re-shaping and growing at exponential rates. IoT, M2M communications and 5G will only serve to accelerate this trend.

Data reduction services have been a huge tailwind for more expensive flash devices and are fundamental to reducing costs going forward. Traditionally, the common way Linux customers have achieved efficiencies is to acquire data reduction services (e.g., compression and de-dupe) through an array – which may help lower the cost of the array, but it perpetuates the iron triangle. And longer-term, it hurts the overall cost model.

As underscored in Floyer’s research, the modern approach is to access sets of services that are integrated into the OS and delivered via Linux within an orchestration/automation framework that can manage the workflow. Some cloud service providers (outside of the hyperscale crowd) are sophisticated and have leveraged open-source services to achieve hyperscale-like benefits. Increasingly, these capabilities are coming to established enterprises via the Linux ecosystem and are achieving tighter integration as discussed earlier.

More work to be done

Wikibon community data center practitioners typically cite three primary areas that observers should watch as indicators of Linux maturity generally and software-defined storage specifically:

1. The importance of orchestration and automation

To truly leverage these services, a management framework is necessary to understand what services have been invoked, to ensure recovery is in place (if needed) and give confidence that software-defined storage and associated services can deliver consistently in a production environment.

Take encryption as an example, along with data reduction. You must reduce the data before you encrypt it, because encryption eliminates the very patterns that, for example, data de-duplication is trying to find. This example illustrates the benefits of integrated services. Specifically, if something goes wrong during the process, the system must have deep knowledge of exactly what happened and how to recover. The ideal solution in this example is to have encryption, de-dupe and compression integrated as a set of services embedded in the OS and invoked programmatically by the application where needed and where appropriate.
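A toy demonstration makes that ordering concrete; the XOR-with-random-keystream function below is only a stand-in for real encryption, used to show what any strong cipher does to duplicate detection and compressibility:

```python
import os
import zlib

BLOCK = 4096
plaintext = b"customer record 0042\n".ljust(BLOCK, b"\0") * 50   # 50 identical blocks

def toy_encrypt(block: bytes) -> bytes:
    # Stand-in for a real cipher: XOR each block with a fresh random keystream.
    keystream = os.urandom(len(block))
    return bytes(a ^ b for a, b in zip(block, keystream))

def unique_blocks(data: bytes) -> int:
    return len({data[i:i + BLOCK] for i in range(0, len(data), BLOCK)})

ciphertext = b"".join(toy_encrypt(plaintext[i:i + BLOCK])
                      for i in range(0, len(plaintext), BLOCK))

print("Duplicate blocks found before encryption:", 50 - unique_blocks(plaintext))   # 49
print("Duplicate blocks found after encryption: ", 50 - unique_blocks(ciphertext))  # 0
print("Compressed size before encryption:", len(zlib.compress(plaintext)))          # tiny
print("Compressed size after encryption: ", len(zlib.compress(ciphertext)))         # ~raw size
```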

2. Application performance

Wikibon believes that replicating hyperscaler-like models on-prem will increasingly require integrating data management features into the OS. Technologists in the Wikibon community indicate that really high-performance workloads will move to a software-led environment leveraging emerging non-volatile memory technologies such as NVMe and NVMf. Many believe the highest-performance workloads will go into these emerging systems and, over time, eliminate what some call the “horrible storage stack” – meaning the overly cumbersome storage protocols that have been forged into the iron triangle for years. This will take time, but the business value effects could be overwhelming, with game-changing performance and low latencies as disruptive to storage as high-frequency trading has been to Wall Street – ideally without the downside.

3. Organizational issues

As Global 2000 organizations adopt this new software-led approach, there are non-technology-related issues that must be overcome. “People, process and technology” is a bit of a bromide, but we hear it all the time: “Technology is the easy part…. People and process are the difficult ones.” The storage iron triangle will not be easily disassembled. The question remains: Will the economics of open source and business model integrations such as those discussed here overwhelm entrenched processes and the people who own them?

On the surface, open source services are the most likely candidates to replicate hyperscale environments because of the collective pace of innovation and economic advantages. However, to date, a company such as VMware has demonstrated that it can deliver more robust enterprise services faster than the open-source alternatives — but not at hyper-scale.

History is on the side of open source. If the ecosystem can deliver on its cost, scalability and functionality promises, it’s a good bet that the tech gap will close rapidly and economic momentum will follow. Process change and people skills will likely be more challenging.

(Disclosure: Wikibon is a division of SiliconANGLE Media Inc., the publisher of Siliconangle.com. Many of the companies referenced in this post are clients of Wikibon. Please read my Ethics Statement.)

 

Read more