
Busting the handcuffs of traditional data storage

| SiliconANGLE

Premise

The largest and most successful Web companies in the world have proven a new model for managing and scaling a combined architecture of compute and storage. If you’ve heard it once, you’ve heard it a hundred times: “The hyperscale guys don’t use traditional disk arrays.”

Giants such as Facebook Inc. and Google Inc. use a design of local distributed storage to solve massive data problems. The key differentiation of this new architecture is extreme scalability and simplicity of management, enabled by automation. Over the years, Wikibon has referred to this approach as “Software-led Infrastructure,” which is analogous to so-called Software-Defined Storage.

Excluding the most mission-critical online transaction processing markets served by the likes of Oracle Corp. and IBM Corp.’s DB2, it’s becoming clear this software-led approach is poised to penetrate mainstream enterprises because it is more cost-effective and agile than traditional infrastructure. Up until recently, however, such systems have lacked the inherent capabilities needed to service core enterprise apps.

This dynamic is changing rapidly. In particular, Microsoft Corp. with Azure Stack and VMware Inc. with its vSAN architecture are demonstrating momentum with tightly integrated and automated storage services. Linux, with its open source ecosystem, is the remaining contender to challenge VMware and Microsoft for mainstream adoption of on-premises and hybrid information technology infrastructure, including data storage.

Upending the ‘iron triangle’ of arrays

Peter Burris, Wikibon’s head of research, recently conducted research that found IT organizations suffer from an infrastructure “iron triangle” that is constraining IT progress. According to Burris, the triangle comprises entrenched IT administrative functions, legacy vendors and technology-led process automation.

In his research, Burris identified three factors IT organizations must consider to break the triangle:

  • Move from a technology to a service administration model;
  • Adopt True Private Cloud to enhance real automation and protect intellectual property that doesn’t belong in the cloud; and
  • Elevate vendors that don’t force false “platform” decisions; technology vendors have a long history of “adding value” by renaming and repositioning legacy products under vogue technology marketing umbrellas.

The storage industry suffers from entrenched behaviors as much as any other market segment. Traditional array vendors are trying to leverage the iron triangle to slow the decline of legacy businesses while at the same time ramping up investments in newer technologies, both organically and through acquisition. The Linux ecosystem – the lone force that slowed Microsoft in the 1990s – continues to challenge these entrenched IT norms and is positioned for continued growth in the enterprise.

But there are headwinds.

In a recent research note published on Wikibon (login required), analyst David Floyer argued there are two main factors contributing to the inertia of traditional storage arrays:

  • The lack of equivalent functionality for storage services in this new software-led world; and
  • The cost of migration of existing enterprise storage arrays – aka the iron triangle.

Linux, Floyer argues, is now ready to grab its fair share of mainstream, on-premises enterprise adoption as a direct result of newer, integrated functionality hitting the market. As these software-led models emerge to replicate the cloud on-premises, they will inevitably disrupt traditional approaches, just as the public cloud has challenged Storage Area Network (SAN) and Network-Attached Storage (NAS), the dominant networked storage models of the past two decades.

Linux is becoming increasingly competitive in this race because it allows practitioners to follow the game plan Burris laid out in his research, namely:

1) Building momentum on a services model (i.e., delivering robust enterprise storage management services that are integrated into the OS);

2) Enabling these services to be invoked by an orchestration/automation framework (e.g., OpenStack, OpenShift) or directly by an application leveraging microservices (i.e., True Private Cloud); and

3) Adopting an open ecosystem approach (i.e., the vendors delivering these capabilities aren’t forcing false platform decisions; rather, they’re innovating and integrating into an existing open platform). A scan of the OpenStack website gives a glimpse of some of the customers attempting to leverage this approach.
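
To make the second point concrete, here is a minimal sketch of what invoking a storage service programmatically can look like. It assumes Python with the requests library; the endpoint, token and payload fields are hypothetical placeholders, not any particular vendor’s or OpenStack’s actual API.

    # Minimal sketch: provisioning a volume, with capacity and QoS, through a
    # storage service's REST API. Endpoint, token and payload fields are
    # hypothetical placeholders, not any particular vendor's actual API.
    import requests

    API = "https://storage.example.com/v1"   # hypothetical service endpoint
    HEADERS = {"X-Auth-Token": "changeme"}   # hypothetical auth token

    def provision_volume(name: str, size_gb: int, min_iops: int, max_iops: int) -> str:
        """Request a volume with both a capacity and a QoS floor/ceiling."""
        payload = {
            "name": name,
            "size_gb": size_gb,
            "qos": {"min_iops": min_iops, "max_iops": max_iops},
        }
        resp = requests.post(f"{API}/volumes", json=payload,
                             headers=HEADERS, timeout=30)
        resp.raise_for_status()
        return resp.json()["volume_id"]

    if __name__ == "__main__":
        vol = provision_volume("app-db-01", size_gb=500,
                               min_iops=2000, max_iops=10000)
        print("provisioned volume", vol)

An orchestration or automation framework would issue calls like this as one step in a larger workflow, rather than a human clicking through an array management console.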

Floyer’s research explores some of the key services required by Linux to challenge for market leadership, with a deeper look at the importance of data reduction as a driver of efficiency and cost reduction for IT organizations.

Types of services

In his research, Floyer cited six classes of storage service that enterprise buyers have come to expect, which have traditionally been available only within standalone arrays. He posited that these services are changing rapidly, some with the introduction of replacement technologies and others that will increasingly be integrated into the Linux operating system, which will speed adoption. A summary of Floyer’s list of storage services follows:

  • Cache management, historically used to overcome slow hard disk drives, which are being replaced by flash (combined with data reduction techniques) to improve performance and facilitate better data sharing
  • Snapshot Management for improved recovery
  • Storage-level Replication is changing due to the effects of flash and high-speed interconnects such as 40Gb and 100Gb links. Floyer cited WANdisco’s Paxos technology and the SimpliVity (acquired by Hewlett Packard Enterprise) advanced file system as technologies supporting this transformation.
  • Encryption has traditionally been confined to disk drives, an approach that is overhead-intensive and leaves data in motion exposed. Encryption has been a fundamental capability within the Linux stack for years, and ideally all data would be encrypted; historically, however, the overhead has been too cumbersome. With the advent of graphics processing units and field-programmable gate arrays from firms such as Nvidia Corp., encryption overhead is minimized, enabling end-to-end encryption with the application and database, not the disk drive, as the focal point for both encryption and decryption.
  • Quality of Service, which is available in virtually all Linux arrays but typically only sets a floor under which performance may not dip. Traditional approaches to QoS lack the granularity to set ceilings, for example, or to allow bursting programmatically through a complete and well-defined REST API that serves the needs of individual applications rather than taking a one-size-fits-all approach. NetApp Inc.’s SolidFire has differentiated in this manner from its early days and is a good example of a true software-defined approach, allowing both capacity and performance to be provisioned dynamically through software (much as in the provisioning sketch above). Capabilities like this are important for automating the provisioning and management of storage services at scale, a key criterion for replicating the public cloud on-prem.
  • Data Reduction – Floyer points out in his research that there are four areas of data reduction practitioners should understand: zero suppression, thin provisioning, compression and data de-duplication. Data sharing is a fifth and more nuanced capability that will become important in the future. According to Floyer:

To date, “the most significant shortfall in the Linux stack has been the lack of an integrated data reduction capability, including zero suppression, thin provisioning, de-duplication and compression.”

According to Floyer, “This void has been filled by the recent support of Permabit’s VDO data reduction stack (which includes all the data reduction components) by Red Hat.”

VDO stands for Virtual Data Optimizer. In a recent conversation with Wikibon, Permabit Chief Executive Tom Cook explained that as a Red Hat Technology partner, Permabit obtains early access to Red Hat software, which allows VDO testing and deep integration into the operating system, underscoring Floyer’s argument.
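
To make those techniques concrete, the toy Python sketch below applies three of them (zero suppression, de-duplication and compression) to a stream of fixed-size blocks. It is a conceptual illustration only, not Permabit’s implementation; a VDO-style layer does this inline, at scale and with far more sophisticated indexing.

    # Toy block-level data reduction pipeline: zero suppression, de-duplication,
    # then compression. Conceptual only; real stacks do this inline and at scale.
    import hashlib
    import zlib

    BLOCK = 4096  # 4 KiB blocks, a common unit for de-duplication

    def reduce_blocks(data: bytes):
        seen = {}      # fingerprint -> stored block id (dedup index)
        unique = []    # compressed unique blocks actually stored
        recipe = []    # per-block instructions needed to rebuild the data
        for off in range(0, len(data), BLOCK):
            block = data[off:off + BLOCK]
            if block.count(0) == len(block):
                recipe.append(("zero", len(block)))      # zero suppression
                continue
            fp = hashlib.sha256(block).digest()          # content fingerprint
            if fp in seen:
                recipe.append(("dup", seen[fp]))         # de-duplicated block
            else:
                seen[fp] = len(unique)
                unique.append(zlib.compress(block))      # compression
                recipe.append(("new", seen[fp]))
        stored = sum(len(b) for b in unique)
        return recipe, unique, stored

    if __name__ == "__main__":
        sample = (b"\x00" * BLOCK * 4) + (b"hello world " * 400)[:BLOCK] * 8
        _, _, stored = reduce_blocks(sample)
        print(f"logical {len(sample)} bytes -> stored {stored} bytes")

On the sample input, about 48 KB of logical data reduces to well under 1 KB of stored data, because the zero blocks vanish, the repeated blocks collapse to one, and the survivor compresses.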

Why is this relevant? The answer is cost.

The cost challenge

Data reduction is a wonky topic for chief information officers, but it’s important because, despite the falling cost per bit, storage remains a huge expense for buyers, often accounting for between 15 and 50 percent of IT infrastructure capital expenditures. As organizations build open hybrid cloud architectures and attempt to compete with public cloud offerings, Linux storage must not only be functionally robust, it must keep getting dramatically cheaper.

The storage growth curve, which for decades marched to the cadence of Moore’s Law, is reshaping and growing at exponential rates. The Internet of Things, machine-to-machine communications and 5G will only accelerate this trend.

Data reduction services have been a huge tailwind for more expensive flash devices and are fundamental to reducing costs going forward. Traditionally, Linux customers have achieved efficiencies by acquiring data reduction services (e.g., compression and de-dupe) through an array, which may lower the cost of that array but perpetuates the iron triangle and, longer term, hurts the overall cost model.
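
Back-of-the-envelope arithmetic shows why OS-level reduction matters; the figures below are illustrative assumptions, not vendor pricing.

    # Illustrative flash economics (assumed numbers, not vendor pricing).
    raw_cost_per_tb = 400.0   # assumed $/TB for flash capacity
    logical_tb = 1000         # data the business actually needs to store
    reduction_ratio = 4.0     # assumed 4:1 from de-dupe plus compression

    physical_tb = logical_tb / reduction_ratio
    print(f"without reduction: ${raw_cost_per_tb * logical_tb:,.0f}")
    print(f"with 4:1 reduction: ${raw_cost_per_tb * physical_tb:,.0f}")
    # without reduction: $400,000
    # with 4:1 reduction: $100,000 (effective cost falls from $400/TB to $100/TB)

When that arithmetic lives in the OS, every workload and every tier benefits, rather than only the data that happens to sit behind a particular array controller.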

As underscored in Floyer’s research, the modern approach is to access sets of services that are integrated into the OS and delivered via Linux within an orchestration/automation framework that can manage the workflow. Some cloud service providers (outside the hyperscale crowd) are sophisticated and have leveraged open-source services to achieve hyperscale-like benefits. Increasingly, these capabilities are coming to established enterprises via the Linux ecosystem and are achieving tighter integration, as discussed earlier.

More work to be done

Wikibon community data center practitioners typically cite three primary areas that observers should watch as indicators of Linux maturity generally and software-defined storage specifically:

1. The importance of orchestration and automation

To truly leverage these services, a management framework is necessary to understand what services have been invoked, to ensure recovery is in place (if needed) and give confidence that software-defined storage and associated services can deliver consistently in a production environment.

Take encryption together with data reduction. Data must be reduced before it is encrypted, because encryption eliminates the very patterns that data de-duplication and compression are trying to find. This example illustrates the benefit of integrated services: if something goes wrong during the process, the system must have deep knowledge of exactly what happened and how to recover. The ideal solution is to have encryption, de-dupe and compression integrated as a set of services embedded in the OS and invoked programmatically by the application where needed and appropriate.
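
A small experiment makes the ordering argument tangible. The sketch below, assuming Python’s standard zlib module and the third-party cryptography package, compresses a repetitive byte stream before and after encrypting it; only the reduce-then-encrypt order finds the redundancy.

    # Why data reduction must run before encryption: ciphertext is pattern-free,
    # so compression (like de-duplication) finds nothing left to remove.
    import zlib
    from cryptography.fernet import Fernet  # pip install cryptography

    f = Fernet(Fernet.generate_key())
    data = b"log line: user=alice action=login status=ok\n" * 1000

    reduce_then_encrypt = f.encrypt(zlib.compress(data))
    encrypt_then_reduce = zlib.compress(f.encrypt(data))

    print(f"original:            {len(data)} bytes")
    print(f"compress -> encrypt: {len(reduce_then_encrypt)} bytes")  # tiny
    print(f"encrypt -> compress: {len(encrypt_then_reduce)} bytes")
    # The second result stays near the original size: only Fernet's base64
    # encoding overhead is squeezed out, none of the underlying redundancy.
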

2. Application performance

Wikibon believes that replicating hyperscaler-like models on-prem will increasingly require integrating data management features into the OS. Technologists in the Wikibon community indicate that the highest-performance workloads will move to software-led environments leveraging emerging non-volatile memory technologies such as NVMe and NVMe over Fabrics (NVMf), and that over time these systems will eliminate what some call the “horrible storage stack,” meaning the overly cumbersome storage protocols that have been forged into the iron triangle for years. This will take time, but the business value could be overwhelming, with game-changing performance and low latency as disruptive to storage as high-frequency trading has been to Wall Street, ideally without the downside.

3. Organizational issues

As Global 2000 organizations adopt this new software-led approach, there are non-technology-related issues that must be overcome. “People, process and technology” is a bit of a bromide, but we hear it all the time: “Technology is the easy part…. People and process are the difficult ones.” The storage iron triangle will not be easily disassembled. The question remains: Will the economics of open source and business model integrations such as those discussed here overwhelm entrenched processes and the people who own them?

On the surface, open-source services are the most likely candidates to replicate hyperscale environments because of their collective pace of innovation and economic advantages. To date, however, a company such as VMware has demonstrated that it can deliver more robust enterprise services faster than the open-source alternatives, though not at hyperscale.

History is on the side of open source. If the ecosystem can deliver on its cost, scalability and functionality promises, it’s a good bet that the tech gap will close rapidly and economic momentum will follow. Process change and people skills will likely be more challenging.

(Disclosure: Wikibon is a division of SiliconANGLE Media Inc., the publisher of Siliconangle.com. Many of the companies referenced in this post are clients of Wikibon. Please read my Ethics Statement.)

 

Read more


Enterprise storage in 2017: trends and challenges

| Information Age

Information Age previews the storage landscape in 2017 – from the technologies that businesses will implement to the new challenges they will face.

The enthusiastic outsourcing to the cloud by enterprise CIOs in 2016 will start to tail off in 2017, as finance directors discover that the high costs are not viable long-term. Board-level management will try to reconcile the alluring simplicity they bought into against the lack of visibility into hardware and operations.

As enterprises attempt to solve the issue of maximising a return for using the cloud, many will realise that the arrangement they are in may not be suitable across the board and seek to bring some of their data back in-house.

It will sink in that using cloud for small data sets can work really well in the enterprise, but as soon as the volume of data grows to a sizeable amount, the outsourced model becomes extremely costly.

Enterprises will extract the most value from their IT infrastructures through hybrid cloud in 2017, keeping a large amount of data on-premise using private cloud and leveraging key aspects of public cloud for distribution, crunching numbers and cloud compute, for example.

‘The combined cost of managing all storage from people, software and full infrastructure is getting very expensive as retention rates on varying storage systems differ,’ says Matt Starr, CTO at Spectra Logic. ‘There is also the added pressure of legislation and compliance as more people want or need to keep everything forever.

‘We predict no significant uptick on storage spend in 2017, and certainly no drastic doubling of spend,’ says Starr. ‘You will see the transition from rotational to flash. Budgets aren’t keeping up with the rates that data is increasing.’

The prospect of a hybrid data centre will, however, trigger more investment eventually. The model pairs a more efficient capacity tier, based on pure object storage at the drive level, with a performance tier above it combining high-performance hard disk drives (HDDs) and solid-state drives (SSDs).

Hybrid technology has been used successfully in laptops and desktop computers for years, but it’s only just beginning to be considered for enterprise-scale data centres.

While the industry is in the very early stages of implementing this new method for enterprise, Fagan expects 70% of new data centres to be hybrid by 2020.

‘This is a trend that I expect to briskly pick up pace,’ he says. ‘As the need for faster and more efficient storage becomes more pressing, we must all look to make smart plans for the inevitable data.’

One “must have” is data reduction technology. Applying data reduction in the software stack improves data density, cost and efficiency. If Red Hat Linux is part of your strategy, deploying Permabit VDO data reduction is as easy as plug in and go: storage consumption, data center footprint and operating costs can drop by 50% or more.

 

Read more


Worldwide Enterprise Storage Market Sees Modest Decline in Third Quarter, According to IDC

| idc.com

Total worldwide enterprise storage systems factory revenue was down 3.2% year over year and reached $8.8 billion in the third quarter of 2016 (3Q16), according to the International Data Corporation (IDC) Worldwide Quarterly Enterprise Storage Systems Tracker. Total capacity shipments were up 33.2% year over year to 44.3 exabytes during the quarter. Revenue growth increased within the group of original design manufacturers (ODMs) that sell directly to hyperscale datacenters. This portion of the market was up 5.7% year over year to $1.3 billion. Sales of server-based storage were relatively flat, at -0.5% during the quarter and accounted for $2.1 billion in revenue. External storage systems remained the largest market segment, but the $5.4 billion in sales represented a decline of 6.1% year over year.

“The enterprise storage market closed out the third quarter on a slight downturn, while continuing to adhere to familiar trends,” said Liz Conner, research manager, Storage Systems. “Spending on traditional external arrays resumed its decline and spending on all-flash deployments continued to see good growth and helped to drive the overall market. Meanwhile the very nature of the hyperscale business leads to heavy fluctuations within the market segment, posting solid growth in 3Q16.”

Read more



Software-Defined Storage Market Projected to Reach 22.56 Billion USD by 2021

| Stock Market

North America is expected to lead the Software-Defined Storage (SDS) market, as governments in the region have initiated many digitalization projects, making the region the largest adopter of SDS solutions. The 167-page report categorizes the global SDS market by solution (software-defined server, data security & compliance, controller, data management and hypervisor), as well as by services, usage, organization size, application area and geography.

According to the report “Software-Defined Storage Market by Component [Platforms/Solutions (Software-Defined Server, Data Security & Compliance, Controller, Data Management, and Hypervisor), Services], Usage, Organization Size, Application Area – Global Forecast to 2021,” the global market is expected to grow from USD 4.72 billion in 2016 to USD 22.56 billion by 2021, at a Compound Annual Growth Rate (CAGR) of 36.7%.
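
The quoted growth rate is easy to sanity-check with a few lines of Python:

    # Verify the forecast CAGR: $4.72B (2016) to $22.56B (2021) over five years.
    start, end, years = 4.72, 22.56, 5
    cagr = (end / start) ** (1 / years) - 1
    print(f"CAGR = {cagr:.1%}")  # CAGR = 36.7%, matching the report
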

Exponentially growing data volumes across enterprises, the rise of the “software-defined” concept, and the need for cost optimization in data management are major driving factors for the SDS market. Furthermore, the need to avoid storage infrastructure downtime, together with a competitive market environment around this innovative technology, is expected to create growth opportunities for the SDS market.

Data security and compliance software is expected to be the largest contributor in the global SDS market during the forecast period

Organizations must follow compliance policies and guidelines for storing and sharing data while securing business-critical information. The requirement for security and compliance functions within SDS solutions has increased demand for this software, which is expected to contribute the largest share of overall SDS market revenue during the forecast period.

The support and maintenance segment is expected to show significant growth rate during the forecast period

Demand for services is increasing significantly along with the growth of the SDS market. Support and maintenance services help organizations get the maximum benefit from their SDS software investment, and customers can get better assistance for their SDS solutions through various levels of support programs. The support and maintenance market will keep growing owing to the need for consistent support in deploying and utilizing SDS solutions.

Additionally, we are seeing data reduction solutions added to SDS offerings, enabling extremely efficient use of data storage while improving data density and optimizing data center footprint.

Read more

Next Generation Data Storage Technologies Market Forecast Up to 2024

| openPR.com

Next generation data storage technology comprises technologically advanced products and solutions designed to deal with increasing file sizes and huge amounts of unstructured data. It manages large data sets securely, enables reliable, secure and fast recovery of data in a cost-efficient manner, and provides scalable storage for the large data volumes generated by big enterprises.

Factors favoring the growth of the next generation data storage technologies market include the ubiquity of input and output devices in every sector and the ever-increasing need to manage, analyze and store huge amounts of data. Consequently, demand for these technologies is expected to increase rapidly over the forecast period, backed by growing demand for advanced time-saving technologies, including automated systems, smart technologies, online shopping and the Internet of Things, all of which require handling the large volumes of data enterprises generate.

Various challenges restrain the growth of the next generation data storage technologies market, including technological complexity, repair and restore issues, and gaps in security; data storage also demands a high level of data consistency. Future growth is projected to come from the emerging need for data storage in small and medium enterprises.

The next generation data storage technologies market is segmented on the basis of technology and application. By technology, the market comprises all-flash storage arrays, hybrid arrays, cloud-based disaster recovery, holographic data storage and heat-assisted magnetic recording. Of these, the hybrid array is a form of hierarchical storage management that combines solid-state drives and hard disk drives to improve input/output speed (a toy sketch of this tiering idea follows below). Holographic data storage is a high-capacity data storage technology, whereas hybrid arrays and all-flash arrays are standard data storage techniques.

By application, the next generation data storage technologies market is divided into enterprise data storage, big data storage and cloud-based storage.
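
As a rough illustration of the hierarchical storage management idea behind the hybrid arrays described above, the Python sketch below promotes the most frequently read blocks to a small fast (SSD-like) tier; real arrays use far richer heuristics such as recency, sequentiality and application hints.

    # Toy hierarchical storage management: keep the hottest blocks on the fast
    # tier. Real hybrid arrays use far richer placement heuristics.
    from collections import Counter

    FAST_TIER_SLOTS = 2  # assumed tiny SSD tier, for illustration only

    class HybridStore:
        def __init__(self):
            self.heat = Counter()   # access counts per block
            self.fast = set()       # block ids currently on the SSD tier

        def read(self, block_id):
            self.heat[block_id] += 1
            self._rebalance()
            return "ssd" if block_id in self.fast else "hdd"

        def _rebalance(self):
            # Promote the hottest blocks; everything else stays on (or
            # returns to) the HDD capacity tier.
            self.fast = {b for b, _ in self.heat.most_common(FAST_TIER_SLOTS)}

    store = HybridStore()
    for b in [1, 1, 1, 2, 2, 3]:
        print(f"block {b} served from {store.read(b)}")
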

North America dominates the next generation data storage technologies market. The Asia Pacific countries, including China, Japan and India, are expected to grow at a significant rate compared with other regions; the presence of a large number of IT companies there is a key growth driver. Asia Pacific countries are expected to invest heavily in the data storage sector, equipping existing infrastructure with new storage technologies and solutions to improve production processes. Japan, one of the most technologically advanced nations, is anticipated to be a big market, as the country already uses these data storage technologies across various industry verticals.

Some of the key players in the next generation data storage technology market are Dell Inc., Avago Technologies, EMC Corporation, Hewlett-Packard Development Company, L.P., HGST, Inc., Hitachi Data Systems, IBM Corporation, NetApp, Inc., Drobo, Inc. and Micron Technology Corporation.

Read more


WW Cloud IT Infrastructure Revenue Up 14.5% to $7.7 Billion in 2Q16 – IDC

| storagenewsletter.com

According to the International Data Corporation‘s Worldwide Quarterly Cloud IT Infrastructure Tracker, vendor revenue from sales of infrastructure products (server, storage, and Ethernet switch) for cloud IT, including public and private cloud, grew by 14.5% year over year to $7.7 billion in 2Q16, ahead of renewed hyperscale growth expected in 2H16.

The overall share of cloud IT infrastructure sales climbed to 34.9% in 2Q16, up from 30.6% a year ago. Revenue from infrastructure sales to private cloud grew by 14.0% to $3.1 billion, and to public cloud by 14.9% to $4.6 billion. In comparison, revenue in the traditional (non-cloud) IT infrastructure segment decreased 6.1% year over year in the second quarter. Private cloud infrastructure growth was led by Ethernet switch at 49.4% year-over-year growth, followed by storage at 19.7%, and server at 8.9%. Public cloud growth was also led by Ethernet switch at 61.8% year-over-year growth, followed by server at 25.1% while storage revenue for public cloud declined 6.2% year over year. In traditional IT deployments, server declined the most (7.5% year over year) with Ethernet switch and storage declining 2.2% and 2.0%, respectively.

“As expected, the hyperscale slowdown continued in the second quarter of 2016,” said Kuba Stolarski, research director for computing platforms, IDC. “However, deployments to mid-tier and small cloud service providers showed strong growth, along with private cloud buildouts. In general, the second quarter did not have as difficult a compare to the prior year as the first quarter did, and this helped improve growth results across the board compared to last quarter. In 2H16, IDC expects to see strengthening in public cloud growth as key hyperscalers bring new datacenters online around the globe, continued strength in private cloud deployments, and declines in traditional, non-cloud deployments.”

Read more

Why NetApp’s Stock Is Worth $28

| nasdaq.com

NetApp (NTAP) has seen weak demand for storage hardware over the last few years, corresponding to weak IT spending across the globe. NetApp’s storage product revenues have fallen consistently over the last four years, a trend observed by many large IT hardware, telecom hardware and storage hardware vendors. Competing storage systems manufacturers EMC (NYSE:EMC), Hewlett-Packard Enterprise (HPE), Hitachi Data Systems and IBM (IBM) have also witnessed low demand for storage hardware. As a result, storage systems manufacturers are shifting their focus to fast-growing market domains such as flash-based storage arrays, converged systems (which combine servers, storage and networking equipment in one box) and software-defined storage to stay relevant. Moreover, it has become imperative for hardware vendors to enhance their focus on software solutions and post-sales hardware maintenance and services, given that these are higher-margin businesses with consistently high customer demand.

Below we take a look at key growth drivers for the company that justify our $28 price estimate for NetApp, which is around 15-20% lower than the current market price. NetApp’s stock price is up by over 30% since the beginning of the year.

Storage vendors are increasingly facing competition from so-called white box storage vendors. Over the last few years, customers have been shifting preference to low-cost original design manufacturer (ODM) storage boxes, which is cutting into the addressable market for large vendors. As a result, NetApp’s share of the external storage systems market has fallen from over 13% in 2013 to 11.1% in 2015. This trend could continue in the coming years, with smaller vendors gaining share from large manufacturers.

Low product sales have led to discounted selling prices, which ultimately drove down product margins significantly. The adjusted gross margin for the product division has fallen from 55.6% in 2011 to around 50.3% in 2015, and could fall further to around 47.3% in 2016.

In addition to driving the top line, the hardware maintenance and services division has also contributed positively to improving the company’s profitability. The product division’s gross margins fell by over 5 percentage points from 2011 through 2015 due to pricing pressure from smaller vendors. On the other hand, the services division’s gross margin improved by over 5 percentage points. In the long run, the services division could continue to become more profitable for the company as a large aggregate client base could lead to a higher refresh rate for maintenance contract renewals.

However, the sustained weakness in NetApp’s core product division and over-dependence on one revenue stream could be a risk going forward. As a result, we maintain our $28 price estimate for NetApp’s stock. You can modify the interactive charts in the article above to see how much the change in individual drivers such as gross margins or market share impacts the price estimate for NetApp’s stock.

Read more: http://www.nasdaq.com/article/why-netapps-stock-is-worth-28-cm686258#ixzz4Lfup88wV

Read more


Worldwide Enterprise Storage Market Holds Steady in Second Quarter

| news.morningstar.com

Total worldwide enterprise storage systems factory revenue remained flat year over year (0.0% growth) at $8.8 billion during the second quarter of 2016 (2Q16), according to the International Data Corporation (IDC) Worldwide Quarterly Enterprise Storage Systems Tracker. Total capacity shipments were up 12.9% year over year to 34.7 exabytes during the quarter. Revenue declined within the group of original design manufacturers (ODMs) that sell directly to hyperscale datacenters; this portion of the market was down 21.5% year over year to $794.7 million. Sales of server-based storage were up 9.8% during the quarter and accounted for almost $2.4 billion in revenue. External storage systems remained the largest market segment, but the $5.7 billion in sales represented flat 0.0% year-over-year growth.

“After a slow start to the year, the enterprise storage system market remained steady during the second quarter,” said Liz Conner, research manager, Storage Systems. “Spending on all flash deployments continues to grow and help drive the market. The decreasing cost of flash media, coupled with increasing use cases, high density deployments, and availability of flash-based storage products, have resulted in rapid adoption throughout the market.”

2Q16 Total Enterprise Storage Systems Market Results

EMC and HPE remained in a statistical tie for the top position within the total worldwide enterprise storage systems market, accounting for 18.1% and 17.6% of spending, respectively. HPE’s year-over-year growth rate as reported by IDC was impacted by the start of the H3C partnership in China in May 2016; as a result, a portion of HPE-designed storage systems were rebranded for the China market and no longer count in HPE’s market data from that point forward. Dell held the next position with an 11.5% share of revenue during the quarter. IBM and NetApp accounted for 6.8% and 6.7% of global spending, respectively. As a single group, storage systems sales by original design manufacturers (ODMs) selling directly to hyperscale data center customers accounted for 9.0% of global spending during the quarter.

Read more


This former EMC exec says Amazon ate his old business and it will never recover

| Business Insider

The enterprise storage market gave rise to multi-billion dollar companies like EMC (being bought by Dell) and NetApp (struggling to grow revenues), as well as storage units at Dell, Hitachi, and others. Overall, storage revenues dropped 2% in 2015, with EMC down by 5% and NetApp down by nearly 15%, IDC reports.

That traditional storage market, where companies buy specialized hardware called storage arrays to hold and manage corporate data, is never coming back, says Mark Lewis, a long-time storage exec who was once EMC’s CTO and chief strategy officer.

There are two reasons for the death spiral, he says:

  • Storage technology continually gets faster and cheaper.
  • Amazon changed the game.

With Amazon, companies no longer need to buy big, expensive storage systems and store everything on their own.

Meanwhile, Amazon itself, along with other huge internet companies like Google and Facebook, doesn’t buy storage arrays.

Instead, these giants have built their own homegrown storage software that allows them to use the type of inexpensive storage used by ordinary computer servers. This lets them have faster, cheaper, and more reliable storage than the options sold by EMC, NetApp, or even relatively newer companies like Pure Storage, Lewis says.

Read more at http://www.businessinsider.co.id/amazon-ate-emc-says-former-emc-exec-2016-8/#AIryqkbteADfSqEG.99

Read more