
Worldwide Enterprise Storage Market Sees Modest Decline in Third Quarter, According to IDC

| idc.com

Total worldwide enterprise storage systems factory revenue was down 3.2% year over year and reached $8.8 billion in the third quarter of 2016 (3Q16), according to the International Data Corporation (IDC) Worldwide Quarterly Enterprise Storage Systems Tracker. Total capacity shipments were up 33.2% year over year to 44.3 exabytes during the quarter. Revenue growth increased within the group of original design manufacturers (ODMs) that sell directly to hyperscale datacenters. This portion of the market was up 5.7% year over year to $1.3 billion. Sales of server-based storage were relatively flat, at -0.5% during the quarter and accounted for $2.1 billion in revenue. External storage systems remained the largest market segment, but the $5.4 billion in sales represented a decline of 6.1% year over year.

“The enterprise storage market closed out the third quarter on a slight downturn, while continuing to adhere to familiar trends,” said Liz Conner, research manager, Storage Systems. “Spending on traditional external arrays resumed its decline and spending on all-flash deployments continued to see good growth and helped to drive the overall market. Meanwhile the very nature of the hyperscale business leads to heavy fluctuations within the market segment, posting solid growth in 3Q16.”

Read more

Next Generation Data Storage Technologies Market Forecast Up to 2024

| openPR.com

Next-generation data storage technology encompasses technologically advanced products and solutions built to deal with increasing file sizes and huge amounts of unstructured data. These technologies manage large volumes of data securely and enable reliable, fast, and cost-efficient data recovery. They have made scalable storage and handling of the large data sets generated by big enterprises possible.

The factors favoring the growth of the next-generation storage technologies market include the ubiquity of input and output devices in every sector and the ever-increasing need to manage, analyze, and store huge amounts of data. Consequently, demand for next-generation data storage technologies is expected to increase rapidly over the forecast period. This growth is expected to be backed by growing demand for advanced time-saving technologies, including automated systems, smart technologies, online shopping, and the Internet of Things, all of which require handling the large volumes of data that enterprises generate.

Various challenges restrain the growth of the next-generation data storage technologies market, including technological complexity, repair and restore issues, and gaps in security. Furthermore, a high level of data consistency is required in data storage. Future growth in the market is projected to come from the emerging need for data storage in small and medium enterprises.

The next-generation data storage technologies market is segmented on the basis of technology and application. By technology, the market is classified into all-flash storage arrays, hybrid arrays, cloud-based disaster recovery, holographic data storage, and heat-assisted magnetic recording. Of these, the hybrid array is a form of hierarchical storage management that combines solid-state drives and hard disk drives to improve input and output speeds. Holographic data storage is a high-capacity storage technology, whereas hybrid arrays and all-flash arrays are standard data storage techniques.

By application, the next-generation data storage technologies market is divided into enterprise data storage, big data storage, and cloud-based storage.

North America dominates the next-generation data storage technologies market. Asia Pacific countries, including China, Japan, and India, are expected to grow at a significant rate compared with other regions. The presence of a large number of IT industries in the Asia Pacific region is one of the key factors driving growth of the market there. Asia Pacific countries are expected to make large investments in the data storage sector, equipping their existing infrastructure with new data storage technologies and solutions to improve the production process. Japan, one of the most technologically advanced nations, is anticipated to be a big market for next-generation data storage technologies; the country already uses these storage technologies across its various industry verticals.

Some of the key players in the next-generation data storage technology market are Dell Inc., Avago Technologies, EMC Corporation, Hewlett-Packard Development Company, L.P., HGST, Inc., Hitachi Data Systems, IBM Corporation, NetApp, Inc., Drobo, Inc. and Micron Technology Corporation.

Read more

Why NetApp’s Stock Is Worth $28

| nasdaq.com

NetApp (NTAP) has seen weak demand for storage hardware over the last few years, corresponding to weak IT spending across the globe. NetApp’s storage product revenues have fallen consistently over the last four years, a trend observed at many large IT hardware, telecom hardware and storage hardware vendors. Competing storage systems manufacturers EMC (NYSE:EMC), Hewlett-Packard Enterprise (HPE), Hitachi Data Systems and IBM (IBM) have also witnessed low demand for storage hardware. As a result, storage systems manufacturers are shifting their focus to fast-growing market domains such as flash-based storage arrays, converged systems (which combine servers, storage and networking equipment in one box) and software-defined storage to stay relevant. Moreover, it has become imperative for hardware vendors to enhance their focus on software solutions and post-sales hardware maintenance and services, given that these are higher-margin businesses and have seen high customer demand over the years.

Below we take a look at the key growth drivers for the company that justify our $28 price estimate for NetApp, which is around 15-20% lower than the current market price. NetApp’s stock price is up by over 30% since the beginning of the year.

Storage vendors are increasingly facing competition from so-called white-box storage vendors. Over the last few years, customers have been shifting their preference to low-cost original design manufacturer (ODM) storage boxes, which is cutting into the addressable market for large vendors. As a result, NetApp’s share of the external storage systems market fell from over 13% in 2013 to 11.1% in 2015. This trend could continue in the coming years, with smaller vendors gaining share from large manufacturers.

Low product sales have led to discounted selling prices, which have driven product margins down significantly. The adjusted gross margin for the product division fell from around 55.6% in 2011 to around 50.3% in 2015, and could fall further to around 47.3% in 2016.

In addition to driving the top line, the hardware maintenance and services division has also contributed positively to improving the company’s profitability. The product division’s gross margins fell by over 5 percentage points from 2011 through 2015 due to pricing pressure from smaller vendors. On the other hand, the services division’s gross margin improved by over 5 percentage points. In the long run, the services division could continue to become more profitable for the company as a large aggregate client base could lead to a higher refresh rate for maintenance contract renewals.

However, the sustained weakness in NetApp’s core product division and over-dependence on one revenue stream could be a risk going forward. As a result, we maintain our $28 price estimate for NetApp’s stock. You can modify the interactive charts in the article above to see how much the change in individual drivers such as gross margins or market share impacts the price estimate for NetApp’s stock.

Read more: http://www.nasdaq.com/article/why-netapps-stock-is-worth-28-cm686258

Read more


Worldwide Enterprise Storage Market Holds Steady in Second Quarter

| news.morningstar.com

Total worldwide enterprise storage systems factory revenue remained flat year over year, posting 0.0% growth and $8.8 billion during the second quarter of 2016 (2Q16), according to the International Data Corporation (IDC) Worldwide Quarterly Enterprise Storage Systems Tracker. Total capacity shipments were up 12.9% year over year to 34.7 exabytes during the quarter. Revenue growth declined within the group of original design manufacturers (ODMs) that sell directly to hyperscale datacenters. This portion of the market was down 21.5% year over year to $794.7 million. Sales of server-based storage were up 9.8% during the quarter and accounted for almost $2.4 billion in revenue. External storage systems remained the largest market segment, but the $5.7 billion in sales represented flat 0.0% year-over-year growth.

“After a slow start to the year, the enterprise storage system market remained steady during the second quarter,” said Liz Conner, research manager, Storage Systems. “Spending on all flash deployments continues to grow and help drive the market. The decreasing cost of flash media, coupled with increasing use cases, high density deployments, and availability of flash-based storage products, have resulted in rapid adoption throughout the market.”

2Q16 Total Enterprise Storage Systems Market Results

EMC and HPE remained in a statistical tie* for the top position within the total worldwide enterprise storage systems market, accounting for 18.1% and 17.6% of spending respectively. HPE’s year-over-year growth rate as reported by IDC was impacted by the start of the H3C partnership in China that began in May of 2016; as a result, a portion of HPE-designed storage systems were rebranded for the China market and do not count in HPE’s market data from that point forward. Dell held the next position with an 11.5% share of revenue during the quarter. IBM and NetApp accounted for 6.8% and 6.7% of global spending respectively. As a single group, storage systems sales by original design manufacturers (ODMs) selling directly to hyperscale data center customers accounted for 9.0% of global spending during the quarter.

Read more

Global Data Center Storage Market to Grow at a CAGR of 15.01% thru 2020

| Press releases

Global Data Center Storage Market 2016-2020, published 2016-07-26 and available for US$2,500 at Researchmoz.us, describes the market as follows. A data center storage system is a repository for storing business information or data for a period of time, depending on the needs of end users. Enterprise users can fetch these data and share them through interconnected networks or online. Data center storage comprises SSD and HDD devices that are commonly used in SAN, NAS, and DAS environments.

Technavio’s analysts forecast the global data center storage market to grow at a CAGR of 15.01% during the period 2016-2020.

Covered in this report

The report covers the present scenario and the growth prospects of the global data center storage market for 2016-2020. To calculate the market size, the report considers revenue generated from the sales of storage area network (SAN), network-attached storage (NAS), and direct-attached storage (DAS) systems.

The market is divided into the following segments based on geography:

Americas

APAC

EMEA

Read more


EMC Storage Hardware Sales To Remain Suppressed, Services To Drive Growth

| nasdaq.com

Over the last couple of years, EMC has witnessed a slowdown in its core information storage business, with its subsidiary VMware (VMW) driving much of the growth.

The net revenues generated by EMC’s information infrastructure segment, which includes product and services revenues for storage hardware, content management and information security, fell by 6% year over year to $3.4 billion in the March quarter. On the other hand, the combined revenues of VMware and Pivotal were up almost 7% to $1.7 billion in the same period. The trend is expected to continue through the June quarter, given the weak global spending on storage hardware. The company has not provided any guidance for 2016 due to the pending acquisition by Dell, which is likely to be completed by Q3 this year.

According to IDC-reported data, EMC’s share of the storage systems market has fallen considerably over the last couple of years, a trend consistent across most large storage vendors, including NetApp, Hewlett Packard Enterprise, Hitachi and IBM.

EMC’s share in the market stood at 24.9%, down roughly 5 percentage points from early 2015 levels. EMC’s revenue decline in the March quarter this year outpaced the industry-wide decline.

Correspondingly, EMC’s Information Storage revenues have declined as a proportion of EMC’s net revenues over the last few years. We forecast EMC’s information storage revenues to decline from 73% of net revenues in 2011 to about 66% in 2016 and subsequently to around 60% in 2021. Comparatively, VMware’s contribution to EMC’s top line and operating profits has increased over the past few years.

Read more: http://www.nasdaq.com/article/emc-earnings-preview-storage-hardware-sales-to-remain-suppressed-services-to-drive-growth-cm649970

Read more


Hyper-converged architectures receive value from deduplication process

| Converged and hyper

As capacity becomes cheaper, some storage administrators might assume deduplication is less relevant. But in hyper-converged data centers, the technology is as important as ever.

Deduplication, the process of eliminating redundant data segments across files, brings value to all parts of the data center. It allows backup targets to approach the price of tape libraries and permits all-flash arrays to compete favorably with hard disk-based systems. Savings are not only seen in terms of capacity, but in performance due to the elimination of writes. Using a deduplication process can add more potential value to hyper-converged architectures.
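As a rough sketch (the class and names below are illustrative, not any vendor's implementation), block-level deduplication amounts to a content-hash index: a write whose segment hash has been seen before stores only a reference, saving both the capacity and the physical write.

```python
import hashlib

class DedupStore:
    """Toy content-addressed store: one physical copy per unique segment."""

    def __init__(self):
        self.segments = {}       # sha256 digest -> segment bytes (physical copies)
        self.refs = []           # logical volume: ordered list of digests
        self.writes_skipped = 0  # physical writes eliminated by dedup

    def write(self, segment: bytes):
        digest = hashlib.sha256(segment).hexdigest()
        if digest in self.segments:
            self.writes_skipped += 1   # duplicate segment: reference only
        else:
            self.segments[digest] = segment
        self.refs.append(digest)

    def logical_bytes(self):
        return sum(len(self.segments[d]) for d in self.refs)

    def physical_bytes(self):
        return sum(len(s) for s in self.segments.values())

store = DedupStore()
for block in [b"A" * 4096, b"B" * 4096, b"A" * 4096, b"A" * 4096]:
    store.write(block)

print(store.logical_bytes())   # 16384 logical bytes...
print(store.physical_bytes())  # ...held in 8192 physical bytes
print(store.writes_skipped)    # 2 physical writes eliminated
```

The skipped writes are where the performance benefit mentioned above comes from: a duplicate segment never touches the media at all.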

Most hyper-converged architectures are hybrid, meaning they use flash and hard disk-based storage and transparently move data between those tiers. Deduping the flash storage tiers delivers the most return on an organization’s deduplication investment because flash has a higher cost per gigabyte versus hard disks. As a result, many vendors decide not to spend the compute resources required to deduplicate the hard-disk tier. The hard-disk tier is also slower and requires a more efficient deduplication process to avoid an effect on performance. This requires extra development resources as well. But if that investment is made, there is a payoff. While squeezing additional capacity out of the hard-disk tier does not deliver the dollar-per-gigabyte savings that flash does, it can help in the following ways.

Compute inefficiency: Hyper-converged systems scale capacity by adding nodes to the cluster. Each additional node typically provides a set amount of flash, hard-disk storage and additional compute resources, so scaling for capacity alone can leave the cluster with compute it does not need.

Using a deduplication process helps resolve, or at least limit, the compute inefficiency problem by enabling the architecture to densely pack data on both the flash and hard-disk tiers. This density means IT does not need to add nodes as quickly to keep up with capacity demands, so the cluster may not end up with excess compute resources as quickly. There is also the physical advantage of not having to take up as much data center floor space. While storage may be cheap, new data centers are not.
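The density argument comes down to simple arithmetic; the node capacity and reduction ratio below are illustrative assumptions, not figures from the article.

```python
import math

def nodes_needed(logical_tb, per_node_tb, dedup_ratio):
    """Nodes required to hold logical_tb of data when each node stores
    per_node_tb raw and deduplication packs data at dedup_ratio:1."""
    effective_per_node = per_node_tb * dedup_ratio
    return math.ceil(logical_tb / effective_per_node)

# 400 TB of logical data on hypothetical 20 TB nodes:
print(nodes_needed(400, 20, 1.0))  # 20 nodes without dedup
print(nodes_needed(400, 20, 2.5))  # 8 nodes at a 2.5:1 reduction
```

Each avoided node is also avoided compute, licensing, and floor space, which is the point being made above.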

Network efficiency: Hyper-converged architectures are busy frameworks of nodes. The architecture writes new data in segments, and each segment goes to a specific node. Inline deduplication identifies redundant data prior to sending it across the cluster; this increases network efficiency by the same factor as the deduplication rate.

As the cost per gigabyte of hard-disk storage — and especially flash-based storage — continues to decline, IT may regard deduplication as an unnecessary technology whose expense may not equal its potential payoff. But dedupe has other benefits, such as limiting the growth of cluster nodes and improving storage media and network performance through write elimination. Some vendors have even gone so far as to integrate data protection into their hyper-converged architectures by leveraging deduplication to make data protection nearly cost-free. Given these capabilities, deduplication is more valuable to hyper-converged architectures than ever.

Read more


WW Cloud Infrastructure Revenue Grows 3.9% to $6.6 Billion in 1Q16

| storagenewsletter.com

According to the International Data Corporation’s Worldwide Quarterly Cloud IT Infrastructure Tracker, vendor revenue from sales of infrastructure products (server, storage, and Ethernet switch) for cloud IT, including public and private cloud, grew by 3.9% year over year to $6.6 billion in 1Q16 on slowed demand from the hyperscale public cloud sector.

Total cloud IT infrastructure revenues climbed to a 32.3% share of overall IT revenues in 1Q16, up from 30.2% a year ago. Revenue from infrastructure sales to private cloud grew by 6.8% to $2.8 billion, and to public cloud by 1.9% to $3.9 billion.

In comparison, revenue in the traditional (non-cloud) IT infrastructure segment decreased by 6.0% year over year in the first quarter, with declines in both storage and servers, and growth in Ethernet switch.

“A slowdown in hyperscale public cloud infrastructure deployment demand negatively impacted growth in both public cloud and cloud IT overall,” said Kuba Stolarski, research director for computing platforms, IDC. “Private cloud deployment growth also slowed, as 2016 began with difficult comparisons to 1Q15, when server and storage refresh drove a high level of spend and high growth. As the system refresh has mostly ended, this will continue to push private cloud and, more generally, enterprise IT growth downwards in the near term. Hyperscale demand should return to higher deployment levels later this year, bolstered by service providers who have announced new datacenter builds expected to go online this year. As the market continues to work through this short term adjustment period, with geopolitical wild cards such as Brexit looming, end-customers’ decisions about where and how to deploy IT resources may be impacted. If new data sovereignty concerns arise, service providers will experience added pressure to increase local datacenter presence, or face potential loss of certain customers’ workloads.”


Read more


Permabit offers deduplication to Linux masses

| The Register

Data slimming tech for Hybrid Cloud Prof Services partners

Permabit has moved beyond OEMs, making the latest release of its dedupe technology available as a Linux software package so that ISVs, professional services folks and systems integrators in its Hybrid Cloud Professional Services partners programme can use it.

Previously it was available to OEMs in Albireo (dedupe) and Virtual Data Optimizer, or VDO (dedupe + compression + thin provisioning), form.

VDO v6 is designed for the cloud service provider market, Permabit says, and the VDO for Hybrid Cloud package simplifies VDO installation and configuration in Red Hat Enterprise Linux (RHEL) data centres.

Permabit says VDO is a ready-to-run kernel module for Linux, and is the only modular data reduction product available for the Linux block storage stack “that works with the broad range of open source and commercial software solutions”.

It has a block-level approach and can leverage existing file systems, volume management and data protection to deliver 4K inline, highly scalable data reduction in Linux storage environments. VDO supports block, file and object storage on RHEL and is compatible with Red Hat OpenStack, Ceph and Gluster.

A new VDO Optimizer file system is claimed to provide an up to 20x improvement in data reduction rates for existing archive and backup applications.

Howard Marks, chief scientist at DeepStorage, provided a canned quote: “Permabit’s VDO will not only optimise data on local storage but also in a hybrid cloud, significantly reducing the cost of cloud storage as well as the network load and storage ingest charges, since data is reduced before it’s transferred.”

We’re told that this new software is being evaluated by some of the world’s largest financial and communications companies as well as large government agencies.

This is a way for RHEL customers to get data reduction if their current storage hardware and software doesn’t supply it.

The latest version of VDO is available now to Permabit’s storage OEMs and members of its Hybrid Cloud Professional Services partners program. Further details will be announced in the near future. ®

Read more


Embracing the Increased Diversity of Storage

| itbusinessedge.com

Of the three pillars of enterprise infrastructure – compute, storage and networking – storage remains the most complex.

I know, processors are still gaining in strength and flexibility and networking is, well, networking, but in terms of options, storage is the most diverse. Do you go all-cloud, all-local, or hybrid? Do you opt for all-Flash or hybrid disk, or even tape, solutions? And then there is the rising cadre of in-memory and server-side solutions that do away with independent storage infrastructure altogether.

One thing is certain: The enterprise will need access to vast amounts of untapped storage in the coming years if it is to have any chance of realizing the benefits of Big Data and the Internet of Things. This may fly in the face of recent market data that has both the price and capacity of storage deployments on the wane, but as IDC noted in its latest quarterly assessment, this has more to do with changing buying patterns than diminishing demand. Sales of large external arrays, which represent the largest market segment, dropped by 3.7 percent, while ODM sales to hyperscale enterprises tumbled nearly 40 percent, which sounds like a lot but is largely in keeping with what has so far been a highly volatile market.

On the upside, however, both Flash-based solutions and server-side deployments are on the rise. According to new data from 451 Research, Flash is now present in 90 percent of enterprises, with more than half having already deployed hybrid SANs and another 30 percent looking to make the move within two years. Perhaps even more significant, 27 percent are running all-Flash arrays and an equal portion is planning to do the same in two years. The biggest barrier, of course, is cost, which is why many organizations are pairing their Flash systems with dedupe and compression to stretch capacity as much as five-fold.
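A quick worked example shows why data reduction changes the flash cost calculus; the per-gigabyte prices below are illustrative assumptions, not figures from 451 Research.

```python
def effective_cost_per_gb(raw_cost_per_gb, reduction_ratio):
    """Cost per logical gigabyte once data reduction is applied."""
    return raw_cost_per_gb / reduction_ratio

flash, disk = 0.50, 0.10   # illustrative raw $/GB prices

# At the five-fold reduction cited above, flash's effective price
# per logical gigabyte approaches that of raw disk:
print(effective_cost_per_gb(flash, 5))  # effective $/GB, approx. 0.10
print(effective_cost_per_gb(disk, 1))   # disk without reduction, 0.10
```

The same arithmetic explains why vendors concentrate their dedupe effort on the flash tier first: the higher the raw price, the larger the dollar savings per deduplicated gigabyte.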

But since storage is at heart a commodity, many enterprises make the mistake of basing their deployment decisions on technology rather than operational criteria like cost and performance. As HPCwire’s Frank Merritt notes, the primary goal for most organizations should be to build storage infrastructure with a low TCO by taking into account not just upfront costs but lifecycle factors as well. Some best practices to abide by are extensive leveraging of legacy infrastructure and deployment of new systems that stress flexibility, ease of use and, most importantly, scalability. Increased modularity is also a key attribute as it improves the value of physical data center space.

The modern storage environment, then, will be vastly different from the monolithic arrays of the past, and even the criteria for evaluating successful storage operations are shifting away from raw capacity to high degrees of flexibility and performance.

The underlying function is still the same – to keep data readily available – but the scale and scope of that challenge is changing dramatically as the enterprise transitions to the digital economy. Traditional storage architectures still have a role to play, but they are no longer the only game in town.

Read more