
WW Cloud Infrastructure Revenue Grows 3.9% to $6.6 Billion in 1Q16

| storagenewsletter.com

According to the International Data Corporation’s Worldwide Quarterly Cloud IT Infrastructure Tracker, vendor revenue from sales of infrastructure products (server, storage, and Ethernet switch) for cloud IT, including public and private cloud, grew by 3.9% year over year to $6.6 billion in 1Q16 on slowed demand from the hyperscale public cloud sector.

Total cloud IT infrastructure revenues climbed to a 32.3% share of overall IT revenues in 1Q16, up from 30.2% a year ago. Revenue from infrastructure sales to private cloud grew by 6.8% to $2.8 billion, and to public cloud by 1.9% to $3.9 billion.

In comparison, revenue in the traditional (non-cloud) IT infrastructure segment decreased by 6.0% year over year in the first quarter, with declines in both storage and servers, and growth in Ethernet switch.

“A slowdown in hyperscale public cloud infrastructure deployment demand negatively impacted growth in both public cloud and cloud IT overall,” said Kuba Stolarski, research director for computing platforms, IDC. “Private cloud deployment growth also slowed, as 2016 began with difficult comparisons to 1Q15, when server and storage refresh drove a high level of spend and high growth. As the system refresh has mostly ended, this will continue to push private cloud and, more generally, enterprise IT growth downwards in the near term. Hyperscale demand should return to higher deployment levels later this year, bolstered by service providers who have announced new datacenter builds expected to go online this year. As the market continues to work through this short term adjustment period, with geopolitical wild cards such as Brexit looming, end-customers’ decisions about where and how to deploy IT resources may be impacted. If new data sovereignty concerns arise, service providers will experience added pressure to increase local datacenter presence, or face potential loss of certain customers’ workloads.”

 

Read more


WD is not a disk drive company – and not a moment too soon

| storagemojo.com

While you weren’t looking, Western Digital stopped being a hard drive company, morphing into a storage company. Such transitions are nothing new for a company that started life making calculator chips in the 1970s, morphed into SCSI, ATA and graphics in the 80s, and built its disk drive business in the 90s and 00s.

The closing of the SanDisk deal puts an exclamation point on the transition, but it started in 2011 with the acquisition of HGST. The acquirer of IBM’s disk operations has since acquired Skyera (winner of 2012’s content-free announcement award), Amplidata, and server-side flash vendor Virident (wonder how integrating that with SanDisk’s Fusion-io will go?).

Amplidata’s software is the basis for the HGST Active Archive System object store. Since the web site refers to “Systems” we can expect more system products from HGST.

The StorageMojo take
It’s a B-school chestnut: the railroads thought they were in the railroad business – instead of transportation – so they lost out to truckers. Cheap flash IOPS has destroyed the value of HDD-optimized array controllers – which has dramatically reduced the cost of entry into storage systems.

Add to that the advent of sophisticated remote management – much advanced over the 90s’ “call home” features – and much of the rationale for a costly enterprise sales and support force goes away. That further lowers the market entry bar – and rips even more value out of legacy vendor infrastructure – not that Michael Dell is likely to notice for a few years.

Expect to see Seagate follow suit. Samsung and Toshiba might as well, but both are distracted by other problems.

Congratulations to the WD exec team on yet another well-executed pivot to a larger market. This will be fun to watch.

Read more


What Large Storage Vendors Aren’t Telling You About OpenStack And Software-Defined Storage

| informationweek.com

Simple, low-cost commodity storage devices can be used just as effectively, if not more so, than traditional proprietary, high-cost options.

Large hardware manufacturers have been building high quality, dedicated storage appliances and storage arrays for many, many years. Many enterprises have been using them successfully for a long time.

And why not? With so much structured data requiring simple, straightforward file-system storage, the intelligence integrated into these products made storage one less thing to worry about.

Then Came Cloud Computing

Cloud computing was essentially borne out of the realization that great economies of scale could be realized, along with incredible increases in service quality, by centralizing infrastructure under the control of dedicated professionals in a purpose-built data center.

What Is OpenStack?

The homepage at www.openstack.org defines OpenStack as follows:

OpenStack software controls large pools of compute, storage, and networking resources throughout a data center, managed through a dashboard or via the OpenStack API. OpenStack works with popular enterprise and open source technologies making it ideal for heterogeneous infrastructure.

In other words, OpenStack brings cloud computing and automation to businesses while providing enough flexibility for integration with existing and/or commodity components. It’s fair to say that the popularity of any open source project is often determined by its overall usefulness to businesses and how relevant it is to solving a particular problem. OpenStack has quickly become the fastest-growing open source project in history and is arguably one of the most well known. This is not only an indication of how relevant OpenStack is to addressing modern business challenges; after nearly seven years and still gaining momentum, it’s also a testament to OpenStack’s success in actually delivering real-world results.
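
As a small, hedged illustration of the “managed through a dashboard or via the OpenStack API” point in the definition above, the sketch below uses the openstacksdk Python client to list compute and block-storage resources and to create a volume. The cloud profile name "mycloud" and the volume name are assumptions for this example only, not details from the article.

    # Minimal sketch: talk to an OpenStack cloud via the openstacksdk client.
    # Assumes a clouds.yaml profile named "mycloud" (hypothetical).
    import openstack

    conn = openstack.connect(cloud="mycloud")

    # List servers managed by the Compute (Nova) service.
    for server in conn.compute.servers():
        print(f"server: {server.name} status={server.status}")

    # List volumes managed by the Block Storage (Cinder) service.
    for volume in conn.block_storage.volumes():
        print(f"volume: {volume.name} size={volume.size}GB")

    # Create a 10GB volume; on a software-defined backend such as Ceph,
    # this lands on commodity hardware rather than a proprietary array.
    vol = conn.block_storage.create_volume(name="demo-vol", size=10)
    print(f"created volume {vol.id}")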

What The Large Storage Vendors Aren’t Telling You

Despite the fact that OpenStack and software-defined storage are ideally suited for petabyte-scale data storage, the storage appliances and arrays in which most businesses have invested tens to hundreds of thousands of dollars for decades are simply inappropriate in an OpenStack environment. They can certainly be used for legacy and traditional virtualization storage needs, but they lack the openness, flexibility, and scale-out design that OpenStack was meant to exploit.

What the storage vendors are not telling you is that software-defined storage separates intelligence from storage hardware. This means that simple, low-cost commodity storage devices can be used just as effectively, if not more so, than traditional proprietary, high-cost options. Since healthy storage for an OpenStack environment supports OpenStack’s fundamental elements and architecture, it must have flexibility to handle object data, block data, file data, image data, and more, ideally in a unified storage platform. And because the intelligence is now separated, storage hardware becomes a matter of simple COTS (commercial off the shelf) components.

Enter Red Hat

The latest OpenStack release, in combination with open source storage software such as Ceph, creates a flexible, extensible, scalable cloud operating environment. Red Hat OpenStack Platform includes 64 TBs of Red Hat Ceph storage for each unique Red Hat customer account, so that a company’s OpenStack-based infrastructure has the right scale-out storage from day one. Companies also have the option of deploying an even broader OpenStack solution, fully integrated with the new Red Hat Cloud Suite.
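
For a feel of what Ceph looks like to an application underneath OpenStack, here is a minimal, hypothetical sketch that writes and reads an object in a Ceph pool through the python-rados binding. The configuration path and the pool name "demo-pool" are illustrative assumptions; in a Red Hat OpenStack Platform deployment, services such as Cinder and Glance would normally sit between workloads and Ceph rather than applications calling librados directly.

    # Minimal sketch: store and retrieve an object in a Ceph pool using the
    # python-rados binding. Config path and pool name are illustrative.
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx("demo-pool")   # pool must already exist
        try:
            ioctx.write_full("greeting", b"hello from commodity storage")
            print(ioctx.read("greeting").decode())
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()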

To read the full article click the READ MORE button below:

Read more


Avnet Distributing Permabit

| storagenewsletter.com


Permabit Technology Corporation has entered into an agreement with Avnet, Inc. to distribute its SANblox data efficiency appliance.

SANblox is a data reduction appliance for FC-attached storage that pairs Permabit’s industry-leading deduplication with its HIOPS Compression technology. It enables storage products across a range of applications to provide a typical 6:1 data reduction benefit to lower the cost per gigabyte.

“We constantly look for industry-leading innovation that we can integrate with IBM technology for customer solutions,” said Mark Martin, VP, Avnet’s IBM solutions business in the Americas. “Permabit’s SANblox accelerates time to value with its quick installation, enabling our partners to tighten the sales cycle and gain competitive advantage with a proven, field-tested data reduction solution.”

SANblox delivers an 85% reduction in cost and 6X more capacity with less than half a millisecond of latency, giving customers financial relief from data growth. By integrating Permabit’s SANblox data reduction with storage solutions, Avnet enables its customers to store their most important data cost-effectively.

“Data reduction has become a requisite component of today’s storage solutions, attracting market-leading distributors and system integrators, such as Avnet, to join forces with Permabit,” said Tom Cook, Permabit CEO. “We are pleased to be working with Avnet to make SANblox available to their partners’ storage customers who want to increase their effective capacity and lower their effective cost.”

Read more


Permabit Albireo SANblox Review

| StorageReview.com

Permabit Albireo SANblox is a purpose-built data reduction appliance designed to unlock more capacity from Fibre Channel SANs. Permabit estimates users will see at least a 6:1 reduction in the data footprint that resides on the SAN, allowing storage investments to massively alter the standard value proposition. SANblox offers deduplication and compression with thin provisioning, further enhancing the feature set. All of the data reduction is done inline; the SANblox appliance simply slips in ahead of the SAN and virtualizes it, so the SAN and the application are oblivious to the presence of the SANblox solution. SANblox works on any FC storage, regardless of disk configuration – hard drive, hybrid and all-flash solutions will all see the same reduction in data footprint.

A 6X data reduction is pretty commonly accepted as a good mark for standard enterprise mixed application workloads. Depending on the use of the storage though, the numbers can go much higher. VDI use cases can drive the benefit of SANblox skyward by an order of magnitude, and IT shops that use multiple copies of databases for development for instance will see huge data footprint reductions. In fact, simply being able to spin off copies of data for development purposes may enable new business processes, where the cost of deploying complete data sets prior may have been too high. 

For its part, Permabit has been in the deduplication business for a long time. While data reduction wasn’t widely popular outside of backup appliances until recently, flash-based appliances have driven the concept to more mainstream workloads. The deduplication technology behind many of those all-flash appliances is more likely than not a Permabit solution. Deduplication isn’t everywhere though: hard drive arrays and even most hybrids simply aren’t built with that concept in mind, and even many flash arrays offer a limited set of data reduction services. Permabit opens up these services via the SANblox appliance, giving new or existing storage a new set of tricks.

The Permabit Albireo SANblox is shipping now with an MSRP that varies depending on the storage vendor the unit is paired up with and promotional pricing. Obviously pricing arguments work out best when the capacity is large enough to get economies of scale. Permabit included a pricing example to show how traditional flash storage stacks up against an environment with SANblox:

  • Cost for 60 TBs raw: $720,000
  • Cost after data protection overhead: $12/GB
  • Cost for SANblox (6:1 capacity savings): $70,000
  • Cost for 10TB after data protection overhead: $120,000               
  • Total cost before discounts: $190,000
  • Effective cost per GB (storage + SANblox) before discounts: $3.16
  • Net savings: 74%
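
The arithmetic behind that example is easy to reproduce. The short calculation below, a sketch using only the figures from the list above, recovers the same total, effective cost per GB, and net savings.

    # Reproduce the vendor pricing example above (figures from the list).
    effective_tb = 60                 # capacity the workload needs, in TB
    flash_cost_per_gb = 12.0          # $/GB after data protection overhead
    baseline = effective_tb * 1000 * flash_cost_per_gb       # $720,000

    sanblox_cost = 70_000             # SANblox appliance (6:1 savings)
    backend_tb = effective_tb / 6     # only 10TB of backend flash needed
    backend_cost = backend_tb * 1000 * flash_cost_per_gb     # $120,000
    total = sanblox_cost + backend_cost                       # $190,000

    effective_per_gb = total / (effective_tb * 1000)  # ~$3.17/GB (listed as $3.16)
    savings = 1 - total / baseline                    # ~74% net savings
    print(f"total=${total:,.0f}  effective=${effective_per_gb:.2f}/GB  savings={savings:.0%}")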

Permabit Albireo SANblox Specifications

  • CPU: Intel Xeon E5-1650v2
  • RAM: 128 GB
  • FC Ports: 4 x 16 Gb (Emulex)
  • Max. Usable Capacity: 256 TiB
  • Max. Supported LUNs: 256
  • Random IO (4K IOPS):
    • Read: 230,000
    • Write: 111,000
    • Mixed RW70: 180,000
  • Sequential Throughput:
    • Read: 1045MB/s
    • Write: 800MB/s
  • Min Latency:
    • Read: 300us
    • Write: 400us
  • Reliability: All data/metadata is written to backend storage before writes are acknowledged. No data is cached on SANblox.
  • Availability: Seamless High Availability provides transparent failover in under 30s.
  • Serviceability: SMTP alerting and transparent upgrades of software and hardware components.
  • Physical Characteristics: 
    • Form Factor: 1U rackmount
    • Width: 17.2” (437 mm)
    • Weight: 38lbs (16.5kg)
  • Power:
    • Voltage: 100-240V, 50-60 Hz
    • Watts: 330
    • Amps: 4.5 max
  • Operating Temperature: 10°C to 35°C (50°F to 95°F)
  • Operating Relative Humidity: 8% to 90% (non-condensing) 
  • Certifications 
    • Electromagnetic Emissions: FCC Class A, ICES-003, CISPR 22 Class A, AS/NZS CISPR 22 Class A, EN 61000-3-2/-3-3, VCCI:V-3, KN22 Class A
    • Electromagnetic Immunity: CISPR 24, KN 24, (EN 61000-4-2, EN 61000-4-3, EN 61000-4-4, EN 61000-4-5, EN 61000-4-6, EN 61000-4-8, EN 61000-4-11) 
    • Power Supply Efficiency: 80 Plus Gold Certified 

Deduplication

Deduplication is simply the process of preventing duplicate data from taking up valuable space in primary storage. The difference between buying a data reduction appliance specifically for backup as opposed to one designed for primary storage may be confusing to some buyers. Primary storage with data reduction is designed to optimize delivery of performance for random access to fixed-size blocks of data. In order to hit faster performance, primary storage data reduction focuses on fixed chunks of data, typically more, smaller blocks (though there is variation depending on the specific vendor). On the other hand, a deduplication backup appliance focuses more on sequential throughput in order to speed up the backup and restore processes. Backup dedupe appliances, with their sequential focus, are able to process large streams of data and write them to media with variable chunk sizes. On the one hand this means that the appliance can use larger chunks and therefore have fewer chunks to keep track of; on the other hand, if there is a small amount of data that needs to be read back, the entire chunk must be read.
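
To make the fixed-block approach described above concrete, here is a toy Python sketch of fixed 4K-block deduplication: the data is split into 4K chunks, each chunk is hashed, and only chunks whose hash has not been seen before are stored. This is a generic illustration of the technique, not Permabit’s implementation.

    # Toy illustration of fixed-block (primary-storage style) deduplication:
    # split data into fixed 4K chunks, hash each chunk, and store a chunk
    # only the first time its hash is seen. Not any vendor's implementation.
    import hashlib

    BLOCK_SIZE = 4096

    def dedupe(data: bytes):
        store = {}    # hash -> unique block (the "block store")
        recipe = []   # ordered list of hashes used to reconstruct the data
        for off in range(0, len(data), BLOCK_SIZE):
            block = data[off:off + BLOCK_SIZE]
            digest = hashlib.sha256(block).hexdigest()
            if digest not in store:
                store[digest] = block      # new unique block is written
            recipe.append(digest)          # duplicates add only an index entry
        return store, recipe

    def restore(store, recipe) -> bytes:
        return b"".join(store[d] for d in recipe)

    # A workload with many repeated 4K blocks dedupes well:
    data = (b"A" * BLOCK_SIZE) * 5 + (b"B" * BLOCK_SIZE) * 3
    store, recipe = dedupe(data)
    assert restore(store, recipe) == data
    print(f"logical {len(data)} bytes -> physical {sum(map(len, store.values()))} bytes")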

As far as deduplication goes, there are two main ways to carry it out: inline or post-process. Inline deduplication simply means that as data moves toward its target, duplicates are found and never written. Because it is inline, caches and tiers of faster storage in hybrid arrays all benefit from an increase in effective capacity. This is ideal both for saving disk space and for saving writes to flash media (which can only absorb so many writes before beginning to degrade). On top of these benefits, inline also allows for immediate replication for data protection. The downside of inline deduplication is the almost unavoidable performance hit at write time.

Post-process deduplication means the deduplication process begins once the data has hit its storage target, or when it hits a storage cache. While this can skip the initial performance hit at write time, it does introduce other issues. For one, duplicates take up storage space while they wait for the deduplication process to begin, or to catch up if it is always running. If the data is sent to a cache first, the cache can rapidly fill up. As a result, hybrid arrays may only see capacity savings at the lowest tier. Writing everything to the storage media before dedupe can also take a larger toll on flash. And while the initial performance hit may be skipped, the deduplication process will still consume resources once it begins post-process.

Performance is usually the largest concern from a vendor perspective, as they don’t want their appliance running slower than their competitors’ (even though it will use less disk space overall). The performance hit, and the overall cap on performance, comes from a combination of the resources available and the specific software being used within the given appliance. While performance can also be a concern for customers, and a major one at that, they are also worried about data loss, as the deduplication process changes how data is stored versus how it was initially written.

So where does Permabit fit into this deduplication landscape? Permabit sits in front of a SAN and deduplicates data as it moves toward its target. Permabit uses an inline, multi-core-scalable, low-memory-overhead method of deduplication. Looking specifically at the device we are testing, the Permabit Albireo SANblox, it can index data in need of deduplication at 4K granularity in a primary storage environment. So the Permabit Albireo SANblox can take 256TB of provisioned LUNs and present it as 2.5PB of logical storage, yet it does so in only 128GB of RAM. This allows the device to tackle two aspects of performance: reading back smaller chunks of data and using fewer resources. Another way to address performance with Permabit is to embed its software into an appliance. Permabit states that customers using this method have seen performance greater than 600,000 IOPS.
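
A quick back-of-the-envelope check, using only the figures quoted above (256TB of provisioned LUNs indexed at 4K granularity in 128GB of RAM), shows why that memory footprint is notable: it works out to roughly two bytes of index RAM per 4K block.

    # Back-of-the-envelope check of the index memory figures quoted above.
    capacity_bytes = 256 * 2**40     # 256TB of provisioned LUNs (binary TB)
    block_size = 4096                # 4K indexing granularity
    ram_bytes = 128 * 2**30          # 128GB of RAM

    blocks = capacity_bytes // block_size
    print(f"{blocks:,} blocks tracked, ~{ram_bytes / blocks:.1f} bytes of RAM per block")
    # -> 68,719,476,736 blocks tracked, ~2.0 bytes of RAM per block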

It is easy for any company to say that their device (in this case, deduplication) is wonderful at what it does. But it is always better when some proof can be provided in a context that can be understood by customers and by vendors looking to combine Permabit with their SAN appliances. A few years ago Permabit ran a study with Enterprise Strategy Group (ESG). The study looked at data reduction ratios in various environments and compared compression alone, deduplication alone, and compression and deduplication combined.

Setup and Configuration

The SANblox appliance is a 1U server that essentially inserts itself into the data path for LUNs that get routed through. Of course not all LUNs have to go through the SANblox. SANblox units are usually deployed in HA pairs, and depending on needs or capabilities of the underlying storage, multiple HA pairs can be used to address any storage or performance requirement. 

Getting the SANblox online is pretty quick and easy. You assign the system two IP addresses: one for IPMI, and a second for the web management and SSH interface. When it comes online, you grab the WWNs for the two backend FC ports (the ones that will connect to your storage system) and use those to create a separate FC zone.

At the array level you provision your storage so that you have one 1GB LUN for device settings and multiple LUNs for your primary data storage. All of the metadata is also stored on these volumes; the SANblox does not persist any data within the appliance, which is made possible by its synchronous, inline functionality. For our screenshot examples we used our DotHill Ultra48 array, configuring one 1GB LUN for SANblox settings and two 1TB LUNs for the SANblox storage pool.

With the storage configured, the SANblox automatically detects and configures itself, using the 1GB LUN for device settings and presenting the other LUNs for storage pool creation. In this case it groups all of them together when creating the pool and lets you select whether deduplication and compression are on or off.

With the pool created, the Permabit SANblox by default allows users to address the physical storage at a 10:1 logical-to-physical ratio. So 1TB raw becomes 10TB of usable space when creating volumes. In our case it mapped the 1.8TB of raw storage as 18TB of usable storage that we could assign.

With the underlying storage sorted out, the rest of the interface works much like that of a basic storage array. You can create LUNs, assign them to hosts or host groups, and define rules such as read-only or read/write access.

Performance

Not all high-performance storage offers deduplication. The X-IO ISE G3 family of flash arrays is a good example: the recently reviewed X-IO ISE 860 is designed largely as a performance play. X-IO made the conscious decision not to layer on too many features, all of which require more RAM and CPU while diminishing the ability of the array to deliver leading-edge performance. That said, there are use cases where applications must trade off performance for capacity, and with the cost of flash still relatively high on a per-TB basis, deduplication can alter the economics of performance storage dramatically enough to tackle the cost concern while retaining high performance characteristics. With this as the backdrop, we deployed the SANblox in front of the ISE 860 to gauge its capabilities. With the primary focus being how deduplication affected application performance, we leveraged our Microsoft SQL Server, MySQL Sysbench and VMware VMmark testing environments to stress a single SANblox appliance. Each of these tests operates with multiple simultaneous workloads hitting a given storage array at the same time, giving a data reduction system such as the Permabit SANblox a fantastic opportunity to reduce the data footprint of the deployed workload.

One important element to understand when it comes to deduplication and performance is that when you reduce your data footprint you also increase the I/O load on your backend storage. Throughput in many cases can be reduced, since you are sending much less data than before, but small-block random I/O requests increase substantially. This is one reason data reduction and flash go so well together, but it also means that at a certain point you can and will still saturate your backend storage in certain scenarios. Luckily, the SANblox’s patented technology keeps data reduction overhead to a minimum, leaving room to scale or to use the array natively for other applications. For large environments or platforms that have a lot of I/O potential, users can scale the number of SANblox appliances for increased performance and capacity. While we were given a single appliance to review, we would most likely have seen higher measured performance with two pairs working together instead of just one.

StorageReview’s Microsoft SQL Server OLTP testing protocol employs the current draft of the Transaction Processing Performance Council’s Benchmark C (TPC-C), an online transaction processing benchmark that simulates the activities found in complex application environments. The TPC-C benchmark comes closer than synthetic performance benchmarks to gauging the performance strengths and bottlenecks of storage infrastructure in database environments.

This test uses SQL Server 2014 running on Windows Server 2012 R2 guest VMs, being stressed by Dell’s Benchmark Factory for Databases. While our traditional usage of this benchmark has been to test large 3,000-scale databases on local or shared storage, in this iteration we focus on spreading out four 1,500-scale databases evenly across our X-IO ISE 860 to better illustrate the aggregate performance inside a 4-node VMware cluster. 

Second Generation SQL Server OLTP Benchmark Factory LoadGen Equipment

  • Dell PowerEdge R730 VMware ESXi vSphere Virtual Client Hosts (2)
    • Four Intel E5-2690 v3 CPUs for 124GHz in cluster (Two per node, 2.6GHz, 12-cores, 30MB Cache) 
    • 512GB RAM (256GB per node, 16GB x 16 DDR4, 128GB per CPU)
    • SD Card Boot (Lexar 16GB)
    • 2 x Mellanox ConnectX-3 InfiniBand Adapter (vSwitch for vMotion and VM network)
    • 2 x Emulex 16Gb dual-port FC HBA
    • 2 x Emulex 10GbE dual-port NIC
    • VMware ESXi vSphere 6.0 / Enterprise Plus 4-CPU
  • Dell PowerEdge R730 Virtualized SQL 4-node Cluster

    • Eight Intel E5-2690 v3 CPUs for 249GHz in cluster (Two per node, 2.6GHz, 12-cores, 30MB Cache) 
    • 1TB RAM (256GB per node, 16GB x 16 DDR4, 128GB per CPU)
    • SD Card Boot (Lexar 16GB)
    • 4 x Mellanox ConnectX-3 InfiniBand Adapter (vSwitch for vMotion and VM network)
    • 4 x Emulex 16Gb dual-port FC HBA
    • 4 x Emulex 10GbE dual-port NIC
    • VMware ESXi vSphere 6.0 / Enterprise Plus 8-CPU

Each SQL Server VM is configured with two vDisks, one 100GB for boot and one 500GB for the database and log files. From a system resource perspective, we configured each VM with 16 vCPUs, 64GB of DRAM and leveraged the LSI Logic SAS SCSI controller.

Looking at the TPS performance change between running our SQL TPC-C workload on the X-IO ISE 860 versus through the Permabit SANblox, the drop was fairly small, from 12,564 TPS to 12,431 TPS.

Changing the focus from transactional performance to latency, though, we see the impact of data reduction on our workload. With workloads operating through the SANblox, latency increased from a 13ms average to an 84ms average, an increase of just under 5.5x. Permabit explained that we may be nearing the maximum load for a single SANblox pair, and that slightly reducing the workload or adding a second SANblox could reduce the average latency significantly.

The Sysbench OLTP benchmark runs on top of Percona MySQL leveraging the InnoDB storage engine operating inside a CentOS installation. To align our tests of traditional SAN with newer hyper-converged gear, we’ve shifted many of our benchmarks to a larger distributed model. The primary difference is that instead of running one single benchmark on a bare-metal server, we now run multiple instances of that benchmark in a virtualized environment. To that end, we deployed 4 and 8 Sysbench VMs on the X-IO ISE 860, 1-2 per node, and measured the total performance seen on the cluster with all operating simultaneously. We plotted how 4 and 8 VMs operated both on the flash array raw and through the Permabit SANblox.

Dell PowerEdge R730 Virtualized Sysbench 4-node Cluster

  • Eight Intel E5-2690 v3 CPUs for 249GHz in cluster (Two per node, 2.6GHz, 12-cores, 30MB Cache) 
  • 1TB RAM (256GB per node, 16GB x 16 DDR4, 128GB per CPU)
  • SD Card Boot (Lexar 16GB)
  • 4 x Mellanox ConnectX-3 InfiniBand Adapter (vSwitch for vMotion and VM network)
  • 4 x Emulex 16Gb dual-port FC HBA
  • 4 x Emulex 10GbE dual-port NIC
  • VMware ESXi vSphere 6.0 / Enterprise Plus 8-CPU

Each Sysbench VM is configured with three vDisks, one for boot (~92GB), one with the pre-built database (~447GB) and the third for the database that we will test (400GB). From a system resource perspective, we configured each VM with 16 vCPUs, 64GB of DRAM and leveraged the LSI Logic SAS SCSI controller.

Our Sysbench test measures average TPS (Transactions Per Second), average latency, as well as average 99th percentile latency at a peak load of 32 threads.

With Sysbench running natively on the X-IO ISE 860 with an 8VM workload, we measured an aggregate of 6,568 TPS across the cluster. With the SANblox added into the mix, that dropped to 2,971 TPS. With a load of 4VMs, we saw less of a drop, going from 4,424 TPS down to 2,752 TPS. The overhead of operating through the data reduction appliance therefore came to 55% and 38% respectively. One critical aspect, though: this overhead figure doesn’t affect LUNs served from the storage array directly. Because the SANblox is an external system, users can opt to route higher-priority traffic to the array itself, albeit without the cost benefits of data reduction.
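
Those overhead percentages follow directly from the TPS figures quoted above; as a quick check:

    # Reproduce the Sysbench overhead figures quoted above.
    native_8vm, sanblox_8vm = 6568, 2971
    native_4vm, sanblox_4vm = 4424, 2752
    print(f"8VM overhead: {1 - sanblox_8vm / native_8vm:.0%}")   # ~55%
    print(f"4VM overhead: {1 - sanblox_4vm / native_4vm:.0%}")   # ~38%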

Comparing average latency between our configurations, we saw 4VM average latency increase from 29 to 47ms, while 8VM average latency picked up from 39 to 86ms.

Looking at 99th percentile latency with the SANblox added into our environment, we measured an increase from 57 to 89ms with 4VMs and 83 to 178ms with 8VMs.

VMmark Performance Analysis

As with all of our Application Performance Analysis we attempt to show how products perform in a live production environment compared to the company’s claims on performance. We understand the importance of evaluating storage as a component of larger systems, most importantly how responsive storage is when interacting with key enterprise applications. In this test we use the VMmark virtualization benchmark by VMware in a multi-server environment.

VMmark by its very design is a highly resource intensive benchmark, with a broad mix of VM-based application workloads stressing storage, network and compute activity. When it comes to testing virtualization performance, there is almost no better benchmark for it, since VMmark looks at so many facets, covering storage I/O, CPU, and even network performance in VMware environments. 

Dell PowerEdge R730 VMware VMmark 4-node Cluster Specifications

  • Dell PowerEdge R730 Servers (x4)
  • CPUs: Eight Intel Xeon E5-2690 v3 2.6GHz (12C/24T)
  • Memory: 64 x 16GB DDR4 RDIMM
  • Emulex LightPulse LPe16002B 16Gb FC Dual-Port HBA
  • Emulex OneConnect OCe14102-NX 10Gb Ethernet Dual-Port NIC
  • VMware ESXi 6.0

ISE 860 G3 (20×1.6TB SSDs per DataPac)

  • ​Before RAID: 51.2TB
  • RAID 10 Capacity: 22.9TB
  • RAID 5 Capacity: 36.6TB
  • List Price: $575,000

When configuring the Permabit SANblox for testing with VMware’s VMmark, we optimized the way data was distributed. Traditionally with a given array, VMs are deployed in an “all or nothing” configuration, meaning the data is moved completely onto the storage array being tested. Because of the unique way the SANblox sits in front of the storage device, we were able to use the array directly for some write-intensive workloads while routing the majority of the OS disks and VMmark workloads, where deduplication savings were greatest, through the SANblox. In our specific configuration we migrated all VMs onto the SANblox, with the exception of the individual 40GB Mailserver mailbox vDisks, which we positioned on the X-IO ISE 860 directly.

With our optimized configuration, we were able to reach a total of 8 tiles with VMmark using the Permabit SANblox in front of the X-IO ISE 860. This compares to a peak of 26 tiles we previously measured hosted directly on the array. From a performance standpoint, running our workload through the SANblox imposed an overhead of 70%. In terms of data reduction though, the space consumed stayed flat at 1 tile; migrating additional tiles onto the array had no appreciable impact on space consumed. This is one scenario where having a second HA pair of SANblox appliances would improve overall performance.

Conclusion

The Permabit Albireo SANblox is an easy-to-deploy appliance that offers tremendous benefit by vastly reducing an organization’s data footprint. Permabit states that the Albireo SANblox can be dropped in front of any Fibre Channel SAN and customers can see up to a 6:1 reduction in data footprint. All of the data reduction happens inline, with the SAN unaware that the SANblox exists. Along with the typical 6:1 data reduction, the SANblox also offers thin provisioning and compression. Permabit is a longstanding and widely respected name in deduplication and can help customers see potentially huge footprint reductions depending on the workload.

On its face, deduplication sounds wonderful. Organizations can take full advantage of their purchased storage instead of letting it fill up with duplicates, and even older disk-based storage can find new life. The fact that the Permabit Albireo SANblox works regardless of the configuration that sits behind it is another shining reason to consider it. The biggest drawback to deduplication is that performance must take a hit, and in some instances the hit can be pretty significant. Instead of viewing this as a deal breaker, potential customers should realize that while performance compared to raw all-flash takes a hit, it is still faster than traditional HDD storage arrays that play in a similar price bracket.

If an organization needs ultra-high performance and extremely low latency more than it needs to utilize its full storage investment, it should skip deduplication. However, if an enterprise can take the performance hit and still function within its defined parameters, then by all means it should look into a device such as the Permabit Albireo SANblox. There is a compromise as well: a third option would be to run less performance-critical data (such as development) through the SANblox while letting production data go through with no deduplication. A similar line of thinking needs to go into how one looks at our performance results. The comparison is less of “look how much better the X-IO performs without the SANblox” and more a way to present the type of performance one could expect when applying deduplication to a SAN.

As noted, whether to add the appliance to a storage stack depends on a number of variables. Ultimately what Permabit offers is an extension of storage capacity and longevity, especially where workloads don’t have a pressing performance need. In today’s IT environment, where tasks like frequently spinning up databases for development are becoming standard practice, the SANblox enables this with no data footprint penalty. Integration into the enterprise is simple as well, and should tuning and customization be required, the appliance allows for it.

Pros

  • Simple integration into storage architecture
  • Aligns with modern development practices
  • Can be turned on/off by LUN

Cons

  • Deduplication has overhead; latency-sensitive applications may need to bypass the appliance

Bottom Line

Permabit Albireo SANblox easily integrates into existing systems and performs inline data reduction, enabling organizations to utilize the full potential of their storage investments. The data reduction can be turned on or off, or applied only to certain workloads, in order to maximize both performance and capacity.

Permabit Albireo SANblox Product Page


Read more


Permabit SANblox Among Finalists for 2015 Products of the Year Award

| Yahoo Finance

Permabit Technology Corporation, the recognized leader in data reduction technology, today announced that it is among nine finalists in Storage magazine and SearchStorage.com’s 2015 Products of the Year competition in the data storage management tools category. The company was honored for its SANblox™ data reduction appliance.

SANblox is a unique data reduction appliance for Fibre Channel-attached storage that pairs Permabit’s industry-leading deduplication index with its best in class HIOPS Compression™ technology. It provides inline data reduction across a wide range of applications, including virtual server, VDI, copy data, containers, databases (OLTP and data warehouse), analytics and Big Data environments.

SANblox can be deployed in a matter of hours in any storage (all-flash, hybrid, or HDD) environment. By instantly increasing the effective capacity and reducing the effective cost of storage, SANblox inline data reduction delivers immediate savings.

“We are thrilled to be among the finalists for one of the industry’s most prestigious product of the year awards,” says Tom Cook, Permabit CEO. “TechTarget’s annual recognition of the top products in data storage is highly regarded among vendors, channel partners and customers alike and to have SANblox selected as a potential winner is a testament to the importance of implementing cutting-edge inline deduplication, compression and thin provisioning capabilities in all SAN deployments.”

Read more


Data storage vendors in the hot seat in 2016 outlook

| searchstorage.techtarget.com

A tumultuous 2015 places data storage vendors under great pressure heading into 2016. With worldwide networked storage sales declining, the cloud eating into on-premises purchases, and the public markets unclear, the entire industry faces an uncertain year.

But for a dozen or so data storage vendors — including a few of the biggest out there — 2016 will be especially pivotal.

Merged and divided: EMC/Dell, Veritas, HPE

EMC and Dell stand at the center of the storage universe heading into their $67 billion merger. Because the deal won’t close until late 2016, customers, competitors and Dell and EMC employees will spend most of the year wondering what the combined company will look like. No one can be sure what products will stay and what will go, and every bit of news coming out before and immediately after the close will be closely scrutinized for hints of the bigger picture. Historically, such uncertainty ahead of huge mergers slows those companies’ sales. That could leave an opening for rivals this year.

The storage world reacted well to Symantec’s decision to spin off its backup and storage management products into a new Veritas. Veritas on its own can concentrate on storage rather than serve as part of a larger security company. But the Carlyle Group, which bought Veritas for $8 billion last August, is a newcomer to storage. Veritas has been operating as a separate company from Symantec since October, and the Carlyle deal is expected to close in January. Because the Carlyle Group has no track record in storage, it might take much of 2016 before we get a feel for its long-term plans for Veritas.

Storage will play a bigger role inside Hewlett Packard Enterprise (HPE) than it did before HP broke into two companies. HPE CEO Meg Whitman says she is excited about the 3PAR array platform, which has been a strong performer among a laggard HP storage business for years. Now HPE has to prove it can put enough strong storage technology around 3PAR to take advantage of the EMC-Dell uncertainty and buck the trend of declining HP storage revenue.

IBM, NetApp, CommVault — big data storage vendors on the decline

IBM’s storage business has been in a freefall for years. There is an opening for Big Blue to rebound in 2016, though. IBM has done better in the flash market than with disk, and the appetite for all-flash arrays is growing rapidly. Then there is the expected pause in EMC sales ahead of the Dell merger. Still, a storage resurgence would require a U-turn from IBM.

NetApp had a rocky 2015, changing CEOs from Tom Georgens to George Kurian in April after a string of seven poor sales quarters. NetApp has been hurt by lack of a pure all-flash platform, and customers have resisted the disruptive upgrade required to switch from its flagship Data ONTAP operating system to the clustered Data ONTAP version. Kurian made his first big move in December, acquiring all-flash startup SolidFire for $870 million. NetApp’s main focus entering 2016 is around hybrid cloud implementations, but it’s hardly alone there.

Backup vendor Commvault was considered a rising threat to giants such as EMC and Symantec for years, steadily growing its revenue in double digits year over year. That growth stopped in mid-2014, and the vendor’s revenue declined overall in 2015. Commvault moved to make its software less expensive and less complicated after finding itself competing with larger data storage vendors that were able to cut prices to compete, and smaller, focused vendors such as Veeam and Actifio. It dropped the Simpana brand with its latest release, placing its 2016 hopes on the newly released Commvault Data Platform.

Nimble and Pure at the crossroads

Nimble Storage was hitting on all cylinders after becoming a public company in late 2013 until its revenue fell below its forecast for the third quarter of 2015. Overnight, Nimble’s stock price was cut in half as it pushed back its target date for profitability. Nimble got stung by the lack of an all-flash array, which it is scrambling to design in hopes of righting the ship.

All-flash vendor Pure Storage showed strong revenue growth in its first quarter as a public company, and is among the top three in all-flash market share with EMC and IBM. But Pure continues to lose money at an alarming rate ($28.1 million in the most recent quarter) and doesn’t expect to be profitable until 2018. And as Nimble found out, one bad quarter can do a lot of damage to a young public company.

Kaminario, last of the flash startups

The industry was overflowing with pure all-flash startups a few years back but no more. Violin Memory and Pure Storage went public, XtremIO, Texas Memory Systems, SolidFire, Whiptail and Skyera were acquired, and Nimbus vanished. That leaves Kaminario as the only all-flash private vendor entering 2016. Kaminario could be a tempting acquisition target, or it could be on the road to oblivion if it doesn’t drastically increase its footprint in the flash market in 2016.

Last chance for Violin, FalconStor, X-IO?

Violin helped create the flash storage market, but struggles to sell any arrays now that the market is taking off. Violin had $6.3 million in product revenue during its last full quarter in 2015, and has hired bankers to pursue strategic alternatives. The money-losing vendor likely needs to be acquired or find a deep-pocketed partner to survive.

FalconStor tried unsuccessfully to sell itself a few years back, and now is trying to revive its hopes around a new FreeStor data protection and storage management platform. FalconStor has been lining up OEM partners, but it will need a significant revenue jolt in 2016 to have a long-term future.

X-IO Technologies named Bill Miller CEO and shifted its strategy in 2015. That’s not really news — the company has had three CEOs in four years and 10 in its 20-year history. What is notable is the vendor stacked its ISE storage blocks into a full SAN platform called Iglu in a directional change. The Iglu Blaze scale-up system launched in July. A scale-out Iglu Inferno was expected in 2015 but hasn’t made it out yet. Iglu might be the final chance for this decades-old vendor that continues to lose money.

Read more


Storage suppliers: Who reached for the stars, who burned up in orbit?

| The Register

Storage year in review: winners, losers, refugees, death, near-death, and a miraculous recovery … all these characterised the year for storage suppliers in 2015. They experienced earthquake-level changes as the movement of tectonic storage plates like flash, the cloud, server-based storage and activist investors shook old assumptions to the core.

Cash gushed from VCs into startups, and CEOs came and went in a year of titanic changes climaxing with EMC running (or about to run) into the arms of Dell in … but there was more to come.

One acquisition that did not happen was Cisco buying a storage business. Some still expect it to happen.

There were two massive de-mergers, as HP split into HP (printers and PCs) and HPE (enterprise HW, SW and services), signalling Meg Whitman’s biggest attempt to kick the resistant, obstructive HP back into growth, with the dreadful cackle of ousted CEO and PC spinoff-promoting Leo Apotheker’s laughter heard off stage. Meanwhile, the post-Autonomy acquisition debacle continued in lawsuit hell.

Symantec split into a private-equity-owned Veritas storage business and the original Symantec security business. In both the HP and Symantec cases we’re now asking, “Okay, you split. What’s next? You got your dedicated management focus. Are you going to run the existing ship better or do something new?”

Spectacular, steady state, dead, near-dead, and back from the dead

Veeam had a stand-out spectacular year with great growth in revenues and customers as CEO Ratmir Timashev continued giving electroconvulsive therapy to what had been a staid backup market. The end-point backup market saw both Code42 and Druva moving ahead, adding security and integration with central data centres.

Tarkan Maner’s Nexenta made good progress. There were relatively steady state suppliers such as Brocade, DataCore, X-IO and SpectraLogic, with DataCore possibly having a spectacular benchmark result looming using its parallel IO technology.

SpectraLogic continued showing everyone else how you manage a supposedly declining tape market business by adding integrated disk drive-based products such as Arctic Blue.

Commvault found itself in trouble but thinks it’s ready for an upturn after fixing lots of problems.

Coraid, the supplier of AoE protocol storage, died. AoE inventor Brantley Coile started up his own business to resurrect the technology’s development, and OutpaceIO combined a French system integrator with a Coraid support organisation in Georgia to support existing Coraid customers and develop the technology towards becoming a unified, multi-protocol storage back-end.

There are three not-yet-dead-and-hanging-on suppliers. Violin Memory is one and it has had a colossally bad year, and is now effectively up for sale.

Summing up

So, that was storage in 2015, at the media, systems, applications, vision and supplier levels. What a thrilling ride! What a roller coaster!

Storage is now a vastly more complicated game, with mainstream incumbent suppliers failing to dislodge upstart newcomers and their technology, but not rolling over before them either.

The old simple monolithic or dual-controller array model has given way to a much wider spectrum of storage product tech, from server-centric VSANs and HCIAs, to enterprise arrays, scale-out filers, object storage, cloud storage, Big Data (HDFS) storage, all-flash arrays, hybrid arrays, and software-only storage.

The three driving trends forcing change during the year have been flash, the cloud, and multiple forms of storage SW aimed at fixing silo sprawl and other ills. It has been one of the most challenging years in history for storage suppliers, one of the most creative for storage startups, and a difficult one for customers, because the industry and its technologies are in turmoil with no clear way forward visible yet.

Perhaps 2016 will change that. But this is storage, and clarity is a rare commodity.

Read more


Amazon Starts 2016 With Three Price Cuts

| informationweek.com

The race to zero continues!

Amazon Web Services started the new year with a round of price reductions on popular EC2 instance types. The C4, M4, and R3 were all reduced 5%, effective Jan. 1, for On-demand, Reserved, and Dedicated host instances.

The price reductions applied to AWS GovCloud as well as its most popular commercial installations: US East (Northern Virginia), US West (Northern California and Oregon), Europe (Dublin and Frankfurt), and Asia Pacific (Tokyo, Singapore, and Sydney).

Amazon, already the largest cloud supplier, appears ready to continue the competition with the likes of Microsoft, Google, and IBM, each of which would like to enlarge its own base of cloud customers. In some reports, Microsoft and IBM are growing faster than AWS percentage-wise but are starting from much smaller customer bases.

The last time Amazon lost the lead in cloud service price reductions was March 2014, when Google led the charge, matched by Amazon Web Services and Microsoft a few days later. Since then, neither Google nor Microsoft, despite their deep profits, has provoked another round of cuts by leading the reductions themselves. Both try to stay in step, however, with Amazon pricing.

If Amazon intends to continue cutting prices through 2016 on either additional instance types or its most frequently used instance types it’s bad news for competitors. It means that Amazon isn’t feeling the pain that markets sometimes inflict on companies that show slender or nonexistent profits. Amazon’s cloud unit, AWS, is profitable, but the company as a whole averages a little better than break-even as it continues to expand and make heavy investments for the future.

 

 

Read more


Red Hat: Taking The Market By Storm

| Seeking Alpha

Red Hat Inc. (NYSE:RHT) is undoubtedly the star pick among software stocks, and rightly so based on its highly impressive Q3 results. We reiterate RHT with an Outperform rating and a Target Price (TP) of $85 per share based on Red Hat’s strategic investments in the growing realm of cloud computing, its upselling abilities, the continued growth of its core products, and the increasing demand for Linux.

  • Red Hat Inc. enjoyed a solid 3Q15, reporting $523.6 million in revenue and $0.48 in EPS, versus the expected $521.5 million and $0.47 respectively.
  • The impressive results were driven largely by continued demand for RHEL, along with cross selling of emerging technologies.
  • We believe that company’s strategic investments in R&D especially in the field of enterprise hybrid cloud computing will become the key growth catalyst.
  • This leads us to reiterate Outperform rating on stock with a TP of $85 per share.

RHEL, the gem of Red Hat

Red Hat Enterprise Linux (RHEL) emerged once again as the real gem for Red Hat Inc., providing subscription revenue growth of 18% year-over-year in constant currency terms. Not only did demand surge for new RHEL subscriptions, but the solution was once again successful in bringing home a good bulk of recurring revenue in support and maintenance. Red Hat also expanded its relationship with Lenovo to provide OpenStack Platform with Lenovo hardware. This partnership is a step forward in enhancing OpenStack’s potential for more secure, reliable and flexible solutions, especially at a time when the market eagerly looks towards building hybrid cloud deployments.

In addition to RHEL’s strong showing in the quarter, 70% of the top 30 deals the organization struck in 3Q included one or more application development and emerging technology components, enabling subscription revenue from application development and new technologies to grow by 48% year over year in constant currency. This reaffirms the growth potential of Red Hat’s product line while providing additional channels of revenue growth.

The Cloud World

Along with the increase in demand for RHEL and application development, the company has a forward-looking approach as it aims to strategically encourage the trend and market for enterprise (hybrid) cloud computing solutions. It is a safe bet that after the global success and adoption of cloud storage, the next phase of cloud growth will be driven by enterprise adoption of hybrid cloud. Enterprise cloud computing is a value proposition solid enough to transform the business and operational dynamics of the IT industry. Moreover, it would become a long-term revenue-generating medium for Red Hat, and Red Hat is eager to bank on that. We believe these factors make RHT the eye candy of the financial market, especially considering its expanding portfolio of products and its unparalleled experience in enterprise solutions, which can help it exploit the market potential of the growing adoption of OpenStack and the cloud-computing trend.

Another factor to consider in this regard is that Red Hat is a visionary industry leader, and the move to launch a strategic partnership with Microsoft in cloud computing is evidence of this futuristic vision. Both Microsoft and Red Hat have set the field for the advancement of the hybrid cloud world and are working together towards the common goal of enabling enterprises to realize the benefits of this transformation of IT.

Expectations for the future

Hands down, RHT is a long-term investment proposition, and one with promising fruits. Here are the three reasons why Red Hat has the growth momentum for FY16-17.

For one, RHEL’s demand is expected to grow in the long term as enterprises continue to adopt Linux as the core operating platform to enable the integration of connecting applications and technologies. This comes at a time when Red Hat is already in the driver’s seat, dominating the Linux server market with more than 70% market share as of the most recent quarter, and it shows every sign of continuing to expand that share in the coming quarters.

Secondly, advancements in enterprise cloud computing will turn out to be a real game changer for RHT, given that it provides customers an edge as it has done with RHEL and RHEV. Throw in the firm’s competency and growing presence in cloud-based software solutions like OpenStack and OpenShift and you have got the eye-catcher among software stocks.

Last but not least is the core competency of RHT, where it continues to promote the growth of open source development. This is particularly important because, despite increasing competition, RHT upholds its dedication to the growth of this medium and continues to share its source code. The medium has benefited the organization in many ways, the most obvious being that the increasing adoption of open source in mainstream IT has enabled Red Hat to impressively upsell its newer products like Gluster storage, JBoss and virtualization, thus driving billings growth. Moreover, unlike traditional software companies like Oracle Corp. (NYSE:ORCL), which follow closed software development models, Red Hat Inc.’s open source model has given it the edge to acquire the extra speed with which to innovate.

Read more