Data Efficiency in the News


Top Priority for IT Investments: Improve Service to Quickly Meet Business Needs

| Stock Market

The research, conducted across 125 IT decision makers in the US, revealed the number one priority for IT investments: Improve service to quickly meet business needs. Reducing risk was the second major priority, and the third, as stated by respondents, was realizing higher levels of performance to support mission-critical applications.

“The findings of this research clearly indicate that the number one priority for IT decision makers is ensuring that IT becomes an enabler of business and not a hindrance,” said Joshua Yulish, president and CEO of TmaxSoft, Inc. “To this end, IT must provide open systems that afford greater flexibility and speed at a lower cost. Businesses are looking to not only improve service to respond to business needs, but innovate faster and realize higher levels of performance to support key objectives.”

Dave Lasseter, VP Power Systems Sales at Mainline, an IBM and TmaxSoft partner, added: “These findings mirror what we are seeing in the market today. IT must take the initiative in delivering solutions and services that support innovation and enable the business to adapt to changes in strategy, market conditions, and regulatory requirements.”

Key findings include:

  • The top priority among 24% of respondents was improving service to dynamically respond to business needs.
  • The second most commonly cited top priority was ensuring uptime (21%), and the third was the need to reduce administrative cost and burden by consolidating systems (19% of respondents).
  • The top-rated second priority for IT decision makers was reducing risk (identified by 21% of the sample), followed by realizing higher levels of performance to support mission-critical applications (18% of respondents).

Open systems are not the only requisite. There is also a need for Linux-based data reduction that can deliver enterprise-wide operating and storage efficiency. The result is lower data bandwidth needs, improved data density (fewer servers and storage devices), and a smaller data center footprint, all of which improve operating efficiency.
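As a rough illustration of the density argument, data reduction directly shrinks the number of devices, and therefore the footprint, needed for a given dataset. The figures below are hypothetical placeholders, not numbers from the research cited:

```python
# Back-of-envelope sketch of how data reduction shrinks device count.
# Dataset size, device capacity, and the 2:1 reduction ratio are all
# illustrative assumptions, not figures from the survey above.
import math

def devices_needed(dataset_tb, device_tb, reduction_ratio=1.0):
    """Number of storage devices required after applying a data reduction ratio."""
    return math.ceil(dataset_tb / (device_tb * reduction_ratio))

dataset = 1000   # TB of logical data
device = 8       # TB per drive

before = devices_needed(dataset, device)        # 125 drives
after = devices_needed(dataset, device, 2.0)    # 63 drives
print(f"{before} drives -> {after} drives with 2:1 reduction")
```

At a 2:1 reduction ratio the drive count roughly halves, which is where the density and footprint gains come from.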


Read more


OpenStack expands both its customer reach and deployment size

| ZDNet

In 451 Research’s recent report on OpenStack adoption among enterprise private cloud users, the firm found that 72 percent of OpenStack-based clouds are between 1,000 and 10,000 cores and that three-fourths of users choose OpenStack to increase operational efficiency and app deployment speed.

They also found that OpenStack is not just for large enterprises. Almost two-thirds of respondents (65 percent) are in organizations of between 1,000 and 10,000 employees.

The survey also uncovered that OpenStack-powered clouds have moved beyond small-scale deployments: approximately 72 percent of enterprise deployments are between 1,000 and 10,000 cores in size, and 5 percent top the 100,000-core mark. So while OpenStack may be expanding its reach into smaller companies, it is being used for larger deployments.

Curiously, OpenStack users are adopting containers at a faster rate than the rest of the enterprise market: 55 percent of OpenStack users also use containers, compared with 17 percent of other cloud users. What’s odd about this, as Mark Shuttleworth, founder of Canonical and Ubuntu, pointed out to me at an OpenStack Summit meeting, is that OpenStack is not especially well-suited for containers.

Well, not yet, anyway. But it will be, with vendors moving the technology forward and customers demanding it.

OpenStack is also moving along to real enterprise workloads rather than just testing and development work. These include infrastructure services (66 percent), business applications and big data (60 percent and 59 percent, respectively), and web services and ecommerce (57 percent).

You’ll also find OpenStack clouds running in a wide variety of businesses. While 20 percent cited the technology industry, manufacturing (15 percent), retail/hospitality (11 percent), professional services (10 percent), healthcare (7 percent), insurance (6 percent), transportation (5 percent), communications/media (5 percent), wholesale trade (5 percent), energy and utilities (4 percent), education (3 percent), financial services (3 percent), and government (3 percent) were all represented.

Why are so many businesses across so many industries adopting OpenStack? Simple. Increasing operational efficiency and accelerating innovation/deployment speed are the top business drivers for enterprise adoption of OpenStack, at 76 and 75 percent, respectively. Supporting DevOps follows closely, at 69 percent. Reducing cost and standardizing on OpenStack APIs came in behind those, at 50 and 45 percent, respectively. In addition to operational efficiency, data efficiency is also becoming table stakes in today’s data center, not just to reduce effective storage costs but also to increase data density, which reduces or eliminates data center expansion. The bottom line is that OpenStack and its efficiency impact help the business’s bottom line, and that is why adoption is increasing.

“Our research in aggregate indicates enterprises globally are moving beyond using OpenStack for science projects and basic test and development to workloads that impact the bottom line,” said Al Sadowski, 451 Research’s research vice president. “This is supported by our OpenStack Market Monitor which projects an overall market size of over $5 billion in 2020 with APAC, namely China, leading the way in terms of growth.”

Mark Collier, COO of the OpenStack Foundation, agreed, “The research [is] telling us that OpenStack is not merely an interesting technology, but it’s a cornerstone technology. Companies are using OpenStack to do work that matters to their businesses, and they’re using it to support their journey to a changing landscape in which rapid development and deployment of software is the primary means of competitive advantage.”

Read more


Open Software Defined Storage needs Data Reduction

| Storage Swiss

Software defined storage (SDS) promises to abstract storage services from storage hardware, freeing organizations from the “lock” of having to use specific storage hardware. But the price for this “freedom” is imprisonment to a single storage software vendor. While certainly an improvement, SDS is not the panacea its vendors promote. Open SDS takes data center flexibility to the next step: a customer has not only the flexibility to select the storage hardware they want but also flexibility on the software side of the equation. Examples of these more flexible SDS solutions are Red Hat’s SDS offerings, CEPH and Gluster.

Flexible storage software is not only good for customers, it is also good for the third party vendors looking to add value to an existing SDS feature set. One can imagine an app store type of concept where Open SDS vendors can present a variety of extensions to the current SDS capabilities. In this scenario everyone wins. The vendor that owns the Open SDS support can offer more reasons for the customer to do business with them, the third party vendors can respond faster to customer needs because they don’t have to recreate an entire storage software stack, and the customer wins because they should see greater variety and innovation in the options available.

Data Reduction for CEPH/Gluster

An excellent example of the innovation that an Open SDS solution can foster is found in Permabit’s recent update of its VDO product. VDO, or Virtual Data Optimizer, is a deduplication and compression solution, and the latest version of VDO is tested and certified with Red Hat CEPH and Gluster. VDO has been on the market for over five years, and we did an extensive test of the solution a couple of years ago. Thanks to its OEM roots, VDO is potentially the most heavily tested data reduction solution on the market today. To learn more about CEPH and Gluster, see our article “Product Analysis: Open Software Defined Storage – Ceph or Gluster?”.

Permabit designed their data reduction solution to provide high performance, so high as to not interfere when used in all-flash configurations. Bringing data reduction to CEPH and Gluster opens up a world of possibilities for data centers. Imagine a CEPH solution with all-flash, or mostly flash, supporting a virtual or containerized environment. Or a Gluster solution running parallel analytics processing on IoT data hosted on flash storage. Both approaches with Permabit VDO will also substantially increase data center density, another win for data center managers! While both CEPH and Gluster have supported flash for a long time, Permabit data reduction technology makes their use much more competitively appealing from a price perspective without impacting performance.

Pricing That Makes Sense

One aspect of third-party solutions that often derails their adoption is price. Most of the time these vendors want so much for their solution that IT professionals decide to take a pass. Permabit is a clear exception here. It is pricing the solution at an almost unheard-of $199 per 16 TB of data and $3,000 for 256 TB of data, with annual maintenance at the same price points. Using Permabit’s very conservative 2.5:1 efficiency claim, the 256 TB license and maintenance would enable 256 TB of storage to act like 640 TB of capacity, representing 384 TB in savings. Last time I checked, you can’t get 384 TB for $3,000.
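The capacity math in that paragraph can be checked with a quick sketch. The prices and the 2.5:1 ratio are taken from the article itself; the script is purely illustrative, not a Permabit pricing tool:

```python
# Sketch of the license-vs-capacity arithmetic above, using the article's
# own figures (the $3,000 price and 2.5:1 reduction ratio). Illustrative
# only; not an actual vendor price calculator.

def effective_capacity(raw_tb, reduction_ratio):
    """Usable logical capacity after data reduction."""
    return raw_tb * reduction_ratio

raw_tb = 256
ratio = 2.5          # Permabit's conservative efficiency claim
license_cost = 3000  # USD for the 256 TB tier

logical_tb = effective_capacity(raw_tb, ratio)   # 640 TB effective
savings_tb = logical_tb - raw_tb                 # 384 TB of purchases avoided
cost_per_saved_tb = license_cost / savings_tb    # $7.8125 per TB saved

print(f"{logical_tb:.0f} TB effective, {savings_tb:.0f} TB avoided, "
      f"${cost_per_saved_tb:.2f}/TB")
```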

StorageSwiss Take

We are not sure who the biggest winner is here. Red Hat, especially as it matures and simplifies CEPH/Gluster, has another advantage over many other SDS solutions on the market today: it is now open and optimized. Permabit’s VDO at these price points should be an absolute no-brainer for CEPH/Gluster customers. And customers are getting more than double their storage, without impacting performance, for a fraction of the cost of actually buying it.

Read more


Alternative storage options: Ceph object storage, Swift and more

| VMware information, news and tips

Object storage is rapidly replacing proprietary SAN filers as the go-to choice for storage in the modern data center. But is it right for your virtual environment?

Object storage is changing the data center. Commodity storage offerings provide a well-performing alternative to expensive proprietary SAN filers.

There are currently three different object storage products dominating the market: the legacy Swift, Amazon Simple Storage Service (S3) and the more recent Ceph object storage offering. Swift is mostly used in OpenStack cloud environments and works with applications that address Swift object storage through direct API calls. That means it’s fairly limited in use: if you have a generic application or OS, there’s no easy way to integrate with Swift.

S3 has been around for a long time and works in Amazon cloud environments. Its access methods are limited as well, which means it’s not the best candidate for a generic object storage product. S3 is best used to deploy images in an Amazon Web Services cloud environment. Unfortunately, this isn’t helpful if you’re using VMware vSphere.

Ceph is the most open of all the object storage offerings, not only because it’s open source, but also because it offers several different client interfaces:

  • API access. This is the most common access model in object storage, but it doesn’t work for VMware environments, as you would need to rewrite the vSphere code to access it.
  • The Ceph file system. This is a special-purpose file system that can be used on the object storage client. Since this object storage client would be an ESXi server, this option also isn’t very usable in VMware environments.
  • The RADOS Block Device. This adds a block device to the client OS by loading a kernel module and integrating it on ESXi; this is also difficult to use in a VMware environment.
  • The new iSCSI interface. This is a new and promising development in Ceph object storage. In the new iSCSI interface, the Ceph storage cluster includes an iSCSI target, which means the client will access it like any other iSCSI-based SAN offering.

Of these four access methods, the iSCSI interface is the only one that really works in a VMware environment. You may be wondering, doesn’t that just replace one SAN product with another? The answer is absolutely not. Even if the client only sees an iSCSI target, you’ll be dealing with a flexible, scalable and affordable SAN offering on the back end, which is much cheaper than traditional SAN environments.

The iSCSI target interface for Ceph object storage is relatively new, and you’ll notice it may not be available in all Ceph object storage products. It is included in Ceph’s SUSE-supported offering, SUSE Enterprise Storage 3, and it is likely that other Ceph vendors, such as Red Hat, will soon follow suit. The iSCSI interface code appeared in SUSE first because SUSE is its main developer.

Since Ceph object storage is revolutionizing the world of enterprise storage, it might be a good idea to take the time to explore its possibilities, especially in VMware vSphere environments. Once configured, it will behave just like any other iSCSI data store.

Read more


Open source no longer scares the enterprise


Open source breaks the rules on corporate procurement, but developers never play by the rules, and now open source has sneaked in through the back door.

A study by Vanson Bourne for Rackspace reports that businesses are making big savings by using open source.

In the survey of 300 organisations, three out of five respondents cited cost savings as the top benefit, reducing average cost per project by £30,146.

With most IT projects at the lower end of the cost scale, such savings are significant.

About half of the organisations in the study reported greater innovation because of open source – and 46% said they used open source because of the competitive opportunities.

In fact, 30% cited the opportunity to respond more quickly to market trends as a driver. Almost half (45%) said open source enabled them to get products and services to market sooner, with project lifecycles reduced by an average of six months.

Businesses are used to dealing with the major IT providers and some open source software companies have successfully mimicked commercial models to sell open source to the enterprise.

“Red Hat sells the perception of decreased risk, and it looks similar to proprietary software sales,” said Lindberg. “This was what most corporate procurement people are used to.”

But he added that, in his experience, open source tended to creep in from the bottom through unofficial routes. “It doesn’t require permission or payment. You can simply start using it to deliver business value. It is about people just trying to be more efficient at doing their jobs.”

According to Lindberg, it is a huge indictment of the traditional commercial software business model that no one wants to use the very expensive software stacks that corporate IT used to deploy. “People say they can do things better, faster, cheaper and more efficiently using community-oriented open source software,” he said.


Read more


3 Reasons Why An OpenStack Private Cloud May Cost You Less Than Amazon Web Services (AWS)


IT organizations are moving to the public cloud in droves to take advantage of cost savings and efficiency improvements over traditional on-premises datacenters. The public cloud offers the promise of on-demand self-service for developers and business owners, pooling of resources to improve utilization, and the ability to scale applications very quickly. Companies like Amazon, Google and Microsoft have developed robust public cloud solutions, strong developer communities and broad vendor ecosystem support for their offerings. These companies and other public cloud vendors are reaping the rewards of the mass exodus out of the datacenter and into the cloud.

One example of a private cloud solution gaining significant traction is OpenStack, which has become a de facto standard for open-source based private clouds. With the OpenStack Summit taking place in Barcelona this week, it is an interesting time to reflect on how robust OpenStack has become since the project’s inception over six years ago. OpenStack is now backed by some of the world’s leading technology infrastructure providers, including Cisco Systems, Dell, EMC, Hewlett Packard Enterprise, IBM, Intel and Lenovo.

According to the latest OpenStack User Survey released last week, the share of OpenStack deployments in production is 20% greater than a year ago, with 71% of clouds in production or full operational use. In addition, the latest survey showed that 72% of those surveyed said their number one business driver for deploying OpenStack was to save money over alternative infrastructure choices. Many companies have already proven out the return on investment that OpenStack can provide. For example, TD Bank claims that it experienced 25% to 40% cost savings on its platforms and virtual machines over its previous solution by deploying OpenStack.

As private cloud solutions like OpenStack become more widely adopted, now is the time for IT to take a hard look at why a private cloud approach may make more sense for some workloads than a wholesale move to a public cloud like Amazon Web Services (AWS). Here are three reasons to consider:

  1. Cost Models: Public cloud based pricing models are generally optimized for development workloads that have a lifespan of months, not years. The public cloud may also be well-suited for workloads that have choppy demand where IT may need the flexibility to scale up and down resources, while those workloads with linear demand may be better served with private cloud. In addition, many organizations find that the network bandwidth costs for public clouds can add up quickly for high-traffic workloads. The specific breakeven point between public cloud and private cloud will vary depending on each environment. However, as IT organizations crunch the numbers for their bandwidth-intensive production workloads, private cloud often comes out on top.
  2. Flexibility: Long-term flexibility may be limited with the public cloud focused strategy. Over the next several years, many companies will look to adopt multi-cloud strategies that include a mix of private cloud and multiple public cloud options to ensure they have the “right cloud” for each of their workloads. It is important to consider how easy it may be in the future to move applications from one cloud to another and how locked in you may be to a specific public cloud. A strategy that is centered around a specific public cloud vendor’s tool stack may limit interoperability with other clouds and limit IT’s ability to move away from certain public cloud offerings as workload demands change. In addition, many IT organizations looking to move out of the public cloud are finding that it can be very costly to move applications and data from one cloud environment to another.
  3. “As-a-Service” Private Clouds: There are ways to get the efficiency benefits of public cloud without having to make the leap. The public cloud does provide a hands-off approach for managing IT resources, which lets IT focus on more value-added activities to drive the business. Public cloud also provides operating-expense-based financing models, which can be beneficial for companies not looking to pay a large upfront capital expense for equipment. With this in mind, vendors like Rackspace Hosting and Mirantis have come to market with solutions that provide private cloud capabilities “as-a-service”. By deploying private cloud as-a-service, IT can run workloads on premises or at a co-location facility, which gives all of the benefits of a private cloud (data sovereignty, security, control) with a public cloud-like consumption model. Service offerings can also include capacity planning, cost monitoring, solution optimization and resource management for the entire product lifecycle. For the right workloads, private cloud as-a-service may cost less than the public cloud. Rackspace also offers an “OpenStack Everywhere” approach, which gives IT choices on where to deploy OpenStack, whether in their own on-premises datacenter, a third-party datacenter, a colocation facility or a Rackspace datacenter.
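The breakeven argument in point 1 can be made concrete with a back-of-envelope model. Every number below is a hypothetical placeholder (real pricing varies widely by provider, region and discount structure), but the structure of the comparison, steady compute plus egress versus amortized capex plus opex, is the one IT teams actually run:

```python
# Rough public-vs-private breakeven sketch for a steady (linear-demand)
# workload. All figures are hypothetical placeholders, not quotes from
# any cloud provider or hardware vendor.

def public_cloud_monthly(instances, instance_rate, egress_tb, egress_rate):
    """On-demand compute plus network egress, billed monthly."""
    return instances * instance_rate + egress_tb * egress_rate

def private_cloud_monthly(capex, amortization_months, monthly_opex):
    """Hardware amortized over its service life, plus operating cost."""
    return capex / amortization_months + monthly_opex

pub = public_cloud_monthly(instances=40, instance_rate=150.0,
                           egress_tb=50, egress_rate=90.0)   # $10,500/mo
prv = private_cloud_monthly(capex=250_000, amortization_months=36,
                            monthly_opex=2_500)              # ~$9,444/mo

print(f"public ${pub:,.0f}/mo vs private ${prv:,.0f}/mo")
```

With these placeholder numbers the private option wins; with short-lived or bursty workloads the egress and utilization terms shift and the public cloud can come out ahead, which is exactly the workload-by-workload analysis the article recommends.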


IT organizations may want to think twice before making a move to the public cloud as the return on investment for certain workloads may be greater with a private cloud solution like OpenStack. But getting up and running in production on OpenStack isn’t always straightforward. It is only recently that OpenStack has reached a tipping point to be well-equipped for deployment by a broad base of IT organizations. Most IT organizations will want to engage with industry partners who have OpenStack expertise to do it right. Both existing and up-and-coming vendors help fill in the perceived gaps of the upstream code with products that help improve the OpenStack deployment process and the ongoing operational experience. Integrators and service providers are also delivering OpenStack consulting and support expertise to help enterprise users deploy and manage their OpenStack environments.

Read more


Where OpenStack cloud is today and where it’s going tomorrow

| ZDNet

The future looks bright for OpenStack — according to 451 Research, OpenStack is growing rapidly to become a $5-billion-a-year cloud business. But obstacles still remain.

OpenStack Summit drew 5,000-plus people who believe that OpenStack is the future of the cloud, and 451 Research thinks they may be onto something. The research company expects revenue from OpenStack business models to exceed $5 billion by 2020, growing at a 35 percent compound annual growth rate (CAGR).

451 observed that so far OpenStack-based revenue has been overwhelmingly from service providers offering multi-tenant Infrastructure-as-a-Service (IaaS). Looking ahead, though, 451 believes OpenStack’s future success will come from the private cloud space and in providing hybrid-cloud orchestration for public cloud integration. Better still, for OpenStack companies, 451 sees private cloud revenue exceeding public cloud by 2019.

451 Research also predicted that OpenStack will grow across software-defined networking (SDN), network function virtualization (NFV), mobile, and Internet of Things (IoT) for both service providers and enterprises. This is in addition to its existing use cases in big data and lines of business. The keynotes, again, supported this conclusion. Representatives from Huawei, NEC, and Nokia all sang OpenStack’s praises in business and telecom.

“This year OpenStack has become a top priority and credible cloud option, but it still has its shortcomings,” said Al Sadowski, 451 Research’s research VP. For example, while OpenStack is still growing in popularity for enterprises interested in deploying private cloud-native applications, its appeal is limited for legacy applications and for companies that are already comfortable with AWS or Microsoft Azure.

In addition, while several marquee enterprises, such as Wal-Mart, use OpenStack as the central component of cloud transformations, others are still leery of the perceived complexity associated with configuring, deploying, and maintaining OpenStack-based architectures.

They’re not wrong. OpenStack is still difficult to deploy. That’s why companies such as Red Hat, Canonical, HPE, and Mirantis are making a living from OpenStack distributions and integration. As ZDNet editor Larry Dignan pointed out in a recent article, systems integrators still have a role to play in the cloud.

Read more


Hyperconverged Infrastructure Is Now A Data Center Mainstay


Hyperconverged infrastructure, where networking, compute, and storage are assembled in a commodity hardware box and virtualized together, is no longer the odd man out. Compared with converged infrastructure — a hardware-oriented combination of networking and compute — hyperconverged brings three data center elements together in a virtualized environment.

Hyperconverged infrastructure at one time was criticized as overkill and as handing off too many configuration decisions to a single manufacturer. But IT managers and CIOs have abandoned that critique as more and more hyperconverged units are integrated into the data center with minimal configuration headaches and operational setbacks.

The 451 Research Voice of the Enterprise survey found that 40% of enterprises now use hyperconverged units as a standard building block in the data center, and analysts expect that number to climb rapidly over the next two years.

Among that 40% of users: “74.4% of organizations currently using hyperconverged are using the solutions in their core or central datacenters, signaling this transition,” according to the report.

Christian Perry, research manager at 451 and lead author of the report, wrote that “loyalties to traditional, standalone servers are diminishing in today’s IT ecosystems as managers adopt innovative technologies that eliminate multiple pain points.”

For large enterprises of 10,000 employees or more, 41.3% reported that they were planning to change their IT staff makeup as a result of hyperconvergence. Over a third — 35.5% — of enterprises responded that they had added more virtual machine specialists due to the adoption of converged systems.

According to the authors, “This is more than double the number of organizations actively adding specialists in hardware-specific areas” (such as server administrators or storage and network managers).

One area, however, remains surprisingly unchanged.

Containers have yet to make a major appearance in the infrastructure’s makeup, and “remain nascent,” in Perry’s phrase, in data center management. Nearly 51% reported that none of their servers were running containers, while 22.3% told analysts that they are running containers on 10% or fewer of their x86 servers.

The 451 researchers don’t expect those low percentages to last.

IT staffs will eventually take advantage of containers’ “lightweight nature” to further adoption of the DevOps IT model and frequent software updates. But such adoption will require personnel, perhaps those same virtualization specialists, to be added to staff at a high rate to manage the technology, the report noted.

VMware, for one, is attempting to include container management inside its more general vSphere virtual machine management system.

Read more


Permabit and AHA Partnership Improves Data Center CAPEX and OPEX


Permabit Technology Corporation, leaders in data reduction, and AHA Products Group (AHA) today announced a technology partnership that will enable hyperscale data centers to reduce CAPEX/OPEX, increase performance, increase storage capacity, and extend the life-cycle of their flash memory storage systems.

Many new applications generate petabytes of ephemeral data, requiring compression throughput of 40 Gbps or more. In these applications, CPU overhead from data compression can lead to significant reductions in performance and increases in CAPEX and OPEX. Through this partnership, Permabit’s HIOPS Compression® will enable customers to utilize AHA’s GZIP Compression/Decompression Accelerators to further extend performance and efficiency.

Compared with a 20-core server performing GZIP compression or decompression in software, the AHA374 GZIP accelerator simultaneously provides 8X the compression throughput and 2X the decompression throughput. Compared with LZO/LZ4 data compression, the AHA374 GZIP accelerator increases effective storage capacity by almost 50%.
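That capacity comparison follows directly from compression-ratio arithmetic. The ratios below (2.7:1 for a GZIP-class compressor, 1.8:1 for an LZ4-class one) are illustrative assumptions chosen to show how a roughly 50% effective-capacity gain arises; they are not AHA benchmark figures:

```python
# How a stronger compressor translates into effective flash capacity.
# The 2.7:1 and 1.8:1 ratios are illustrative assumptions for GZIP-class
# vs LZ4-class compression, not AHA benchmark results.

def effective_tb(raw_tb, compression_ratio):
    """Logical capacity presented by raw flash at a given compression ratio."""
    return raw_tb * compression_ratio

raw = 100                            # TB of physical flash
gzip_eff = effective_tb(raw, 2.7)    # 270 TB effective
lz4_eff = effective_tb(raw, 1.8)     # 180 TB effective
gain = gzip_eff / lz4_eff - 1        # 0.5, i.e. 50% more effective capacity

print(f"GZIP-class gives {gain:.0%} more effective capacity than LZ4-class here")
```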

“As our 3rd generation data compression product, we’ve focused on high quality hardware and plug-and-play ZLIB/GZIP software libraries. Combined with our first class customer support, integration is straightforward and painless for the end customer,” said Jeff Hannon, VP of AHA Engineering. “We are thrilled to add Permabit’s HIOPS Compression to our list of integrators that includes KX Systems and Velocimetrics, among many others.”

“Permabit and AHA are working together to ensure that VDO, the only complete data reduction solution for the Linux storage stack, works seamlessly with AHA GZIP compression accelerators to maximize compression rates and increase data center density,” said Louis Imershein, VP Product for Permabit and author of the Data Efficiency magazine on Flipboard. “By integrating HIOPS Compression with AHA Accelerators we are continuing to expand Permabit’s VDO software in the Original Design Manufacturer (ODM) market segment beyond our OEM and OS (open-source) implementations. This is another example of the flexibility, breadth and depth of Permabit Albireo data reduction capabilities.”


Read more