Data Efficiency in the News


IT Priorities 2017: Hybrid cloud set to dominate datacentre infrastructure buying decisions

| ComputerWeekly.com

The 2017 Computer Weekly/TechTarget IT Priorities poll suggests the next 12 months will see enterprise IT buyers move to increase the hybrid-readiness of their datacentre facilities.

Connecting on-premise datacentre assets to public cloud resources will be a top investment priority for UK and European IT decision makers in 2017, research suggests.

According to the findings of the 2017 Computer Weekly/TechTarget IT Priorities survey, readying their on-premise infrastructure for hybrid cloud has been voted the number one datacentre investment priority by IT decision makers across the continent.

With enterprises increasingly looking to tap into off-premise resources, the hybrid cloud is often seen as a delivery model that will enable them to do that while making the most of their existing datacentre investments.

All of the major cloud providers – including Microsoft, Google and Amazon Web Services (AWS) – have spent a large portion of 2016 setting out their enterprise hybrid cloud strategies for this reason.

According to analyst Gartner, 2017 is also likely to see an uptick in enterprises looking to manage public, private and hybrid cloud resources from a multitude of providers, as their digital transformation efforts in this area continue to mature and evolve.

“While public cloud usage will continue to increase, the use of private cloud and hosted private cloud services is also expected to increase at least through 2017,” said Gartner.

“The increased use of multiple public cloud providers, plus growth in various types of private cloud services, will create a multicloud environment in most enterprises and a need to coordinate cloud usage using hybrid scenarios.”

The annual Computer Weekly/TechTarget poll shines a light on the investment plans of European IT managers over the coming year. More than 1,000 of them (including 322 from the UK) took part in the 2017 survey, and cloud – overall – is set to be a keen area of focus for the majority over the course of the next 12 months.

Just over a quarter (27.8%) of respondents said they anticipate their IT budget will remain the same in 2017, while 38.1% expect it to increase, and cloud will be responsible for consuming a growing proportion of their annual spend.

Indeed, cloud services were named in both the UK and European versions of the poll as the number one area in which IT decision makers expect to see a budget increase during 2017.

The results are in line with analyst predictions for the year ahead, with market watcher 451 Research’s latest Voice of the Enterprise report stating that enterprises will increase the share of their IT budgets spent on cloud from 28% this year to 34% in 2017.

Outside of the infrastructure space, the 2017 IT Priorities survey also revealed that software-as-a-service (SaaS) looks set to be the most highly favoured application deployment model in 2017, with just over half (57%) of UK respondents selecting it. The same was true in the European version of the poll, where 46.9% of respondents chose SaaS.

Hybrid cloud environments were voted the second most popular location for application deployment in both versions of the poll, reinforcing earlier findings from the report that suggest building out capabilities in this area will be a top priority for enterprise IT leaders in 2017.

 

Read more


Who needs traditional storage anymore?

| Gigaom

The traditional enterprise storage market is declining and there are several reasons why. Some of them are easier to identify than others, but one of the most interesting aspects is that workloads, and hence storage requirements, are becoming radically different.

Storage as we know it, SAN or NAS, will become less relevant in the future. We’ve already had a glimpse of this with hyperconvergence, but that kind of infrastructure tries to balance all the resources – sometimes at the expense of overall efficiency – and it is more compute-driven than data-driven. Data-intensive workloads have different requirements and need different storage solutions.

The Rise of Flash

All-flash systems are gaining in popularity, and are more efficient than their hybrid and all-disk counterparts. Inline compression and deduplication, for example, are much more viable on a flash-based system than on others, making it easier to achieve better performance even from the smallest of configurations. This means doing more with less.

At the same time, all-flash allows for better performance and lower latency and, even more importantly, the latency is much more consistent and predictable over time.
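
To make the “doing more with less” point concrete, here is a minimal, purely illustrative Python sketch of an inline data-reduction write path: each fixed-size block is hashed so duplicates are stored only once, and unique blocks are compressed before being kept. The block size, hash choice and in-memory structures are assumptions for illustration, not how any particular array implements this.

```python
import hashlib
import zlib

BLOCK_SIZE = 4096  # assumed block size; real arrays vary

class InlineReducingStore:
    """Toy write path: deduplicate identical blocks, then compress unique ones."""

    def __init__(self):
        self.blocks = {}   # content hash -> compressed block
        self.logical = []  # logical block sequence -> content hash

    def write(self, data: bytes):
        # Split the incoming write into fixed-size blocks.
        for i in range(0, len(data), BLOCK_SIZE):
            block = data[i:i + BLOCK_SIZE]
            digest = hashlib.sha256(block).hexdigest()
            if digest not in self.blocks:                   # dedup: keep each unique block once
                self.blocks[digest] = zlib.compress(block)  # inline compression
            self.logical.append(digest)

    def physical_bytes(self):
        return sum(len(b) for b in self.blocks.values())

store = InlineReducingStore()
store.write(b"A" * BLOCK_SIZE * 8)   # highly redundant data
store.write(b"A" * BLOCK_SIZE * 8)   # exact duplicate of the first write
print("logical blocks:", len(store.logical))
print("physical bytes:", store.physical_bytes())
```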

The Rise of Objects

At the same time, what I’ve always described as “Flash & Trash” is actually happening. Enterprises are implementing large-scale, capacity-driven storage infrastructures to store all their secondary data. I’m quite fond of object storage, but there are several ways of tackling it, and the common denominators are scale-out design, software-defined architecture and commodity hardware to get the best $/GB.

Sometimes your capacity tier could be the cloud (especially for smaller organizations with small amounts of inactive data to store), but the concept is the same, as are the benefits. At the moment the best $/GB is still obtained with hard disks (or tape), but at the current rate of advancement in flash manufacturing, before you know it we’ll be seeing large SSDs replacing disks in these systems too.
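
As a back-of-the-envelope illustration of the $/GB argument, the short sketch below compares raw media cost with the effective cost of a flash tier once data reduction is applied. All of the prices and the reduction ratio are placeholder assumptions, not figures from the article.

```python
# Placeholder figures for illustration only; actual $/GB varies widely by vendor and year.
hdd_cost_per_gb = 0.03     # assumed raw HDD cost
ssd_cost_per_gb = 0.20     # assumed raw SSD cost
reduction_ratio = 4.0      # assumed combined dedup + compression ratio on flash

effective_ssd = ssd_cost_per_gb / reduction_ratio
print(f"raw HDD  $/GB: {hdd_cost_per_gb:.3f}")
print(f"raw SSD  $/GB: {ssd_cost_per_gb:.3f}")
print(f"effective SSD $/GB after {reduction_ratio:.0f}:1 reduction: {effective_ssd:.3f}")
```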

The Next Step

Traditional workloads are served well by this type of two-tier storage infrastructure but it’s not always enough.

The concept of memory-class storage is surfacing more and more often in conversations with end users, and other CPU-driven techniques are also taking the stage. Once again, the problem is getting results faster, and before others do, if you want to improve your competitiveness.

Servers are Storage

Software-defined scale-out storage usually means commodity x86 servers; the same is true for HCI, and very low-latency solutions are heading towards a similar approach. Proprietary hardware can’t compete: it’s too expensive and evolves too slowly compared to the rest of the infrastructure. Yes, niches where proprietary systems remain a good fit will persist for a long time, but this is not where the market is going.

Software is what makes the difference… everywhere now. Innovation and high performance at low cost is what end users want. Solutions like Permabit’s do exactly that, making it possible to do more with less, but also to do much more, and faster, with the same resources, particularly when the software is embedded in the storage system or in the OS kernel!

Closing the circle

Storage requirements are continuing to diversify and “one-size-fits-all” no longer works (I’ve been saying that for a long time now). Fortunately, commodity x86 servers, flash memory and software are helping to build tailored solutions for everyone at reasonable cost, making high-performance infrastructures accessible to a much wider audience.

Most modern solutions are built out of servers. Storage, as we traditionally know it, is becoming less of a discrete component and more blended with the rest of the distributed infrastructure, with software acting as the glue that makes things happen. Examples can be found everywhere: large object storage systems have started implementing “serverless” or analytics features for massive data sets, while CPU-intensive and real-time applications can leverage CPU-data vicinity and internal parallelism through a storage layer which can be ephemeral at times… but screaming fast!

 

 

Read more


Why the operating system matters even more in 2017

| Homepage

Operating systems don’t quite date back to the beginning of computing, but they go back far enough. Mainframe customers wrote the first ones in the late 1950s, with operating systems that we’d more clearly recognize as such today—including OS/360 from IBM and Unix from Bell Labs—following over the next couple of decades.

An operating system performs a wide variety of useful functions in a system, but it’s helpful to think of those as falling into three general categories.

First, the operating system sits on top of a physical system and talks to the hardware. This insulates application software from many hardware implementation details. Among other benefits, this provides more freedom to innovate in hardware because it’s the operating system that shoulders most of the burden of supporting new processors and other aspects of the server design—not the application developer. Arguably, hardware innovation will become even more important as machine learning and other key software trends can no longer depend on CMOS process scaling for reliable year-over-year performance increases. With the increasingly widespread adoption of hybrid cloud architectures, the portability provided by this abstraction layer is only becoming more important.

Second, the operating system—specifically the kernel—performs common tasks that applications require. It manages process scheduling, power management, root access permissions, memory allocation, and all the other low-level housekeeping and operational details needed to keep a system running efficiently and securely.

Finally, the operating system serves as the interface to both its own “userland” programs—think system utilities such as logging, performance profiling, and so forth—and applications that a user has written. The operating system should provide a consistent interface for apps through APIs (application programming interfaces) based on open standards. Furthermore, commercially supported operating systems also bring with them business and technical relationships with third-party application providers, as well as content channels to add other trusted content to the platform.
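
As a small illustration of these roles, the snippet below asks the operating system, through Python’s standard library wrappers around the usual POSIX interfaces, for a few of the services described above: process identity, CPU affinity, resource accounting and file permissions. It is Linux-oriented (os.sched_getaffinity is Linux-specific and ru_maxrss is reported in kilobytes there) and purely illustrative.

```python
import os
import resource
import tempfile

# Process identity and scheduling-related information the kernel maintains.
print("pid:", os.getpid(), "parent:", os.getppid())
print("CPUs available to this process:", len(os.sched_getaffinity(0)))

# Memory/resource accounting handled by the kernel on the process's behalf.
usage = resource.getrusage(resource.RUSAGE_SELF)
print("peak resident set size (KB on Linux):", usage.ru_maxrss)

# File and permission handling through the same portable interface.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"hello")
    path = f.name
print("file mode bits:", oct(os.stat(path).st_mode & 0o777))
os.remove(path)
```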

The computing technology landscape has changed considerably over the past couple of years. This has had the effect of shifting how we think about operating systems and what they do, even as they remain as central as ever. Consider changes in how applications are packaged, the rapid growth of computing infrastructures, and the threat and vulnerability landscape.

Containerization

Applications running in Linux containers are isolated within a single copy of the operating system running on a physical server. This approach stands in contrast to hypervisor-based virtualization, in which each application is bound to a complete copy of a guest operating system and communicates with the hardware through the intervening hypervisor. In short, hypervisors virtualize the hardware resources, whereas containers virtualize the operating system resources. As a result, containers consume fewer system resources, such as memory, and impose essentially no performance overhead on the application.
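
A quick way to see what “virtualizing the operating system resources” means in practice: on Linux, every process belongs to a set of kernel namespaces, and a container is essentially a process given private copies of some of them rather than a whole guest operating system. The illustrative, Linux-only snippet below simply lists the namespaces of the current process.

```python
import os

# On Linux, each process belongs to a set of kernel namespaces; containers work by
# giving a process its own copies of (some of) these rather than a whole guest OS.
ns_dir = "/proc/self/ns"
for entry in sorted(os.listdir(ns_dir)):
    # Each entry is a symlink like 'pid:[4026531836]' identifying the namespace.
    target = os.readlink(os.path.join(ns_dir, entry))
    print(f"{entry:10s} -> {target}")
```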

Scale

Another significant shift is that we increasingly think in terms of computing resources at the scale of the datacenter rather than the individual server. This transition has been going on since the early days of the web, of course. However, today we’re seeing the reimagining of high-performance computing “grid” technologies both for traditional batch workloads and for newer service-oriented styles.

Dovetailing neatly with containers, applications based on loosely coupled “microservices” (running in containers)—with or without persistent storage—are becoming a popular cloud-native approach. This approach, although reminiscent of Service Oriented Architecture (SOA), has demonstrated a more practical and open way to build composite applications. Microservices, through a fine-grained, loosely coupled architecture, allow an application’s architecture to reflect the needs of a single well-defined application function. Rapid updates, scalability, and fault tolerance can all be individually addressed in a composite application, whereas in traditional monolithic apps it’s much more difficult to keep changes to one component from having unintended effects elsewhere.
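
As a hedged illustration of how small a “single well-defined application function” can be, here is a toy microservice built only on Python’s standard library. The endpoint, port and payload are hypothetical; a production service would add packaging (typically a container image), health checks and real routing.

```python
# A deliberately tiny "microservice": one well-defined function behind an HTTP endpoint.
# Everything here (port, path, payload) is hypothetical and for illustration only.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def convert_to_celsius(fahrenheit: float) -> float:
    return (fahrenheit - 32) * 5 / 9

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Expect requests like /celsius?f=72
        if self.path.startswith("/celsius"):
            try:
                f_value = float(self.path.split("f=")[1])
            except (IndexError, ValueError):
                self.send_response(400)
                self.end_headers()
                return
            body = json.dumps({"celsius": convert_to_celsius(f_value)}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), Handler).serve_forever()
```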

Security

All the security hardening, performance tuning, reliability engineering, and certifications that apply to the virtualized world still apply in the containerized one. And, in fact, the operating system shoulders a greater responsibility for providing security and resource isolation in a containerized and software-defined infrastructure world than in the case in which dedicated hardware or other software may be handling some of those tasks. Linux has been the beneficiary of a comprehensive toolbox of security-enforcing functionality built using the open source model, including SELinux for mandatory access controls, a wide range of userspace and kernel-hardening features, identity management and access control, and encryption.

Some things change, some don’t

Priorities associated with operating system development and operation have certainly shifted. The focus today is far more about automating deployments at scale than it is about customizing, tuning, and optimizing single servers. At the same time, there’s an increase in both the pace and pervasiveness of threats to a no longer clearly-defined security perimeter—requiring a systematic understanding of the risks and how to mitigate breaches quickly.

Add it all together and applications become much more adaptable, much more mobile, much more distributed, much more robust, and much more lightweight. Their placement, provisioning, and securing must become more automated. But they still need to run on something. Something solid. Something open. Something that’s capable of evolving for new requirements and new types of workloads. And that something is a (Linux) operating system.

Read more


Permabit Hits New Milestone in 2016 by Delivering the First Complete Data Reduction for Linux

| PR Newswire

Permabit Technology Corporation, the data reduction experts, brought complete storage efficiency to Linux in 2016.  The company’s Virtual Data Optimizer (VDO) software now delivers advanced deduplication, HIOPS Compression® and fine-grained thin provisioning directly to data centers as they struggle to address the storage density challenges driven by the growth of Big Data and widespread cloud adoption. VDO’s modular, operating system-centric approach addresses the widest possible range of use cases from OLTP to backup workloads, in single system or distributed environments.

Chronologically in 2016, Permabit announced:

  • Availability of VDO for Data Centers on Red Hat Enterprise Linux
  • A partnership with LINBIT delivering bandwidth optimized offsite replication
  • Support for VDO for Ubuntu Linux from Canonical
  • A partnership with AHA Products Group to support development of advanced data compression solutions for Linux
  • Permabit partnership with Red Hat and qualification of VDO with Red Hat Ceph Storage and Red Hat Gluster Storage
  • A reseller agreement with Permabit Integration Partner CalSoft Pvt. Ltd.

“General purpose data reduction has vast potential to control data center costs in the Linux ecosystem,” noted Lynn Renshaw of Peerless Business Intelligence. “In 2016, a single square foot of high-density data center space costs up to $3,000 to build and consumes up to 11.4 kWh of power per year, at an average cost of $1,357 per square foot in the USA. Roughly one-third of all data center space is consumed by storage. Left unchecked, IDC estimates worldwide data center space will total 1.94 billion square feet by 2020. Data reduction (such as Permabit VDO) can reduce data center storage footprints by as much as 75%. This technology alone could save $1.5 trillion in future data center build-out costs.”
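
For readers who want to see how the quoted figures combine, the arithmetic below reproduces the estimate using only the numbers cited in the quote; it is a sanity check of the quoted claim, not independent analysis.

```python
# Reproducing the arithmetic behind the quoted estimate, using only figures from the quote.
total_dc_sqft_2020 = 1.94e9        # IDC projection cited above
storage_share = 1 / 3              # "roughly one-third of all data center space"
build_cost_per_sqft = 3000         # "up to $3,000 to build"
reduction = 0.75                   # "as much as 75%"

storage_sqft = total_dc_sqft_2020 * storage_share
storage_buildout_cost = storage_sqft * build_cost_per_sqft
potential_savings = storage_buildout_cost * reduction
print(f"Storage build-out cost: ${storage_buildout_cost/1e12:.2f} trillion")
print(f"Potential savings at 75% reduction: ${potential_savings/1e12:.2f} trillion")
```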

Permabit generated great excitement in 2016 with VDO’s ability to lower the TCO of software-defined storage (SDS) solutions. After witnessing the success of open source storage in the public cloud, many private data center operations teams began implementing software-based storage in 2016. Following the public cloud providers, private data centers embraced the huge economic advantages of vendor neutrality, hardware independence and increased utilization that come with SDS, while still customizing for their own unique business requirements. VDO’s modular, operating system-centric approach deploys seamlessly, which is why major private data centers wrapped up successful evaluations of VDO within SDS solutions.

According to Permabit CEO and President Tom Cook, “Dramatic changes in the storage industry over the past year have resulted in Permabit expanding from our traditional OEM-only business model, to one positioned to address today’s software-defined data center requirements.  As we looked at worldwide deployments of software defined infrastructure, we realized that Linux is at the center of innovation in the space.  Because of this, as the only complete Linux data reduction solution, VDO is uniquely positioned to radically alter storage economics, drastically reducing TCO.  With our expanded business model, immediate benefits can be realized across today’s Linux-based software-defined environments.”

Read more


Hyper-convergence meets private cloud platform requirements

| searchcloudstorage.techtarget.com

Infrastructure choice and integration are fundamental to capitalizing on all that a private cloud environment has to offer your organization. Enterprises looking to benefit from the cloud are often reluctant to deploy business-critical apps and data in the public cloud due to concerns about availability, security and performance. Most IT managers consider a private cloud platform a more comfortable choice, given the superior visibility into and control over IT infrastructure and peace of mind that comes from housing critical assets on the inside.

Application owners are often skeptical about whether a private cloud platform will really provide the increases in business agility promised by vendors, however. In a similar vein, they’re also wary about whether, and over what timeframe, they’ll realize the ROI required to make deploying a fully functional and expensive private cloud platform worthwhile. Meanwhile, most companies aren’t willing or able to build their own private cloud infrastructure due to a lack of skilled resources and the perceived risk involved. So they turn to vendors. Unfortunately, until recently, most vendor offerings provided some but not all the pieces and capabilities required to deploy a fully functional private cloud platform.

For example, basic open source software stacks deliver a private cloud framework that generally includes virtualization, compute, storage and networking components, along with security (identity management and so on), management and orchestration functionality. These layers are loosely integrated at best, however, which means the heavy lifting of integrating and testing components to make them work together is left to the customer (or third-party consultant). Similarly, most vendor-specific products have taken a mix-and-match approach, enabling customers to choose from among different modules or capabilities — again, necessitating integration on the back end.

Consequently, enterprises that want to avoid the large investment of time and money required to build or integrate private cloud stacks are now looking to adopt preintegrated products based on infrastructure platforms designed to support cloud-enabled apps and data. And, as our recent research reveals, these organizations prefer converged and hyper-converged infrastructures (HCIs) to traditional three-tier architectures to host their private cloud environments.

 

Read more


More than 50 Percent of Businesses Not Leveraging Public Cloud

| tmcnet.com

While more than 50 percent of respondents are not currently leveraging public cloud, 80 percent plan on migrating more of their workloads to it within the next year, according to a new study conducted by TriCore Solutions, the application management experts. As new streams of data continue to appear, from mobile apps to artificial intelligence, companies will rely increasingly on cloud and digital transformation to minimize complexity.

Here are some key results from the survey:

  • Public Cloud Considerations: Cloud initiatives are underway for companies in the mid-market up through the Fortune 500, though IT leaders continue to struggle with what to migrate, when to migrate, and how best to execute the process. More than half of those surveyed plan to migrate web applications and development systems to the public cloud in the next year, prioritizing these over other migrations. More than two thirds have 25 percent or less of their infrastructure in the public cloud, showing that public cloud still has far to go before it becomes the prevailing environment that IT leaders must manage. With increasingly complex hybrid environments, managed service providers will become a more important resource to help facilitate the process.
  • Running Smarter Still on Prem: Whether running on Oracle EBS, PeopleSoft or other platforms, companies rely on ERP systems to run their businesses. Only 20 percent of respondents expect to migrate ERP systems to public cloud in the next year, indicating the importance of hybrid cloud environments for companies to manage business-critical operations on premise alongside other applications and platforms in the public cloud.
  • Prepping for Digital Transformation: With the increased amount of data in today’s IT environment – from machine data to social media data to transactional data and everything in between – the need for managed service providers to make sense of it all has never been more important. 53 percent of respondents plan on outsourcing their IT infrastructure in the future, and they anticipate a nearly 20 percent increase in the number of applications being outsourced as well.

As worldwide spending on cloud continues to grow, and with the increased amount of data in today’s IT environment, IT leaders need to weigh the keys to IT success carefully when migrating to a cloud-based environment. Understanding how to help businesses unlock and leverage the endless data available to them will drive IT success for managed service providers in 2017 and beyond.

 

Read more


Worldwide Enterprise Storage Market Sees Modest Decline in Third Quarter, According to IDC

| idc.com

Total worldwide enterprise storage systems factory revenue was down 3.2% year over year and reached $8.8 billion in the third quarter of 2016 (3Q16), according to the International Data Corporation (IDC) Worldwide Quarterly Enterprise Storage Systems Tracker. Total capacity shipments were up 33.2% year over year to 44.3 exabytes during the quarter. Revenue growth increased within the group of original design manufacturers (ODMs) that sell directly to hyperscale datacenters. This portion of the market was up 5.7% year over year to $1.3 billion. Sales of server-based storage were relatively flat, at -0.5% during the quarter and accounted for $2.1 billion in revenue. External storage systems remained the largest market segment, but the $5.4 billion in sales represented a decline of 6.1% year over year.

“The enterprise storage market closed out the third quarter on a slight downturn, while continuing to adhere to familiar trends,” said Liz Conner, research manager, Storage Systems. “Spending on traditional external arrays resumed its decline and spending on all-flash deployments continued to see good growth and helped to drive the overall market. Meanwhile the very nature of the hyperscale business leads to heavy fluctuations within the market segment, posting solid growth in 3Q16.”

Read more


In-memory de-duplication technology to accelerate response for large-scale storage

| Phys.org

Fujitsu Laboratories Ltd. today announced the development of a high-speed in-memory data deduplication technology for all-flash arrays, which are large-scale, high-speed storage systems that use multiple flash devices such as solid-state drives. This technology enables the production of storage systems with up to twice the response speed when writing data, compared to previous methods.

In recent years, all-flash arrays have incorporated deduplication technology that consolidates duplicate data before writing to a flash device, in order to make the most of the limited capacity of flash devices. However, because the system must connect to multiple flash devices across a network to search for duplicate data every time it writes, response speed during write operations degrades as storage devices grow in capacity and increase in speed.

Fujitsu Laboratories has developed a new method that accelerates response by executing deduplication after the data has been written. Because data may be written to memory twice in some cases when processing continues with the new method, increasing communications volume and lowering overall processing performance, Fujitsu Laboratories has also developed technology to automatically switch between the new method and the previous method as operational conditions require.

This means that response speeds can be increased by up to two times, improving the response of virtual desktop services and reducing database processing times.
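
The announcement does not describe Fujitsu’s implementation in detail, so the following is only a conceptual Python sketch of the general idea: acknowledge writes immediately and deduplicate in the background, falling back to inline deduplication when the backlog of pending blocks grows too large. The threshold, data structures and switching heuristic are assumptions for illustration.

```python
import hashlib
from collections import deque

class AdaptiveDedupStore:
    """Conceptual sketch only: acknowledge writes first, deduplicate later,
    and fall back to inline dedup when the backlog of pending blocks grows."""

    def __init__(self, backlog_limit=1000):
        self.index = {}               # content hash -> stored block
        self.pending = deque()        # blocks written but not yet deduplicated
        self.backlog_limit = backlog_limit

    def write(self, block: bytes):
        if len(self.pending) < self.backlog_limit:
            # "New method": ack immediately, defer the duplicate lookup.
            self.pending.append(block)
        else:
            # "Previous method": backlog too large, dedupe inline before acking.
            self._store_unique(block)

    def background_dedup(self, budget=100):
        # Called off the write path, e.g. by a housekeeping thread.
        for _ in range(min(budget, len(self.pending))):
            self._store_unique(self.pending.popleft())

    def _store_unique(self, block: bytes):
        digest = hashlib.sha256(block).digest()
        self.index.setdefault(digest, block)

store = AdaptiveDedupStore(backlog_limit=4)
for block in [b"x" * 4096] * 10:      # redundant writes
    store.write(block)
store.background_dedup()
print("unique blocks stored:", len(store.index), "still pending:", len(store.pending))
```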

 

Using the newly developed technology, Fujitsu Laboratories achieved a lowest latency of about half that of the previous method in fio benchmarks. As a result, it was able to increase the speed of writing data to an all-flash array by up to two times. In applications that require high write speeds, such as virtual desktops and database processing, access to small files occurs in enormous volumes and there are many duplications. In such situations, user applications running on the service could be sped up, improving the user experience. In addition, by applying this technology to back-end storage for operational databases, operational systems could be sped up, enabling further consolidation of IT infrastructure.

Fujitsu Laboratories will continue development of technologies to further accelerate all-flash arrays going forward, aiming to incorporate them into Fujitsu Limited’s storage products from fiscal 2017 or later.

 

Read more


WW Enterprise Storage Market Down 3% in 3Q16 From 3Q15

| storagenewsletter.com

Total WW enterprise storage systems factory revenue was down 3.2% year over year and reached $8.8 billion in 3Q16, according to the IDC Worldwide Quarterly Enterprise Storage Systems Tracker.

Total capacity shipments were up 33.2% year over year to 44.3 EBs during the quarter.

Revenue growth increased within the group of original design manufacturers (ODMs) that sell directly to hyperscale datacenters. This portion of the market was up 5.7% year over year to $1.3 billion.

Sales of server-based storage were relatively flat, at -0.5% during the quarter and accounted for $2.1 billion in revenue. External storage systems remained the largest market segment, but the $5.4 billion in sales represented a decline of 6.1% year over year.

“The enterprise storage market closed out the third quarter on a slight downturn, while continuing to adhere to familiar trends,” said Liz Conner, research manager, storage systems. “Spending on traditional external arrays resumed its decline and spending on all-flash deployments continued to see good growth and helped to drive the overall market. Meanwhile the very nature of the hyperscale business leads to heavy fluctuations within the market segment, posting solid growth in 3Q16.”

Read more


Digitally Advanced Traditional Enterprises Are Eight Times More Likely to Grow Share

| Stock Market

Bain & Company and Red Hat (NYSE: RHT), the world’s leading provider of open source solutions, today released the results of joint research aimed at determining how deeply enterprises are committed to digital transformation and the benefits these enterprises are seeing. The research report, For Traditional Enterprises, the Path to Digital and the Role of Containers, surveyed nearly 450 U.S. executives, IT leaders and IT personnel across industries and found that businesses that recognize the potential for digital disruption are looking to new digital technologies – such as cloud computing and modern app development – to increase agility and deliver new services to customers while reducing costs. Yet strategies and investments in digital transformation are still in their earliest stages.

For those survey respondents that have invested in digital, the technology and business results are compelling. Bain and Red Hat’s research demonstrates that those using new technologies to digitally transform their business experienced:

  • Increased market share. These enterprises are eight times more likely to have grown their market share, compared to those in the earliest stages of digital transformation.
  • Delivery of better products in a more timely fashion through increased adoption of emerging technologies – as much as three times faster than those in the earlier stages of digital transformation.
  • More streamlined development processes, more flexible infrastructure, faster time to market and reduced costs by using containers for application development.

Despite the hype, however, even the most advanced traditional enterprises surveyed still score well below start-ups and emerging enterprises that have embraced new technologies from inception (digital natives). According to the survey results, nearly 80 percent of traditional enterprises score below 65 on a 100-point scale that assesses how these organizations believe they are aligning digital technologies to achieve business outcomes. Ultimately, the report reveals that the degree of progress among respondents moving towards digital transformation varies widely, driven in part by business contexts, actual IT needs and overall attitudes towards technology. It also uncovers some common themes in the research.

As companies progress on their digital adoption journey, they typically invest in increasingly more sophisticated capabilities in support of their technology and business goals. The use of modern application and deployment platforms represents the next wave of digital maturity and is proving to be key in helping companies address their legacy applications and infrastructure.

Containers are one of the most high-profile of these development platforms and a technology that is helping to drive digital transformation within the enterprise. Containers are self-contained environments that allow users to package and isolate applications with their entire runtime dependencies – all of the files necessary to run on clustered, scale-out infrastructure. These capabilities make containers portable across many different environments, including public and private clouds.

While the opportunities created by these emerging technologies are compelling, the speed and path of adoption for containers is somewhat less apparent, according to the Bain and Red Hat report. The biggest hurdles standing in the way of widespread container use according to respondents are common among early stage technologies – lack of familiarity, talent gaps, hesitation to move from existing technology and immature ecosystems – and can often be overcome in time. Vendors are making progress to address more container-specific challenges, such as management tools, applicability across workloads, security and persistent storage, indicating decreasing barriers to adoption.

Read more