Data Efficiency in the News


VDO in 10 Top Data Storage Applications

| InfoStor

There are so many data storage applications out there that whittling down the list to a handful was quite a challenge. In fact, it proved impossible.

So we are doing two stories on this subject. Even then, there are many good candidates that aren’t included. To narrow things down a little, therefore, we omitted backup, disaster recovery (DR), performance tuning, WAN optimization and similar applications. Otherwise, we’d have to cover just about every storage app around.

We also tried to eliminate cloud-based storage services as there are so many of them. But that wasn’t entirely possible because the lines between on-premise and cloud services are blurring as software defined storage encroaches further on the enterprise. As a result, storage services from the likes of Microsoft Azure, Amazon and one or two others are included.

Storage Spaces Direct

Storage Spaces Direct (S2D) for Windows Server 2016 uses a new software storage bus to turn servers with local-attached drives into highly available and scalable software-defined storage. The Microsoft pitch is that this is done at a tiny fraction of the cost of a traditional SAN or NAS. It can be deployed in a converged or hyper-converged architecture to make deployment relatively simple. S2D also includes caching, storage tiering, erasure coding, RDMA networking and the use of NVMe drives mounted directly on the PCIe bus to boost performance.
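Erasure coding is one of the levers S2D pulls to reclaim capacity relative to mirroring. As a rough, vendor-neutral illustration of that trade-off (the shard counts and raw capacity below are assumed figures, not S2D defaults), here is a short Python sketch:

```python
# Illustrative sketch (not Microsoft's implementation): compare usable capacity
# under three-way mirroring vs. a hypothetical k data + m parity erasure-coded
# layout, the trade-off that S2D's erasure coding is meant to exploit.

def mirror_usable(raw_tb: float, copies: int = 3) -> float:
    """Usable capacity when every block is stored `copies` times."""
    return raw_tb / copies

def erasure_usable(raw_tb: float, data_shards: int = 4, parity_shards: int = 2) -> float:
    """Usable capacity for a k data + m parity erasure-coded layout."""
    return raw_tb * data_shards / (data_shards + parity_shards)

if __name__ == "__main__":
    raw = 100.0  # TB of raw drive capacity across the cluster (assumed figure)
    print(f"3-way mirror: {mirror_usable(raw):.1f} TB usable")   # ~33.3 TB
    print(f"4+2 erasure : {erasure_usable(raw):.1f} TB usable")  # ~66.7 TB
```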

“S2D allows software-defined storage to manage direct attached storage (SSD and HDD) including allocation, availability, capacity and performance optimization,” said Greg Schulz, an analyst at StorageIO Group. “It is integrated with the Windows Server operating systems, so it is leveraging familiar tools and expertise to support Windows, Hyper-V, SQL Server and other workloads.”

Red Hat Ceph Storage

Red Hat’s data storage application for OpenStack is Red Hat Ceph Storage. It is an open, scalable, software-defined storage system that runs on industry-standard hardware. Designed to manage petabytes of data as well as cloud and emerging workloads, Ceph is integrated with OpenStack to offer a single platform for all its block, object, and file storage needs. Red Hat Ceph Storage is priced based on the amount of storage capacity under management.

“Ceph is a software answer to the traditional storage appliance, and it brings all the benefits of modern software – it’s scale-out, flexible, tunable, and programmable,” said Daniel Gilfix, product marketing, Red Hat Storage. “New workloads are driving businesses towards an increasingly software-defined datacenter. They need greater cost efficiency, more control of data, less time-consuming maintenance, strong data protection and the agility of the cloud.”

Virtual Data Optimizer

Gilfix is also a fan of Virtual Data Optimizer (VDO) software from Permabit. This data efficiency software uses compression, deduplication and zero-elimination on the data you store, making it take up less space. It runs as a Linux kernel module, sitting underneath almost any software – including Gluster or Ceph. Pricing starts at $199 per node for up to 16 TB of storage. A 256 TB capacity-based license is available for $3,000.

“Just as server virtualization revolutionized the economics of compute, Permabit data reduction software has the potential to transform the economics of storage,” said Gilfix. “VDO software reduces the amount of disk space needed by 2:1 in most scenarios and up to 10:1 in virtual environments (vdisk).”
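To put those ratios in concrete terms, here is a back-of-the-envelope Python sketch. The 16 TB node size and $4,000 hardware cost are assumptions chosen purely for illustration, not Permabit pricing guidance:

```python
# What a given data reduction ratio means for effective capacity and cost per
# usable terabyte. Raw capacity and price are hypothetical example figures.

def effective_capacity_tb(raw_tb: float, reduction_ratio: float) -> float:
    """Logical data that fits on `raw_tb` of physical storage at a given ratio."""
    return raw_tb * reduction_ratio

def cost_per_effective_tb(raw_cost: float, raw_tb: float, reduction_ratio: float) -> float:
    return raw_cost / effective_capacity_tb(raw_tb, reduction_ratio)

if __name__ == "__main__":
    raw_tb, raw_cost = 16.0, 4000.0    # hypothetical node: 16 TB raw at $4,000
    for ratio in (2.0, 10.0):          # 2:1 typical, 10:1 for vdisk, per the article
        print(f"{ratio:.0f}:1 -> {effective_capacity_tb(raw_tb, ratio):.0f} TB effective, "
              f"${cost_per_effective_tb(raw_cost, raw_tb, ratio):,.0f} per effective TB")
```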

VMware vSAN

VMware vSAN is a great way to pool internal disks for vSphere environments. It extends virtualization to storage and is fully integrated with vSphere. Policy-based management is also included, so you can set per-VM policies and automate provisioning. Due to its huge partner ecosystem, it supports a wide range of applications, containers, cloud services and more. When combined with VMware NSX, a vSAN-powered software defined data center can extend on-premise storage and management services across different public clouds to give a more consistent experience.

OpenIO

OpenIO is described as all-in-one object storage and data processing. It is available as a software-only solution or via the OpenIO SLS (ServerLess Storage) platform. The software itself is open source and available online. It allows users to operate petabytes of object storage. It wraps storage, data protection and processing in one package that can run on any hardware. OpenIO’s tiering enables automated load-balancing and establishes large data lakes for such applications as analytics.

The SLS version is a storage appliance that combines high-capacity drives, a 40Gb/s Ethernet backend and Marvell Armada-3700 dual-core ARM 1.2GHz processors. It can host up to 96 nodes, each with a 3.5″ HDD or SSD. This offers a petabyte scale-out storage system in a 4U chassis.
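The petabyte figure is straightforward arithmetic once a per-drive capacity is assumed; the 12 TB drive size below is an assumption for illustration, not an OpenIO specification:

```python
# Rough arithmetic behind the "petabyte in 4U" claim: 96 nano-node slots,
# each holding one 3.5" drive. Per-drive capacity here is an assumed figure.

NODES = 96
DRIVE_TB = 12   # assumed HDD capacity; larger drives scale the total linearly

raw_pb = NODES * DRIVE_TB / 1000
print(f"{NODES} nodes x {DRIVE_TB} TB = {raw_pb:.2f} PB raw per 4U chassis")
```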

StarWind

StarWind Virtual SAN is virtualization infrastructure targeted at SMBs, remote offices and branch offices, as well as cloud providers. It is said to cut down on the cost of storage virtualization using a technique that mirrors internal hard disks and flash between hypervisor servers. This software-defined storage approach is also designed for ease of use. Getting started requires two licensed nodes, and the deployment can be expanded beyond that. It comes with asynchronous replication, in-line and offline deduplication, and multi-tiered RAM and flash caching.

IBM Spectrum Virtualize

IBM Spectrum Virtualize deals with block-oriented virtual storage. It is available as standalone software or can be used to power IBM all-flash products. The software provides data services such as storage virtualization, thin provisioning, snapshots, cloning, replication, data copying and DR. It makes it possible to virtualize all storage on the same Intel hardware without any additional software or appliances.

“Spectrum Virtualize supports common data services such as snapshots and replication in nearly 400 heterogeneous storage arrays,” said David Hill, Mesabi Group. “It simplifies operational storage management and is available for x86 servers.”

Dell EMC ECS

Dell EMC Elastic Cloud Storage (ECS) is available as a software-defined storage appliance or as software that can be deployed on commodity hardware. This object storage platform provides support for object, file and HDFS. It is said to make app development faster via API-accessible storage, and it also enables organizations to consolidate multiple storage systems and content archives into a single, globally accessible content repository that can host many applications.

NetApp ONTAP Cloud

NetApp ONTAP Cloud is a software-only storage service operating on the NetApp ONTAP storage platform that provides NFS, CIFS and iSCSI data management for the cloud. It includes a single interface to all ONTAP-based storage in the cloud and on premises via its Cloud Manager feature. It is also cloud-agnostic, i.e., it is said to offer enterprise-class data storage management across cloud vendors. Thus it aims to combine cloud flexibility with high availability. Business continuity features are also included.

Quantum StorNext

Quantum’s longstanding StorNext software continues to find new avenues of application in the enterprise. StorNext 5 is targeted at the high-performance shared storage market. It is said to accelerate complex information workflows. The StorNext 5 file system can manage Xcellis workflow storage, extended online storage and tape archives via advanced data management capabilities. Billed as the industry’s fastest streaming file system and policy-based tiering software, it is designed for large sets of large files and complex information workflows.

Read more


Enterprise storage in 2017: trends and challenges

| Information Age

Information Age previews the storage landscape in 2017 – from the technologies that businesses will implement to the new challenges they will face.

The enthusiastic outsourcing to the cloud by enterprise CIOs in 2016 will start to tail off in 2017, as finance directors discover that the high costs are not viable long-term. Board-level management will try to reconcile the alluring simplicity they bought into against the lack of visibility into hardware and operations.

As enterprises attempt to solve the issue of maximising a return for using the cloud, many will realise that the arrangement they are in may not be suitable across the board and seek to bring some of their data back in-house.

It will sink in that using cloud for small data sets can work really well in the enterprise, but as soon as the volume of data grows to a sizeable amount, the outsourced model becomes extremely costly.

Enterprises will extract the most value from their IT infrastructures through hybrid cloud in 2017, keeping a large amount of data on-premise using private cloud and leveraging key aspects of public cloud for distribution, crunching numbers and cloud compute, for example.

‘The combined cost of managing all storage from people, software and full infrastructure is getting very expensive as retention rates on varying storage systems differ,’ says Matt Starr, CTO at Spectra Logic. ‘There is also the added pressure of legislation and compliance as more people want or need to keep everything forever.’

‘We predict no significant uptick on storage spend in 2017, and certainly no drastic doubling of spend,’ says Starr. ‘You will see the transition from rotational to flash. Budgets aren’t keeping up with the rates that data is increasing.’

The prospect of a hybrid data centre will, however, trigger more investment eventually. The model is a more efficient capacity tier based on pure object storage at the drive level, with a combination of high-performance HDD (hard disk drives) and SSD (solid state drives) above it.

Hybrid technology has been used successfully in laptops and desktop computers for years, but it’s only just beginning to be considered for enterprise-scale data centres.

While the industry is in the very early stages of implementing this new method for enterprise, Fagan expects 70% of new data centres to be hybrid by 2020.

‘This is a trend that I expect to briskly pick up pace,’ he says. ‘As the need for faster and more efficient storage becomes more pressing, we must all look to make smart plans for the inevitable data.’

One “must have” is data reduction technology. By applying data reduction across the software stack, data density, costs and efficiency all improve. If Red Hat Linux is part of your strategy, deploying Permabit VDO data reduction is as easy as plug in and go. Storage consumption, data center footprint and operating costs can drop by 50% or more.

 

Read more


2016 Review Shows $148 billion Cloud Market Growing at 25% Annually

| News articles, headlines, videos

  New data from Synergy Research Group shows that across six key cloud services and infrastructure market segments, operator and vendor revenues for the four quarters ending September 2016 reached $148 billion, having grown by 25% on an annualized basis. IaaS & PaaS services had the highest growth rate at 53%, followed by hosted private cloud infrastructure services at 35% and enterprise SaaS at 34%. 2016 was notable as the year in which spend on cloud services overtook spend on cloud infrastructure hardware and software. In aggregate cloud service markets are now growing three times more quickly than cloud infrastructure hardware and software. Companies that featured the most prominently among the 2016 market segment leaders were Amazon/AWS, Microsoft, HPE, Cisco, IBM, Salesforce and Dell EMC.

Over the period Q4 2015 to Q3 2016 total spend on hardware and software to build cloud infrastructure exceeded $65 billion, with spend on private clouds accounting for over half of the total but spend on public cloud growing much more rapidly. Investments in infrastructure by cloud service providers helped them to generate almost $30 billion in revenues from cloud infrastructure services (IaaS, PaaS, hosted private cloud services) and over $40 billion from enterprise SaaS, in addition to supporting internet services such as search, social networking, email and e-commerce. UCaaS, while in many ways a different type of market, is also growing steadily and driving some radical changes in business communications.
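As a quick illustration of what a 25% annualized rate implies if it were to hold, here is a simple compound-growth projection from the $148 billion base. This is illustrative arithmetic only, not a Synergy forecast:

```python
# Compound-growth sketch: project a base figure forward at a fixed annual rate.
# Individual segments grow at different rates (53% IaaS/PaaS, 35% hosted
# private cloud, 34% enterprise SaaS), so this is a deliberately crude model.

def project(base_billion: float, annual_growth: float, years: int) -> float:
    return base_billion * (1 + annual_growth) ** years

for year in range(1, 4):
    print(f"Year +{year}: ${project(148, 0.25, year):.0f}B")
```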

“We tagged 2015 as the year when cloud became mainstream and I’d say that 2016 is the year that cloud started to dominate many IT market segments,” said Synergy Research Group’s founder and Chief Analyst Jeremy Duke. “Major barriers to cloud adoption are now almost a thing of the past, especially on the public cloud side. Cloud technologies are now generating massive revenues for technology vendors and cloud service providers and yet there are still many years of strong growth ahead.”

One way to improve the density and cost effectiveness of cloud deployments is to include scalable, high-performance data reduction technologies. If you are using Red Hat Enterprise Linux, including Permabit Virtual Data Optimizer (VDO) can drop costs by 50% or more and improve data density too.

Read more


Permabit VDO Delivering Performance on Samsung NVMe PCIe SSDs

| storagenewsletter.com

Permabit Technology Corporation announced that its Virtual Data Optimizer (VDO) software for Linux has exceeded the 8GB/s performance throughput barrier for inline compression.

This was accomplished running on a single Samsung Electronics Co., Ltd. NVMe all-flash reference design node.

The latest version of VDO’s HIOPS compression has been optimized to take advantage of today’s multi-core, multi-processor, scale-out architectures to deliver performance in enterprise storage.

To demonstrate this level of performance, the company combined VDO with Red Hat Ceph Storage software and 24 480GB Samsung PM953 U.2 NVMe PCIe SSDs running on the Samsung NVMe reference design platform. Samsung Electronics is offering U.2 Gen 3 X4 NVMe PCIe SSDs. The PM953 that was used in the testing also features 9W TDP (Total Dissipated Power) and a z-height of 7mm.

The resulting reference architecture delivered single-node performance of over 8GB/s read and 3.6GB/s write under workloads generated by Ceph RADOS bench. These results are more than twice as fast as published compression performance numbers from proprietary single-node storage arrays and were achieved without the use of hardware acceleration boards.
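Headline figures like these boil down to bytes moved divided by elapsed wall-clock time. The sketch below shows that arithmetic with placeholder byte counts and run lengths chosen to reproduce the quoted rates; it is not Permabit's benchmark data or RADOS bench output:

```python
# Throughput arithmetic only: aggregate bytes transferred / elapsed seconds.
# The byte totals and 60 s run length are placeholders for illustration.

def throughput_gbps(total_bytes: float, elapsed_s: float) -> float:
    """Throughput in GB/s (decimal gigabytes, as storage vendors report it)."""
    return total_bytes / 1e9 / elapsed_s

read_bytes  = 480e9   # hypothetical: 480 GB read over a 60-second run
write_bytes = 216e9   # hypothetical: 216 GB written over a 60-second run
print(f"read : {throughput_gbps(read_bytes, 60):.1f} GB/s")   # 8.0 GB/s
print(f"write: {throughput_gbps(write_bytes, 60):.1f} GB/s")  # 3.6 GB/s
```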

Today’s data center managers are increasingly turning to architectures built around software-defined storage (SDS) to provide highly scalable solutions that control costs. SDS solutions (such as Red Hat Ceph Storage and Red Hat Gluster Storage) must be able to handle enterprise workloads such as databases, virtual servers and virtual desktops as well as, or better than, the proprietary systems that they are meant to replace. While data compression reduces storage costs, one challenge up until now has been finding a compression approach that could run at high-end enterprise speeds, on standard hardware, in an open Linux environment. HIOPS compression technology, incorporated into VDO, addresses these requirements because it serves as a core service of the OS. Any SDS solution that runs on that OS can then scale out to support petabyte-sized deployments.
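HIOPS itself is proprietary, but the idea of measuring an inline compression ratio and throughput can be shown generically. The Python sketch below uses zlib on synthetic, highly repetitive data, so its numbers are toy values and say nothing about HIOPS or VDO performance:

```python
# Generic illustration of measuring compression ratio and throughput inline.
# zlib stands in for the compressor; the payload is deliberately repetitive,
# so the ratio it reports is unrealistically high for real workloads.

import time
import zlib

payload = b"enterprise workload block " * 4096   # ~104 KB of synthetic data

start = time.perf_counter()
compressed = zlib.compress(payload, 1)           # low level favors speed, as inline use would
elapsed = time.perf_counter() - start

ratio = len(payload) / len(compressed)
mb_per_s = len(payload) / 1e6 / elapsed
print(f"ratio {ratio:.1f}:1, {mb_per_s:.0f} MB/s on this toy input")
```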

 

Read more


Permabit VDO Delivers Record-setting Performance on Samsung’s NVMe Reference Design Platform

| Stock Market

CAMBRIDGE, Mass., Dec. 21, 2016 /PRNewswire/ — Permabit Technology Corporation, the data reduction experts, announced today that its Virtual Data Optimizer (VDO) software for Linux has exceeded the 8GB/s performance throughput barrier for inline compression. This was accomplished running on a single Samsung NVMe All-Flash Reference Design node.

The latest version of VDO’s HIOPS Compression has been optimized to take advantage of today’s multi-core, multi-processor, scale-out architectures to deliver maximum performance in enterprise storage. To demonstrate this level of performance, Permabit combined VDO with Red Hat Ceph Storage software and 24 480GB Samsung PM953 U.2 NVMe PCIe SSDs (solid state drives) running on the Samsung NVMe Reference Design platform. Samsung Electronics is one of the first companies to offer U.2 Gen 3 X4 NVMe PCIe SSDs. The PM953 that was used in the testing also features nine watts TDP (Total Dissipated Power) and a Z-height of 7mm.

The resulting reference architecture delivered single-node performance of over 8GB/s read and 3.6GB/s write under workloads generated by Ceph RADOS bench. These results are more than twice as fast as published compression performance numbers from proprietary single-node storage arrays and were achieved without the use of hardware acceleration boards.

Today’s data center managers are increasingly turning to architectures built around Software-Defined Storage (SDS) to provide highly scalable solutions that control costs. SDS solutions (such as Red Hat Ceph Storage and Red Hat Gluster Storage) must be able to handle enterprise workloads such as databases, virtual servers and virtual desktops as well as, or better than, the proprietary systems that they are meant to replace. While data compression greatly reduces storage costs, one challenge up until now has been finding a compression approach that could run at high-end enterprise speeds, on standard hardware, in an open Linux environment. HIOPS compression technology, incorporated into VDO, addresses all of these requirements because it serves as a core service of the OS. Any SDS solution that runs on that OS can then scale out to support petabyte-sized deployments.

“Previous systems relied on proprietary hardware acceleration based on ASICs or FPGAs to deliver a similar level of performance. Permabit Labs has demonstrated for the first time that HIOPS compression can be achieved with industry-standard processors and platforms,” said Louis Imershein, VP Product for Permabit Technology Corporation. “We’re looking forward to also leveraging the full multi-node, scale-out capabilities of the Red Hat Ceph storage platform as we test further in 2017.”

 

 

Read more


Filling the Linux Data Reduction Gap

| Storage Swiss

Most data centers consider data reduction a “must have” feature for storage. The data reduction software should be able to deliver its capabilities without compromising performance. While many storage systems and a few operating systems now include data reduction as a default, Linux has been curiously absent. While some data reduction solutions exist in the open source community, they typically either don’t perform well or don’t provide the level of reduction of proprietary solutions. In short, Linux has a data reduction gap. Permabit is moving rapidly to fill this gap by bringing its data reduction solutions to Linux.

About Permabit

Permabit is a leader in the data reduction market. Its Virtual Data Optimizer (VDO) software is used by many traditional storage vendors to speed their time to market with a data reduction capability. VDO provides software-based deduplication, compression and thin provisioning. It has been used by a variety of vendors ranging from all-flash array suppliers to disk backup solutions. The primary focus of the Permabit solution has always been to provide data reduction with no noticeable impact on performance.

Widespread Linux Support

The first step into Linux for Permabit was to release VDO with support for Red Hat Enterprise Linux. That release was followed by additional support for Ubuntu Linux from Canonical. Recently, as Storage Switzerland covered in its briefing note “Open Software Defined Storage needs Data Reduction“, the VDO solution was certified for use with Red Hat Ceph and Gluster. In addition, LINBIT, the Linux open source high-availability and geo-clustering replication vendor, announced that the two companies worked together to ensure that VDO works with LINBIT’s DRBD to minimize bandwidth requirements for HA and DR.

Enhanced Business Model

Permabit’s traditional go-to-market strategy with VDO was through Original Equipment Manufacturers (OEMs). Before standardizing on VDO, these OEMs heavily tested the product, making it potentially the most well-vetted data reduction solution on the market today. But the solution was only available to OEMs, not directly to end-user data centers and cloud providers. Linux customers wanting Permabit’s data reduction required an easy way to get to the software. As a result, Permabit’s most recent announcement is an expansion of its go-to-market strategy to include direct access to the solution.

StorageSwiss Take

Data reduction is table stakes for the modern storage infrastructure. At the same time, Linux-based software-defined storage solutions are seeing a rapid increase in adoption, even in non-Linux data centers. Permabit’s VDO fills the data reduction gap and now gives these storage solutions a competitive advantage.

Read more

Software Defined Storage Market CAGR of 31.62% during 2016-2020

| Press releases

The Software Defined Storage Market report focuses on the major drivers and restraints for the key players. It also provides granular analysis of the market share, segmentation, revenue forecasts and geographic regions of the market. The report is a professional and in-depth study of the current state of the Software Defined Storage industry.

Software-defined storage (SDS) is a central part of the software-defined data center (SDDC), an emerging information technology (IT) model. It has been designed to let organizations store business data and enable the speedy delivery of IT services. In an SDS framework, storage services are delivered as a software layer that is abstracted from the underlying hardware. SDS solutions eliminate the need for manual configuration of operational processes in the SDDC; they accomplish this by handing control to software that is separated from the underlying hardware.

Key Vendors of Software Defined Storage Market:

• EMC

• HP

• IBM

• VMware

Market Drivers of Software Defined Storage:

• Effective management of unstructured data

• For a full, detailed list, view our report

Market Challenge of Software Defined Storage:

• Risk of data privacy and security breach

• For a full, detailed list, view our report

Market Trend of Software Defined Storage:

• Rise of OpenStack

• Emergence of hyper-convergence technology

• Upsurge of open-source SDS for containers

• Increase in cloud computing use

• Emergence of VSAs

SDS continues to evolve and will be further enabled by the adoption of OpenStack and open-source SDS for containers. These will broaden the reach of SDS as well as improve its economics.

 

Read more


IT Priorities 2017: Hybrid cloud set to dominate datacentre infrastructure buying decisions

| ComputerWeekly.com

The 2017 Computer Weekly/TechTarget IT Priorities poll suggests the next 12 months will see enterprise IT buyers move to increase the hybrid-readiness of their datacentre facilities.

Connecting on-premise datacentre assets to public cloud resources will be a top investment priority for UK and European IT decision makers in 2017, research suggests.

According to the findings of the 2017 Computer Weekly/TechTarget IT Priorities survey, readying their on-premise infrastructure for hybrid cloud has been voted the number one datacentre investment priority by IT decision makers across the continent.

With enterprises increasingly looking to tap into off-premise resources, the hybrid cloud is often seen as a delivery model that will enable them to do that while making the most of their existing datacentre investments.

All of the major cloud providers – including Microsoft, Google and Amazon Web Services (AWS) – have spent a large portion of 2016 setting out their enterprise hybrid cloud strategies for this reason.

According to analyst Gartner, 2017 is also likely to see an uptick in enterprises looking to manage public, private and hybrid cloud resources from a multitude of providers, as their digital transformation efforts in this area continue to mature and evolve.

“While public cloud usage will continue to increase, the use of private cloud and hosted private cloud services is also expected to increase at least through 2017,” said Gartner.

“The increased use of multiple public cloud providers, plus growth in various types of private cloud services, will create a multicloud environment in most enterprises and a need to coordinate cloud usage using hybrid scenarios.”

The annual Computer Weekly/TechTarget poll shines a light on the investment plans of European IT managers over the coming year, and more than 1,000 of them (including 322 from the UK) took part in the 2017 survey, with cloud – overall – set to be a keen area of focus for the majority over the course of the next 12 months.

Just over a quarter (27.8%) of respondents said they anticipate their IT budget will remain the same in 2017, while 38.1% expect it to increase, and cloud will be responsible for consuming a growing proportion of their annual spend.

Indeed, cloud services was namechecked in both the UK and European versions of the poll as the number one area IT decision makers expect to see an increase in budget for during 2017.

The results are in line with analyst predictions for the year ahead, with market watcher 451 Research’s latest Voice of the enterprise report stating that enterprises will increase their IT budget spend on cloud from 28% this year to 34% in 2017.

Outside of the infrastructure space, the 2017 IT Priorities survey also revealed that software-as-a-service (SaaS) looks set to be the most highly favoured application deployment model in 2017, with just over half (57%) of UK respondents voting for it. This was the same for the European version of the poll, where 46.9% of respondents voted for SaaS.

Hybrid cloud environments were voted the second most popular location for application deployment in both versions of the poll, reinforcing earlier findings from the report that suggest building out capabilities in this area will be a top priority for enterprise IT leaders in 2017.

 

Read more


Who needs traditional storage anymore?

| Gigaom

The traditional enterprise storage market is declining and there are several reasons why. Some of them are easier to identify than others, but one of the most interesting aspects is that there’s a radicalization in workloads, and hence in storage requirements.

Storage as we know it, SAN or NAS, will become less relevant in the future. We’ve already had a glimpse of this from hyperconvergence, but that kind of infrastructure tries to balance all the resources – sometimes at the expense of overall efficiency – and it is more compute-driven than data-driven. Data-intensive workloads have different requirements and need different storage solutions.

The Rise of Flash

All-flash systems are gaining in popularity, and are more efficient than their hybrid and all-disk counterparts. Inline compression and deduplication, for example, are much more viable on a flash-based system than on others, making it easier to achieve better performance even from the smallest of configurations. This means doing more with less.

At the same time, all-flash allows for better performance and lower latency and, even more importantly, the latter is much more consistent and predictable over time.

The Rise of Objects

At the same time, what I’ve always described as “Flash & Trash” is actually happening. Enterprises are implementing large scale capacity-driven storage infrastructures to store all the secondary data. I’m quite fond of object storage, but there are several ways of tackling it and the common denominators are scale-out, software-defined and commodity hardware to get the best $/GB.

Sometimes, your capacity tier could be the cloud (especially for smaller organizations with small amounts of inactive data to store), but the concept is the same, as are the benefits. At the moment the best $/GB is still obtained with hard disks (or tapes), but with the rate of advancement in flash manufacturing, before you know it we’ll be seeing large SSDs replacing disks in these systems too.
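A rough $/GB comparison makes the point; the drive capacities and prices below are assumptions chosen for the example, not current market figures:

```python
# Illustrative $/GB math behind the "Flash & Trash" argument. All capacities
# and prices are assumed example values, not quotes from any vendor.

media = {
    "HDD (capacity)": (8000, 180.0),    # (GB per drive, $ per drive) - assumed
    "SSD (capacity)": (3840, 900.0),    # assumed
}

for name, (gb, price) in media.items():
    print(f"{name}: ${price / gb:.3f}/GB raw")

# With, say, 3:1 data reduction applied on the flash tier:
gb, price = media["SSD (capacity)"]
print(f"SSD @ 3:1 reduction: ${price / (gb * 3):.3f}/GB effective")
```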

The Next Step

Traditional workloads are served well by this type of two-tier storage infrastructure but it’s not always enough.

The concept of memory-class storage is surfacing more and more often in conversations with end users, and other CPU-driven techniques are also taking the stage. Once again, the problem is getting results faster, before others do, if you want to improve your competitiveness.

Servers are Storage

Software-defined scale-out storage usually means commodity x86 servers; the same is true for HCI, and very low latency solutions are heading towards a similar approach. Proprietary hardware can’t compete: it’s too expensive and evolves too slowly compared to the rest of the infrastructure. Yes, niches where proprietary systems remain a good fit will persist for a long time, but this is not where the market is going.

Software is what makes the difference… everywhere now. Innovation and high performance at low cost is what end users want. Solutions like Permabit’s do exactly that, making it possible to do more with less – and to do much more, and faster, with the same resources – particularly when the software is embedded in the storage system or in the OS kernel.

Closing the circle

Storage requirements are continuing to diversify and “one-size-fits-all” no longer works (I’ve been saying that for a long time now). Fortunately, commodity x86 servers, flash memory and software are helping to build tailored solutions for everyone at reasonable costs, making high-performance infrastructures accessible to a wider public.

Most modern solutions are built out of servers. Storage, as we traditionally know it, is becoming less of a discrete component and more blended with the rest of the distributed infrastructure, with software acting as the glue and making things happen. Examples can be found everywhere – large object storage systems have started implementing “serverless” or analytics features for massive data sets, while CPU-intensive and real-time applications can leverage CPU-data vicinity and internal parallelism through a storage layer which can be ephemeral at times… but screaming fast!

 

 

Read more


Why the operating system matters even more in 2017

| Homepage

Operating systems don’t quite date back to the beginning of computing, but they go back far enough. Mainframe customers wrote the first ones in the late 1950s, with operating systems that we’d more clearly recognize as such today—including OS/360 from IBM and Unix from Bell Labs—following over the next couple of decades.

An operating system performs a wide variety of useful functions in a system, but it’s helpful to think of those as falling into three general categories.

First, the operating system sits on top of a physical system and talks to the hardware. This insulates application software from many hardware implementation details. Among other benefits, this provides more freedom to innovate in hardware because it’s the operating system that shoulders most of the burden of supporting new processors and other aspects of the server design—not the application developer. Arguably, hardware innovation will become even more important as machine learning and other key software trends can no longer depend on CMOS process scaling for reliable year-over-year performance increases. With the increasingly widespread adoption of hybrid cloud architectures, the portability provided by this abstraction layer is only becoming more important.

Second, the operating system—specifically the kernel—performs common tasks that applications require. It manages process scheduling, power management, root access permissions, memory allocation, and all the other low-level housekeeping and operational details needed to keep a system running efficiently and securely.

Finally, the operating system serves as the interface to both its own “userland” programs—think system utilities such as logging, performance profiling, and so forth—and applications that a user has written. The operating system should provide a consistent interface for apps through APIs (application programming interfaces) based on open standards. Furthermore, commercially supported operating systems also bring with them business and technical relationships with third-party application providers, as well as content channels to add other trusted content to the platform.

The computing technology landscape has changed considerably over the past couple of years. This has had the effect of shifting how we think about operating systems and what they do, even as they remain as central as ever. Consider changes in how applications are packaged, the rapid growth of computing infrastructures, and the threat and vulnerability landscape.

Containerization

Applications running in Linux containers are isolated within a single copy of the operating system running on a physical server. This approach stands in contrast to hypervisor-based virtualization in which each application is bound to a complete copy of a guest operating system and communicates with the hardware through the intervening hypervisor. In short, hypervisors virtualize the hardware resources, whereas containers virtualize the operating system resources. As a result, containers consume few system resources, such as memory, and impose essentially no performance overhead on the application.

Scale

Another significant shift is that we increasingly think in terms of computing resources at the scale of the datacenter rather than the individual server. This transition has been going on since the early days of the web, of course. However, today we’re seeing the reimagining of high-performance computing “grid” technologies both for traditional batch workloads as well as for newer services-oriented styles.

Dovetailing neatly with containers, applications based on loosely coupled “microservices” (running in containers)—with or without persistent storage—are becoming a popular cloud-native approach. This approach, although reminiscent of Service Oriented Architecture (SOA), has demonstrated a more practical and open way to build composite applications. Microservices, through a fine-grained, loosely coupled architecture, allow an application architecture to reflect the needs of a single well-defined application function. Rapid updates, scalability, and fault tolerance can all be individually addressed in a composite application, whereas in traditional monolithic apps it’s much more difficult to keep changes to one component from having unintended effects elsewhere.

Security

All the security hardening, performance tuning, reliability engineering, and certifications that apply to the virtualized world still apply in the containerized one. And, in fact, the operating system shoulders a greater responsibility for providing security and resource isolation in a containerized and software-defined infrastructure world than in the case in which dedicated hardware or other software may be handling some of those tasks. Linux has been the beneficiary of a comprehensive toolbox of security-enforcing functionality built using the open source model, including SELinux for mandatory access controls, a wide range of userspace and kernel-hardening features, identity management and access control, and encryption.

Some things change, some don’t

Priorities associated with operating system development and operation have certainly shifted. The focus today is far more about automating deployments at scale than it is about customizing, tuning, and optimizing single servers. At the same time, there’s an increase in both the pace and pervasiveness of threats to a no longer clearly-defined security perimeter—requiring a systematic understanding of the risks and how to mitigate breaches quickly.

Add it all together and applications become much more adaptable, much more mobile, much more distributed, much more robust, and much more lightweight. Their placement, provisioning, and securing must become more automated. But they still need to run on something. Something solid. Something open. Something that’s capable of evolving for new requirements and new types of workloads. And that something is a (Linux) operating system.

Read more