
Using Data Reduction at the OS layer in Enterprise Linux Environments

| Stock Market

Enterprises and cloud service providers that have built their infrastructure around Linux should deploy data reduction in the operating system to drive costs down, say experts at Permabit Technology Corporation, the company behind Permabit Virtual Data Optimizer (VDO). Permabit VDO is the only complete data reduction software for Linux, the world's most popular server operating system (OS). Permabit's VDO software fills a gap in the Linux feature set by providing a cost-effective alternative to the data reduction services delivered as part of the two other major OS platforms – Microsoft Windows and VMware. IT architects are driven to cut costs as they build out their next-generation infrastructure with one or more of these OS platforms in public and/or private cloud deployments, and one obvious way to do so is with data reduction.

When employed as a component of the OS, data reduction can be applied universally, without the lock-in of proprietary solutions. Adding compression, deduplication, and thin provisioning to the core OS lets any application or infrastructure service running on that OS leverage data reduction's benefits. This ensures that savings accrue across the entire IT infrastructure, delivering TCO advantages no matter where the data resides. This is the future of data reduction – as a ubiquitous service of the OS.
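To make the mechanics concrete, here is a minimal Python sketch of two of the techniques named above, block-level deduplication and compression (a toy illustration of the general idea, not Permabit's actual implementation): blocks are fingerprinted so duplicates are stored only once, and unique blocks are compressed before being written.

```python
import hashlib
import zlib

BLOCK_SIZE = 4096  # 4 KiB blocks, a common deduplication granularity


def reduce_data(data: bytes):
    """Toy block-level dedup + compression (illustrative only)."""
    store = {}    # fingerprint -> compressed unique block
    layout = []   # ordered fingerprints needed to reconstruct the stream
    for offset in range(0, len(data), BLOCK_SIZE):
        block = data[offset:offset + BLOCK_SIZE]
        fingerprint = hashlib.sha256(block).hexdigest()
        if fingerprint not in store:                   # deduplication
            store[fingerprint] = zlib.compress(block)  # compression
        layout.append(fingerprint)
    return store, layout


if __name__ == "__main__":
    sample = b"A" * 8192 + b"B" * 4096 + b"A" * 4096   # redundant data
    store, layout = reduce_data(sample)
    physical = sum(len(b) for b in store.values())
    print(f"logical {len(sample)} bytes -> physical {physical} bytes "
          f"({len(layout)} blocks written, {len(store)} unique)")
```

Thin provisioning, the third technique, is the complementary idea of exposing a logical size larger than the physical space actually consumed after reduction.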

“We’re seeing movement away from proprietary storage solutions, where data reduction was a key differentiated feature, toward OS-based capabilities that are applied across an entire infrastructure,” said Tom Cook, Permabit CEO.  “Early adopters are reaping financial rewards through reduced cost of equipment, space, power and cooling. Today we are also seeing adoption of data reduction in the OS by more conservative IT organizations who are driven to take on more initiatives with tightly constrained IT budgets.”

VDO, with inline data deduplication, HIOPS Compression®, and fine-grained thin provisioning, is deployed as a device-mapper driver for Linux. This approach ensures compatibility with a full complement of direct-attached/ephemeral, block, file and object interfaces. VDO data reduction is available for Red Hat Enterprise Linux and Canonical Ubuntu Linux LTS distributions.
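For readers who want a feel for deployment, the sketch below uses Python to wrap the vdo and vdostats command-line tools that ship with the VDO packages on supported distributions; the device path, volume name, sizes and mount point are illustrative placeholders, so treat the exact flags as assumptions to verify against your distribution's documentation. It creates a thinly provisioned VDO volume on a spare block device and reports its space savings.

```python
import subprocess


def run(cmd):
    """Echo and run a command, raising if it fails."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)


# Assumed names: /dev/sdb as the backing device, vdo_data as the volume.
run(["vdo", "create",
     "--name=vdo_data",
     "--device=/dev/sdb",
     "--vdoLogicalSize=10T"])  # thin provisioning: present more than physical space

run(["mkfs.xfs", "-K", "/dev/mapper/vdo_data"])          # -K skips discards at mkfs time
run(["mount", "/dev/mapper/vdo_data", "/mnt/vdo_data"])  # mount point must already exist

# Report deduplication and compression savings for the new volume.
run(["vdostats", "--human-readable"])
```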

Advantages of in-OS data reduction technology include:

  • Improved density for public/private/hybrid cloud storage, resulting in lower storage and service costs
  • Vendor independence, functioning across any hardware running the target OS
  • Seamless data mobility between on-premises and cloud resources
  • Up to six times lower IT infrastructure OpEx
  • Transparent to end users accessing data
  • Requires no modifications to existing applications, file systems, virtualization features, or data protection capabilities

With VDO, these advantages are being realized on Linux today. VDO deployments have been completed (or are currently in progress) with large telecommunications companies, government agencies, financial services firms and IaaS providers who have standardized on Linux for their data centers. With data reduction in Linux, enterprises achieve vendor independence across all Linux-based storage, increased mobility of reduced data and hyperscale economics. What an unbeatable combination!

Read more


Busting the handcuffs of traditional data storage

| SiliconANGLE

Premise

The largest and most successful Web companies in the world have proven a new model for managing and scaling a combined architecture of compute and storage. If you’ve heard it once, you’ve heard it a hundred times: “The hyperscale guys don’t use traditional disk arrays.”

Giants such as Facebook Inc. and Google Inc. use a design of local distributed storage to solve massive data problems. The key differentiation of this new architecture is extreme scalability and simplicity of management, enabled by automation. Over the years, Wikibon has referred to this approach as “Software-led Infrastructure,” which is analogous to so-called Software-Defined Storage.

Excluding the most mission-critical online transaction processing markets served by the likes of Oracle Corp. and IBM Corp.’s DB2, it’s becoming clear this software-led approach is poised to penetrate mainstream enterprises because it is more cost-effective and agile than traditional infrastructure. Up until recently, however, such systems have lacked the inherent capabilities needed to service core enterprise apps.

This dynamic is changing rapidly. In particular, Microsoft Corp. with Azure Stack and VMware Inc. with its vSAN architecture are demonstrating momentum with tightly integrated and automated storage services. Linux, with its open source ecosystem, is the remaining contender to challenge VMware and Microsoft for mainstream adoption of on-premises and hybrid information technology infrastructure, including data storage.

Upending the ‘iron triangle’ of arrays

Peter Burris, Wikibon’s head of research, recently conducted research that found IT organizations suffer from an infrastructure “iron triangle” that is constraining IT progress. According to Burris, the triangle comprises entrenched IT administrative functions, legacy vendors and technology-led process automation.

In his research, Burris identified three factors IT organizations must consider to break the triangle:

  • Move from a technology to a service administration model;
  • Adopt True Private Cloud to enhance real automation and protect intellectual property that doesn’t belong in the cloud; and
  • Elevate vendors that don't force false "platform" decisions; technology vendors have a long history of "adding value" by renaming and repositioning legacy products under vogue technology marketing umbrellas.

The storage industry suffers from entrenched behaviors as much as any other market segment. Traditional array vendors are trying to leverage the iron triangle to slow the decline of legacy businesses while at the same time ramping up investments in newer technologies, both organically and through acquisition. The Linux ecosystem – the lone force that slowed down Microsoft in the 1990s – continues to challenge these entrenched IT norms and is positioned for continued growth in the enterprise.

But there are headwinds.

In a recent research note published on Wikibon (login required), analyst David Floyer argued there are two main factors contributing to the inertia of traditional storage arrays:

  • The lack of equivalent functionality for storage services in this new software-led world; and
  • The cost of migration of existing enterprise storage arrays – aka the iron triangle.

Linux, Floyer argues, is now ready to grab its fair share of mainstream, on-premises enterprise adoption directly as a result of newer, integrated functionality that is hitting the market. As these software-led models emerge in an attempt to replicate cloud, they inevitably will disrupt traditional approaches just as the public cloud has challenged the dominant networked storage models such as Storage Area Network and Network-Attached Storage that have led the industry for two decades.

Linux is becoming increasingly competitive in this race because it is allowing practitioners to follow the game plan Burris laid out in his research, namely:

1) Building momentum on a services model (i.e., delivering robust enterprise storage management services that are integrated into the OS);

2) Enabling these services to be invoked by an orchestration/automation framework (e.g., OpenStack, OpenShift) or directly by an application leveraging microservices (i.e., True Private Cloud); and

3) Elevating vendors that have adopted an open ecosystem approach (i.e., they're not forcing false platform decisions; rather, they're innovating and integrating into an existing open platform). A scan of the OpenStack website gives a glimpse of some of the customers attempting to leverage this approach.

Floyer’s research explores some of the key services required by Linux to challenge for market leadership, with a deeper look at the importance of data reduction as a driver of efficiency and cost reduction for IT organizations.

Types of services

In his research, Floyer cited six classes of storage service that enterprise buyers have expected, which have traditionally been available only within standalone arrays. He posited that these services are changing rapidly, some with the introduction of replacement technologies and others that will increasingly be integrated into the Linux operating system, which will speed adoption. A summary of Floyer’s list of storage services follows:

  • Cache management to overcome slow hard disk drives, which are being replaced by flash (with data reduction techniques) to improve performance and facilitate better data sharing
  • Snapshot Management for improved recovery
  • Storage-level Replication is changing due to the effects of flash and high-speed interconnects such as 40Gb or 100Gb links. Floyer cited WANdisco's Paxos technology and the SimpliVity (acquired by HPE) advanced file system as technologies supporting this transformation.
  • Encryption, which has traditionally been confined to disk drives, is overhead-intensive and leaves data in motion exposed. Encryption has been a fundamental capability within the Linux stack for years, and ideally all data would be encrypted. However, encryption overheads have historically been too cumbersome. With the advent of graphics processing units and field-programmable gate arrays from firms such as Nvidia Corp., encryption overheads are minimized, enabling end-to-end encryption, with the application and database as the focal point for both encryption and decryption, not the disk drive.
  • Quality of Service, which is available in virtually all Linux arrays but typically only sets a floor under which performance may not dip. Traditional approaches to QoS lack the granularity to set ceilings (for example) and allow bursting programmatically through a complete and well-defined REST API (to better serve the needs of individual applications, versus a one-size-fits-all approach). NetApp Inc.'s SolidFire has, from its early days, differentiated in this manner and is a good example of a true software-defined approach that allows provisioning both capacity and performance dynamically through software. Capabilities like this are important to automate the provisioning and management of storage services at scale, a key criterion to replicate public cloud on-prem.
  • Data Reduction – Floyer points out in his research that there are four areas of data reduction that practitioners should understand: zero suppression, thin provisioning, compression and data de-duplication. Data sharing is a fifth and more nuanced capability that will become important in the future. According to Floyer:

To date… “The most significant shortfall in the Linux stack has been the lack of an integrated data reduction capability, including zero suppression, thin provisioning, de-duplication and compression.”

According to Floyer, “This void has been filled by the recent support of Permabit’s VDO data reduction stack (which includes all the data reduction components) by Red Hat.”

VDO stands for Virtual Data Optimizer. In a recent conversation with Wikibon, Permabit Chief Executive Tom Cook explained that as a Red Hat Technology partner, Permabit obtains early access to Red Hat software, which allows VDO testing and deep integration into the operating system, underscoring Floyer’s argument.

Why is this relevant? The answer is cost.

The cost challenge

Data reduction is a wonky topic to chief information officers, but the reason it’s so important is that despite the falling cost per bit, storage remains a huge expense for buyers, often accounting for between 15 and 50 percent of IT infrastructure capital expenditures. As organizations build open hybrid cloud architectures and attempt to compete with public cloud offerings, Linux storage must not only be functionally robust, it must keep getting dramatically cheaper.

The storage growth curve, which for decades has marched to the cadence of Moore’s Law, is re-shaping and growing at exponential rates. IoT, M2M communications and 5G will only serve to accelerate this trend.

Data reduction services have been a huge tailwind for more expensive flash devices and are fundamental to reducing costs going forward. Traditionally, the common way Linux customers have achieved efficiencies is to acquire data reduction services (e.g., compression and de-dupe) through an array – which may help lower the cost of the array, but it perpetuates the iron triangle. And longer-term, it hurts the overall cost model.

As underscored in Floyer's research, the modern approach is to access sets of services that are integrated into the OS and delivered via Linux within an orchestration/automation framework that can manage the workflow. Some cloud service providers (outside of the hyperscale crowd) are sophisticated and have leveraged open-source services to achieve hyperscale-like benefits. Increasingly, these capabilities are coming to established enterprises via the Linux ecosystem and are achieving tighter integration as discussed earlier.

More work to be done

Wikibon community data center practitioners typically cite three primary areas that observers should watch as indicators of Linux maturity generally and software-defined storage specifically:

1. The importance of orchestration and automation

To truly leverage these services, a management framework is necessary to understand what services have been invoked, to ensure recovery is in place (if needed) and give confidence that software-defined storage and associated services can deliver consistently in a production environment.

Take encryption together with data reduction as an example. Data must be reduced before it is encrypted, because encryption deliberately eliminates the very patterns that data de-duplication is trying to find. This example illustrates the benefits of integrated services. Specifically, if something goes wrong during the process, the system must have deep knowledge of exactly what happened and how to recover. The ideal solution in this example is to have encryption, de-dupe and compression integrated as a set of services embedded in the OS and invoked programmatically by the application where needed and where appropriate.
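A short sketch of that ordering, using Python's zlib for compression and the Fernet recipe from the third-party cryptography package for encryption (the library choices are mine for illustration, not something prescribed in the research), shows why reduction has to come first: once a payload is encrypted, its redundancy is gone and it no longer compresses.

```python
import zlib

from cryptography.fernet import Fernet  # pip install cryptography

payload = b"billing-record;" * 10_000      # highly redundant data
cipher = Fernet(Fernet.generate_key())

# Correct order: reduce, then encrypt.
reduced_then_encrypted = cipher.encrypt(zlib.compress(payload))

# Wrong order: encrypt, then try to reduce.
encrypted_then_reduced = zlib.compress(cipher.encrypt(payload))

print("original size:       ", len(payload))
print("compress -> encrypt: ", len(reduced_then_encrypted))   # small
print("encrypt -> compress: ", len(encrypted_then_reduced))   # barely shrinks
```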

2. Application performance

Wikibon believes that replicating hyperscaler-like models on-prem will increasingly require integrating data management features into the OS. Technologists in the Wikibon community indicate that the really high-performance workloads will move to a software-led environment leveraging emerging non-volatile memory technologies such as NVMe and NVMe over Fabrics (NVMf). Many believe the highest-performance workloads will go into these emerging systems and, over time, eliminate what some call the "horrible storage stack" – meaning the overly cumbersome storage protocols that have been forged into the iron triangle for years. This will take time, but the business value effects could be overwhelming, with game-changing performance and low latencies as disruptive to storage as high-frequency trading has been to Wall Street — ideally without the downside.

3. Organizational issues

As Global 2000 organizations adopt this new software-led approach, there are non-technology-related issues that must be overcome. “People, process and technology” is a bit of a bromide, but we hear it all the time: “Technology is the easy part…. People and process are the difficult ones.” The storage iron triangle will not be easily disassembled. The question remains: Will the economics of open source and business model integrations such as those discussed here overwhelm entrenched processes and the people who own them?

On the surface, open source services are the most likely candidates to replicate hyperscale environments because of the collective pace of innovation and economic advantages. However, to date, a company such as VMware has demonstrated that it can deliver more robust enterprise services faster than the open-source alternatives — but not at hyper-scale.

History is on the side of open source. If the ecosystem can deliver on its cost, scalability and functionality promises, it’s a good bet that the tech gap will close rapidly and economic momentum will follow. Process change and people skills will likely be more challenging.

(Disclosure: Wikibon is a division of SiliconANGLE Media Inc., the publisher of Siliconangle.com. Many of the companies referenced in this post are clients of Wikibon. Please read my Ethics Statement.)

 

Read more


Hyper-convergence meets private cloud platform requirements

| searchcloudstorage.techtarget.com

Infrastructure choice and integration are fundamental to capitalizing on all that a private cloud environment has to offer your organization. Enterprises looking to benefit from the cloud are often reluctant to deploy business-critical apps and data in the public cloud due to concerns about availability, security and performance. Most IT managers consider a private cloud platform a more comfortable choice, given the superior visibility into and control over IT infrastructure and the peace of mind that comes from housing critical assets in-house.

Application owners are often skeptical about whether a private cloud platform will really provide the increases in business agility promised by vendors, however. In a similar vein, they’re also wary about whether, and over what timeframe, they’ll realize the ROI required to make deploying a fully functional and expensive private cloud platform worthwhile. Meanwhile, most companies aren’t willing or able to build their own private cloud infrastructure due to a lack of skilled resources and the perceived risk involved. So they turn to vendors. Unfortunately, until recently, most vendor offerings provided some but not all the pieces and capabilities required to deploy a fully functional private cloud platform.

For example, basic open source software stacks deliver a private cloud framework that generally includes virtualization, compute, storage and networking components, along with security (identity management and so on), management and orchestration functionality. These layers are loosely integrated at best, however, which means the heavy lifting of integrating and testing components to make them work together is left to the customer (or third-party consultant). Similarly, most vendor-specific products have taken a mix-and-match approach, enabling customers to choose from among different modules or capabilities — again, necessitating integration on the back end.

Consequently, enterprises that want to avoid the large investment of time and money required to build or integrate private cloud stacks are now looking to adopt preintegrated products based on infrastructure platforms designed to support cloud-enabled apps and data. And, as our recent research reveals, these organizations prefer converged and hyper-converged infrastructures (HCIs) to traditional three-tier architectures to host their private cloud environments.

 

Read more


Rackspace Achieves AWS Premier Consulting Partner Status in the AWS Partner Network

| Marketwired

Rackspace® today announced that it has achieved Premier Partner status as a Consulting Partner within the Amazon Web Services® (AWS) Partner Network (APN). This designation is the highest level in the APN, recognizing APN Partners that have made significant investments to develop the technical resources and AWS expertise necessary to deploy and manage customer solutions on the AWS Cloud. Customers can tap into this valuable expertise through Fanatical Support® for AWS architects and engineers, who collectively hold more than 600 AWS professional and associate certifications across the globe.

To qualify for the AWS Premier Consulting Partner tier, Partners must meet requirements that demonstrate the scale of their AWS expertise, capabilities and engagement in the AWS Ecosystem. Rackspace has been an APN Advanced Consulting Partner since the launch of Fanatical Support for AWS in October 2015. Its global base of knowledge spans all five AWS technical certifications, including Solutions Architect Associate, Developer Associate, SysOps Administrator Associate, DevOps Engineer Professional, and Solutions Architect Professional. Rackspace has also demonstrated expertise across different types of AWS workloads and has achieved AWS Competencies for DevOps and Marketing & Commerce.

“We are proud to be recognized as a Premier Consulting Partner in the APN,” said Jeff Cotten, senior vice president of AWS at Rackspace. “Since the launch of Fanatical Support for AWS in late 2015, we have been focused on helping our customers maximize the value of their AWS investments by developing our expertise and capabilities on AWS. Our team has worked so hard to achieve Premier Partner status in such a short time, and we look forward to continuing to build on our ability to provide Fanatical Support for AWS to our customers.”

Read more


Rackspace Opens First Data Centre In Continental Europe

| Stock Market

Rackspace® has expanded its investment in continental Europe by announcing that it will open a new data centre in Germany. Rackspace's decision to expand its operations in the region provides customers with a new option for managed IT infrastructure in the face of strict personal data protection laws across the Germany, Austria and Switzerland (DACH) territory.

The facility is designed to serve customers who seek managed private clouds and hosting environments, with a focus on fully managed VMware environments.

Rackspace will be working with one of its long-standing partners to build out the infrastructure of the new operation, which is expected to be fully operational in mid-2017.

To address increasing customer demand for managed cloud services, the plan to open the new facility in Frankfurt follows the recent appointment of Alex Fuerst as the leader of Rackspace operations in the DACH region. Alex has hired an in-region team focused on delivering managed private clouds and hosting environments, as well as managed cloud services for customers seeking help with the complexity and cost of managing AWS and Azure. Globally, Rackspace engineers have more than 500 AWS certifications and proven experience with architecting, deploying, operating and optimising both AWS and Azure customer environments. With this team of managed cloud specialists, Rackspace is poised to serve customers in the region with its expertise and broad portfolio of services for multi-cloud environments.

“I am excited to be able to bring this new Rackspace data centre online to serve our fast-expanding German customer base,” said Fuerst, who joined Rackspace in September 2016 after working in IT leadership roles in the DACH region. “We’re experiencing strong demand from DACH-centric customers, as well as U.S. and EMEA-based multinationals who are looking for managed private clouds and hosting environments, along with managed cloud services and expertise for AWS and Azure in continental Europe. This data centre will strengthen our multi-cloud capabilities on the European continent and pave the way for us to achieve our goal of becoming the leading managed cloud provider in Germany, Switzerland and Austria, which is already our third largest international market.”

“With the opening of our data centre in Germany, we can provide the highest level of availability, security, performance and management, and also help our customers address data protection requirements by providing them with multi-cloud deployment options. As the demand for managed services increases in the German-speaking region, companies of all sizes in all verticals are embracing multi-cloud approaches to IT, so that each of their workloads runs on the platform where it can achieve the best performance and cost efficiency,” Fuerst continued. “More and more of those companies are turning to Rackspace expertise and support for their critical IT services and data.”

With the addition of the data centre in Frankfurt, Rackspace will operate 12 data centres worldwide, including in London, Hong Kong, Sydney, Dallas, Chicago and Ashburn (near Washington, D.C.).

Read more


Alternative storage options: Ceph object storage, Swift and more

| VMware information, news and tips

Object storage is rapidly replacing proprietary SAN filers as the go-to choice for storage in the modern data center. But is it right for your virtual environment?

Object storage is changing the data center. Commodity storage offerings provide a well-performing alternative to expensive proprietary SAN filers.

There are currently three different products for object storage dominating the market: the legacy Swift, Amazon Simple Storage Service (S3) and the more recent Ceph object storage offering. Swift is mostly used in OpenStack cloud environments and works with applications that address Swift object storage through direct API calls. That means it's fairly limited in use: If you have a generic application or OS, there's no easy way to integrate with Swift.

S3 has been around for a long time and works in Amazon cloud environments. Its access methods are limited as well, which means it’s not the best candidate for a generic object storage product. S3 is best used to deploy images in an Amazon Web Services cloud environment. Unfortunately, this isn’t helpful if you’re using VMware vSphere.
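For reference, "direct API calls" in this context looks like the minimal boto3 sketch below (the bucket and object names are hypothetical); Swift's native API follows the same object put/get pattern through its own client libraries rather than a block device or file system.

```python
import boto3  # pip install boto3; credentials come from the standard AWS config

s3 = boto3.client("s3")

# Store and retrieve an object purely over the HTTP API --
# no block device or file system sits in the data path.
with open("vm-image.qcow2", "rb") as image:
    s3.put_object(Bucket="example-bucket", Key="images/vm-image.qcow2", Body=image)

obj = s3.get_object(Bucket="example-bucket", Key="images/vm-image.qcow2")
print(obj["ContentLength"], "bytes read back")
```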

Ceph is the most open of all the object storage offerings, not only because it’s open source, but also because it offers several different client interfaces:

  • API access. This is the most common access model in object storage, but it doesn't work for VMware environments, as you would need to rewrite the vSphere code to access it.
  • The Ceph file system. This is a special-purpose file system that can be used on the object storage client. Since this object storage client would be an ESXi server, this option also isn't very usable in VMware environments.
  • The RADOS Block Device. This adds a block device to the client OS by loading a kernel module; integrating that on ESXi is difficult, so it too is hard to use in a VMware environment.
  • The new iSCSI interface. This is a new and promising development in Ceph object storage. In the new iSCSI interface, the Ceph storage cluster includes an iSCSI target, which means the client will access it like any other iSCSI-based SAN offering.

Of these four access methods, the iSCSI interface is the only one that really works in a VMware environment. You may be wondering, doesn’t that just replace one SAN product with another? The answer is absolutely not. Even if the client only sees an iSCSI target, you’ll be dealing with a flexible, scalable and affordable SAN offering on the back end, which is much cheaper than traditional SAN environments.
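To illustrate the first of those access models, here is a minimal sketch using the python-rados bindings that ship with the Ceph client packages (the configuration path and the 'data' pool are assumptions for the example). It also shows why this model doesn't help vSphere: the code has to run on a host that can load the Ceph client libraries, which an ESXi server cannot, hence the appeal of the iSCSI gateway.

```python
import rados  # python-rados, part of the Ceph client packages

# Connect with a standard client configuration (assumed path and pool name).
cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
try:
    ioctx = cluster.open_ioctx("data")              # an existing pool
    ioctx.write_full("greeting", b"hello ceph")     # store an object
    print(ioctx.read("greeting"))                   # read it back
    ioctx.close()
finally:
    cluster.shutdown()
```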

The iSCSI target interface for Ceph object storage is relatively new, and you'll notice it may not be available on all Ceph object storage products. It is included in Ceph's SUSE-supported offering, SUSE Enterprise Storage 3, and it is likely that other Ceph vendors, such as Red Hat, will soon follow suit. The iSCSI interface code appears in SUSE's offering first because SUSE is its main developer.

Since Ceph object storage is revolutionizing the world of enterprise storage, it might be a good idea to take the time to explore its possibilities, especially in VMware vSphere environments. Once configured, it will behave just like any other iSCSI data store.

Read more


Hyperconverged Infrastructure Is Now A Data Center Mainstay

| informationweek.com

Hyperconverged infrastructure, where networking, compute, and storage are assembled in a commodity hardware box and virtualized together, is no longer the odd man out. Compared with converged infrastructure — a hardware-oriented combination of networking and compute — hyperconverged brings three data center elements together in a virtualized environment.

Hyperconverged infrastructure at one time was criticized as overkill and as handing off too many configuration decisions to a single manufacturer. But IT managers and CIOs have abandoned that critique as more and more hyperconverged units are integrated into the data center with minimal configuration headaches and operational setbacks.

The 451 Research Voice of the Enterprise survey found that 40% of enterprises now use hyperconverged units as a standard building block in the data center, and analysts expect that number to climb rapidly over the next two years.

Among that 40% of users, "74.4% of organizations currently using hyperconverged are using the solutions in their core or central datacenters, signaling this transition," according to the report.

Christian Perry, research manager at 451 and lead author of the report, wrote that “loyalties to traditional, standalone servers are diminishing in today’s IT ecosystems as managers adopt innovative technologies that eliminate multiple pain points.”

For large enterprises of 10,000 employees or more, 41.3% reported that they were planning to change their IT staff makeup as a result of hyperconvergence. Over a third — 35.5% — of enterprises responded that they had added more virtual machine specialists due to the adoption of converged systems.

According to the authors, “This is more than double the number of organizations actively adding specialists in hardware-specific areas” (such as server administrators or storage and network managers).

One area, however, remains surprisingly unchanged.

Containers have yet to make a major appearance in the infrastructure’s makeup, and “remain nascent,” in Perry’s phrase, in data center management. Nearly 51% reported that none of their servers were running containers, while 22.3% told analysts that they are running containers on 10% or fewer of their x86 servers.

The 451 researchers don’t expect those low percentages to last.

IT staffs will eventually take advantage of containers' "lightweight nature" to further adoption of the DevOps IT model and frequent software updates. But such adoption will require personnel, perhaps the same virtualization managers, to be added to staff at a high rate to manage the technology, the report noted.

VMware, for one, is attempting to include container management inside its more general vSphere virtual machine management system.

Read more


VMware Decides to Move Data Center Cloud Over to AWS

| eweek.com

VMware and Amazon Web Services, major IT players that often aren’t mentioned in the same sentence because many of their products compete in the same markets, revealed a new partnership Oct. 13 that will result in all of VMware’s cloud infrastructure software being hosted on AWS.

For the IT and software development sectors, the deal will mean VMware mainstays—all of its software-defined data center software, including vCenter, NSX, vSphere, VSAN and others—will run on AWS instead of VMware's own cloud. Like any other cloud deployment, the partnership enables VMware to focus on developing its products and not have to deal with the issues around hosting them, which has never been its primary business.

VMware Cloud on AWS will be run, marketed and supported by VMware, like most typical cloud deployments. However, the service will be integrated with AWS’s own cloud portfolio, which provides computing, databases, analytics and a few different levels of storage, among other features.

VMware Cloud on AWS is a jointly architected service that represents a significant investment in engineering, operations, support and sales resources from both companies. It will run on dedicated AWS infrastructure.

Mark Lohmeyer, VMware’s vice president of products in the Cloud Platform Business Unit, listed the following as the key benefits of the new service:

  • Best-in-class hybrid cloud capabilities: Features enterprise-class application performance, reliability, availability and security, with VMware technologies optimized to run on AWS.
  • Operationally consistent with vSphere: With VMware Cloud on AWS, a private data center integrated with the AWS public cloud can be operated using the same vCenter UIs, APIs and CLIs that IT managers already know.
  • Seamless integration with AWS services: Virtual machines running in this environment will have access to AWS' broad set of cloud-based services, including storage, database, analytics and more.
  • Seamless workload portability: Full VM compatibility and total workload portability between the data center and the AWS cloud is part of the deal.
  • Elastically scalable: The service will let users scale capacity according to their needs. Capacity can be scaled up and down by adding or removing hosts.
  • No patching or upgrades: The service will remove the burden of managing the software patch, update and upgrade life cycle for the user.
  • Subscription-based consumption: Customers will be able to purchase dedicated clusters that combine VMware software and AWS infrastructure, either on demand or as a subscription service.

Read more


Merged Dell-EMC Targets Hybrid Cloud

| HPCwire

If bigger is better, the new IT behemoth Dell Technologies Inc., which combines the holdings of Dell and storage leader EMC Corp., fits the bill with the completion of a $60 billion merger of cloud, storage, virtualization and hardware components that will seek to be all things to all enterprise IT customers.

“We think scale matters,” Michael Dell asserted Wednesday (Sept. 7) in unveiling the new Dell Technologies that incorporates EMC, VMware and other former EMC and Dell units. A tracking stock for VMware (NYSE: VMW) began trading on Sept. 7, the company said during a call with analysts. (It was trading lower at midday.)

The new company also underscores a shift toward IT industry consolidation as leading players in servers like Dell and storage leaders such as EMC search for synergies to meet enterprise demand for hybrid cloud and cloud native offerings.

The HPC community has been an active participant in the consolidation via one or another mechanism. IBM sold its PC business to Lenovo some time ago. Hewlett Packard Enterprise (HPE), itself the result of icon Hewlett-Packard’s split into two pieces, is in the process of acquiring SGI. One analyst on today’s Dell call noted rumors that HPE plans to take itself private, much as Dell had. Michael Dell declined to comment. Moreover, merger and acquisition speculation has percolated recently around other HPC mainstays. By sales volume, HPE is the leader in HPC sales but Dell has been making inroads.

Dell Technologies will at least initially employ 140,000 workers, making it the largest privately controlled technology company "in numbers," according to Tom Sweet, Dell Technologies' CFO.

Emphasizing a hybrid cloud and cloud native application strategy, Michael Dell said the core Dell-EMC infrastructure solutions unit that includes the former Dell server hardware and EMC storage businesses would operate from "the edge to the core to the cloud." Meanwhile, other units combined in the merger—including VMware, Virtustream, application developer Pivotal and security units RSA and SecureWorks—would operate under their own names and "can develop their own ecosystems," Dell said.

The new IT behemoth is betting that its leading rankings in storage, converged platforms and cloud infrastructure position Dell Technologies to compete head-on with the likes of IBM (NYSE: IBM) and Hewlett-Packard Enterprise (NYSE: HPE) as the phrase “digital transformation” transitions from a marketing buzz phrase to reality. Dell and others are targeting enterprise customers searching for new ways to cope with growing data volumes while scaling the delivery of distributed business applications.

Dell Technologies and its rivals also are betting that converged hybrid cloud platforms running cloud native applications represent the future of enterprise IT. Hence, David Goulden, president of the new Dell EMC Infrastructure Solutions Group, said the new unit would likely extend partnerships with public cloud providers as it launches the combined cloud IT infrastructure unit.

Dell's merger with EMC also underscores the fluid nature of a storage sector as next-generation technologies like all-flash arrays along with object and scale-out storage platforms make inroads in enterprise datacenters. EMC competitors such as scale-out network-attached storage specialist Qumulo Inc. emphasized market unease over the merger, including possible product overlap.

Read more


32% CAGR to 2020 Reaching $7 billion for Software-Defined Storage Market

| storagenewsletter.com

The global software-defined storage (SDS) market 2016-2020 report says the rise of OpenStack will be a key trend for market growth, as OpenStack open source cloud computing platforms, deployed in the form of Infrastructure as a Service (IaaS), help organizations manage their storage workloads in data centers.

These are designed to control a large pool of storage, compute, and networking resources in data centers through OpenStack APIs. Networking resources are managed through a dashboard that gives administrators control while empowering their users to provision resources through a web interface.
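For a sense of what managing resources "through OpenStack APIs" looks like from the consumer side, here is a minimal sketch using the openstacksdk Python library (the cloud profile name and volume parameters are placeholders); equivalent calls exist for compute and networking resources.

```python
import openstack  # pip install openstacksdk

# "mycloud" refers to an entry in clouds.yaml; substitute your own profile.
conn = openstack.connect(cloud="mycloud")

# Provision a 10 GB block storage volume through the Cinder API.
volume = conn.block_storage.create_volume(size=10, name="analytics-scratch")
print(volume.id, volume.status)
```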

Global SDS market to surpass $7 billion by 2020.

The growth of this market is spurred by the effective management of unstructured data. Analytics solutions, when integrated with SDS solution for big data management, reduce costs and boost business agility. The integration of big data with network file systems and rapid provisioning of analytics applications streamlines the management of unstructured data for business intelligence.

During 2015, the Americas accounted for around 55% of the overall market share to dominate the global SDS market. The rising demand for innovative IT architecture, fluctuating traffic patterns in networking infrastructure, and the rise of mobility technologies will fuel the growth of the SDS market in the Americas during the forecast period.

The analyst forecasts the global SDS market to grow at a CAGR of 31.62% during the period 2016-2020. According to the report, one of the key drivers for market growth will be cost reduction and efficiency.

Software-defined technology is poised to disrupt the traditional enterprise IT infrastructure model. Companies are under immense pressure to replace legacy IT infrastructure with innovative models that can cut costs. SDS provides a lean business model and minimizes costs by automating process controls and replacing traditional hardware with software.

The banking, financial services and insurance (BFSI) segment accounted for around 18% of the overall market revenue, making it the key revenue-generating vertical in the software-defined storage market globally. The use of SDS in a BFSI environment gives analysts sufficient time to plan and manage their data to comply with evolving government regulations. SDS also provides storage enhancement through new application offerings. It enables efficient storage allocation through well-defined governing policies and eases the process of access provision through well-defined security policies.

The following companies are the key players in the SDS market: EMC, HP, IBM, and VMware.

Other prominent vendors in the market are 6Wind, Arista Networks, Avaya, Big Switch Networks, Brocade, Cisco, Citrix, DataCore, Dell, Ericsson, Fujitsu, HDS, Juniper Networks, NEC, NetApp, Nexenta, Nutanix, Pertino, Pivot3, Plexxi and SwiftStack.

Read more