
Busting the handcuffs of traditional data storage

| SiliconANGLE

Premise

The largest and most successful Web companies in the world have proven a new model for managing and scaling a combined architecture of compute and storage. If you’ve heard it once, you’ve heard it a hundred times: “The hyperscale guys don’t use traditional disk arrays.”

Giants such as Facebook Inc. and Google Inc. use locally attached, distributed storage to solve massive data problems. The key differentiation of this new architecture is extreme scalability and simplicity of management, enabled by automation. Over the years, Wikibon has referred to this approach as “Software-led Infrastructure,” which is analogous to so-called Software-Defined Storage.

Excluding the most mission-critical online transaction processing markets served by the likes of Oracle Corp. and IBM Corp.’s DB2, it’s becoming clear this software-led approach is poised to penetrate mainstream enterprises because it is more cost-effective and agile than traditional infrastructure. Up until recently, however, such systems have lacked the inherent capabilities needed to service core enterprise apps.

This dynamic is changing rapidly. In particular, Microsoft Corp. with Azure Stack and VMware Inc. with its vSAN architecture are demonstrating momentum with tightly integrated and automated storage services. Linux, with its open source ecosystem, is the remaining contender to challenge VMware and Microsoft for mainstream adoption of on-premises and hybrid information technology infrastructure, including data storage.

Upending the ‘iron triangle’ of arrays

Peter Burris, Wikibon’s head of research, recently found that IT organizations suffer from an infrastructure “iron triangle” that constrains IT progress. According to Burris, the triangle comprises entrenched IT administrative functions, legacy vendors and technology-led process automation.

In his research, Burris identified three factors IT organizations must consider to break the triangle:

  • Move from a technology to a service administration model;
  • Adopt True Private Cloud to enhance real automation and protect intellectual property that doesn’t belong in the cloud; and
  • Elevate vendors that don’t force false “platform” decisions; technology vendors have a long history of “adding value” by renaming and repositioning legacy products under vogue technology marketing umbrellas.

The storage industry suffers from entrenched behaviors as much as any other market segment. Traditional array vendors are trying to leverage the iron triangle to slow the decline of legacy businesses while at the same time ramping up investments in newer technologies, both organically and through acquisition. The Linux ecosystem – the lone force that slowed down Microsoft in the 1990s – continues to challenge these entrenched IT norms and is positioned for continued growth in the enterprise.

But there are headwinds.

In a recent research note published on Wikibon (login required), analyst David Floyer argued there are two main factors contributing to the inertia of traditional storage arrays:

  • The lack of equivalent functionality for storage services in this new software-led world; and
  • The cost of migration of existing enterprise storage arrays – aka the iron triangle.

Linux, Floyer argues, is now ready to grab its fair share of mainstream, on-premises enterprise adoption as a direct result of newer, integrated functionality hitting the market. As these software-led models emerge in an attempt to replicate cloud, they inevitably will disrupt traditional approaches, just as the public cloud has challenged the dominant networked storage models such as Storage Area Network and Network-Attached Storage that have led the industry for two decades.

Linux is becoming increasingly competitive in this race because it is allowing practitioners to follow the game plan Burris laid out in his research, namely:

1) Building momentum on a services model (i.e., delivering robust enterprise storage management services that are integrated into the OS);

2) Enabling these services to be invoked by an orchestration/automation framework (e.g., OpenStack, OpenShift) or directly by an application leveraging microservices (i.e., True Private Cloud), as sketched after this list; and

3) Working with vendors that have adopted an open ecosystem approach (i.e., they’re not forcing false platform decisions; rather, they’re innovating and integrating into an existing open platform). A scan of the OpenStack website gives a glimpse of some of the customers attempting to leverage this approach.
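To make the programmatic invocation in item 2) concrete, here is a minimal, hedged sketch of what an application-side request might look like. The endpoint URL, payload fields and token are hypothetical placeholders for illustration only, not any vendor’s or framework’s actual API; real orchestration layers such as OpenStack expose their own schemas.

```python
# Hypothetical sketch only: an application provisioning a volume with both
# a capacity target and QoS limits through a storage service's REST API.
# The URL, payload fields and token are illustrative placeholders, not a real API.
import requests

STORAGE_API = "https://storage.example.internal/v1/volumes"  # hypothetical endpoint
AUTH_TOKEN = "replace-with-a-real-token"                      # hypothetical credential


def provision_volume(name: str, size_gib: int, min_iops: int, max_iops: int) -> dict:
    """Request a volume with a capacity target plus a QoS floor and ceiling."""
    payload = {
        "name": name,
        "size_gib": size_gib,
        "qos": {"min_iops": min_iops, "max_iops": max_iops},   # floor and ceiling
        "data_services": ["compression", "deduplication"],     # OS-integrated reduction
    }
    response = requests.post(
        STORAGE_API,
        json=payload,
        headers={"X-Auth-Token": AUTH_TOKEN},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()


if __name__ == "__main__":
    volume = provision_volume("orders-db", size_gib=512, min_iops=2_000, max_iops=10_000)
    print("Provisioned:", volume)
```

The point of the sketch is the shape of the interaction: capacity, QoS and data services requested in a single programmatic call by the application or automation framework, rather than filed as a ticket against an array administrator.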

Floyer’s research explores some of the key services required by Linux to challenge for market leadership, with a deeper look at the importance of data reduction as a driver of efficiency and cost reduction for IT organizations.

Types of services

In his research, Floyer cited six classes of storage service that enterprise buyers have come to expect, which have traditionally been available only within standalone arrays. He posited that these services are changing rapidly, some with the introduction of replacement technologies and others that will increasingly be integrated into the Linux operating system, which will speed adoption. A summary of Floyer’s list of storage services follows:

  • Cache management, originally needed to overcome slow hard disk drives, which are being replaced by flash (combined with data reduction techniques) to improve performance and facilitate better data sharing
  • Snapshot Management for improved recovery
  • Storage-level Replication is changing due to the effects of flash and high-speed interconnects such as 40Gb/s and 100Gb/s links. Floyer cited WANdisco’s Paxos technology and the SimpliVity (acquired by HPE) advanced file system as technologies supporting this transformation.
  • Encryption, which has traditionally been confined to disk drives, is overhead-intensive and leaves data in motion exposed. Encryption has been a fundamental capability within the Linux stack for years, and ideally all data would be encrypted; however, encryption overheads have historically been too cumbersome. With the advent of graphics processing units and field-programmable gate arrays from firms such as Nvidia Corp., those overheads are minimized, enabling end-to-end encryption with the application and database, not the disk drive, as the focal point for both encryption and decryption.
  • Quality of Service, which is available in virtually all storage arrays but typically only sets a floor under which performance may not dip. Traditional approaches to QoS lack the granularity to set ceilings, for example, or to allow bursting programmatically through a complete, well-defined REST API that serves the needs of individual applications rather than a one-size-fits-all approach. NetApp Inc.’s SolidFire has, from its early days, differentiated in this manner and is a good example of a true software-defined approach that allows provisioning both capacity and performance dynamically through software. Capabilities like this are important to automate the provisioning and management of storage services at scale, a key criterion to replicate public cloud on-prem.
  • Data Reduction – Floyer points out in his research that there are four areas of data reduction practitioners should understand: zero suppression, thin provisioning, compression and data de-duplication (a toy sketch of these techniques follows the VDO discussion below). Data sharing is a fifth and more nuanced capability that will become important in the future. According to Floyer:

To date… “The most significant shortfall in the Linux stack has been the lack of an integrated data reduction capability, including zero suppression, thin provisioning, de-duplication and compression.”

According to Floyer, “This void has been filled by the recent support of Permabit’s VDO data reduction stack (which includes all the data reduction components) by Red Hat.”

VDO stands for Virtual Data Optimizer. In a recent conversation with Wikibon, Permabit Chief Executive Tom Cook explained that as a Red Hat Technology partner, Permabit obtains early access to Red Hat software, which allows VDO testing and deep integration into the operating system, underscoring Floyer’s argument.
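To build intuition for what an integrated data reduction stack does, the toy sketch below walks sample data through zero suppression, block-level de-duplication and compression and reports the combined reduction ratio. It uses only the Python standard library and is for intuition only; it does not reflect how VDO is actually implemented in the kernel I/O path.

```python
# Toy illustration of the data reduction techniques discussed above:
# zero suppression, block-level de-duplication and compression.
# A sketch for intuition only; it does not reflect how VDO is implemented.
import hashlib
import zlib

BLOCK_SIZE = 4096  # work on fixed-size blocks, as block-level reducers do


def reduce_blocks(data: bytes) -> tuple:
    """Return (logical_bytes, physical_bytes) after toy data reduction."""
    seen = set()        # fingerprints of unique blocks already "stored"
    physical = 0
    for offset in range(0, len(data), BLOCK_SIZE):
        block = data[offset:offset + BLOCK_SIZE]
        if block.count(0) == len(block):
            continue    # zero suppression: all-zero blocks consume no space
        digest = hashlib.sha256(block).digest()
        if digest in seen:
            continue    # de-duplication: identical block already stored once
        seen.add(digest)
        physical += len(zlib.compress(block))  # compression of each unique block
    return len(data), physical


if __name__ == "__main__":
    # Sample workload: 200 identical records plus a zero-filled region.
    sample = (b"customer-record:" + b"A" * 4080) * 200 + bytes(BLOCK_SIZE * 50)
    logical, physical = reduce_blocks(sample)
    print(f"logical {logical} bytes -> physical {physical} bytes "
          f"(about {logical / max(physical, 1):.0f}:1 reduction)")
```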

Why is this relevant? The answer is cost.

The cost challenge

Data reduction is a wonky topic to chief information officers, but the reason it’s so important is that despite the falling cost per bit, storage remains a huge expense for buyers, often accounting for between 15 and 50 percent of IT infrastructure capital expenditures. As organizations build open hybrid cloud architectures and attempt to compete with public cloud offerings, Linux storage must not only be functionally robust, it must keep getting dramatically cheaper.

The storage growth curve, which for decades marched to the cadence of Moore’s Law, is reshaping: data volumes are now growing at exponential rates. IoT, machine-to-machine communications and 5G will only accelerate this trend.

Data reduction services have been a huge tailwind for more expensive flash devices and are fundamental to reducing costs going forward. Traditionally, the common way Linux customers have achieved efficiencies is to acquire data reduction services (e.g., compression and de-dupe) through an array, which may help lower the cost of the array but perpetuates the iron triangle. And longer term, it hurts the overall cost model.
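A back-of-the-envelope example shows why this matters to the overall cost model. The storage share and reduction ratio below are illustrative assumptions, not Wikibon or vendor figures.

```python
# Back-of-the-envelope illustration of data reduction's effect on capex.
# All figures are illustrative assumptions, not Wikibon or vendor numbers.
infra_capex = 1_000_000        # total infrastructure capex ($)
storage_share = 0.30           # storage at 30% of capex (within the 15-50% range cited)
reduction_ratio = 2.5          # assumed combined compression + de-dupe ratio

storage_before = infra_capex * storage_share
storage_after = storage_before / reduction_ratio
savings = storage_before - storage_after

print(f"Storage capex: ${storage_before:,.0f} -> ${storage_after:,.0f}")
print(f"Savings: ${savings:,.0f}, or {savings / infra_capex:.0%} of total infrastructure capex")
```

Under these assumptions, a 2.5:1 reduction turns a 30 percent storage share into roughly 12 percent, freeing close to a fifth of total infrastructure capex.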

As underscored in Floyer’s research, the modern approach is to access sets of services that are integrated into the OS and delivered via Linux within an orchestration/automation framework that can manage the workflow. Some cloud service providers (outside of the hyperscale crowd) are sophisticated and have leveraged open-source services to achieve hyperscale-like benefits. Increasingly, these capabilities are coming to established enterprises via the Linux ecosystem and are achieving tighter integration, as discussed earlier.

More work to be done

Wikibon community data center practitioners typically cite three primary areas that observers should watch as indicators of Linux maturity generally and software-defined storage specifically:

1. The importance of orchestration and automation

To truly leverage these services, a management framework is necessary to understand what services have been invoked, to ensure recovery is in place (if needed) and to give confidence that software-defined storage and associated services can deliver consistently in a production environment.

Take encryption together with data reduction. Data must be reduced before it is encrypted, because encryption deliberately eliminates the very patterns that de-duplication and compression rely on finding. This example illustrates the benefits of integrated services: if something goes wrong during the process, the system must have deep knowledge of exactly what happened and how to recover. The ideal solution in this example is to have encryption, de-dupe and compression integrated as a set of services embedded in the OS and invoked programmatically by the application where needed and where appropriate.
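A quick way to see why the ordering matters is to compare compress-then-encrypt with encrypt-then-compress. The sketch below assumes the widely used third-party cryptography package is installed; it is meant only to show that ciphertext defeats data reduction, not to model an enterprise storage stack.

```python
# Demonstrates why data reduction must run before encryption: ciphertext
# looks random, so compression and de-duplication find no patterns in it.
# Requires the third-party 'cryptography' package (pip install cryptography).
import zlib

from cryptography.fernet import Fernet

fernet = Fernet(Fernet.generate_key())
data = b"highly repetitive application data " * 10_000  # ~350 KB of redundant input

# Right order: reduce first, then encrypt the (much smaller) result.
reduce_then_encrypt = fernet.encrypt(zlib.compress(data))

# Wrong order: encrypt first, then attempt to reduce the ciphertext.
encrypt_then_reduce = zlib.compress(fernet.encrypt(data))

print(f"original:            {len(data):>8} bytes")
print(f"compress -> encrypt: {len(reduce_then_encrypt):>8} bytes")
print(f"encrypt -> compress: {len(encrypt_then_reduce):>8} bytes (no real reduction)")
```

The first path shrinks the data by orders of magnitude before it is ever encrypted; the second leaves it roughly the size of the original, which is exactly the failure mode a correctly ordered, integrated service stack avoids.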

2. Application performance

Wikibon believes that replicating hyperscaler-like models on-prem will increasingly require integrating data management features into the OS. Technologists in the Wikibon community indicate that really high-performance workloads will move to a software-led environment leveraging emerging non-volatile memory technologies such as NVMe and NVMe over Fabrics (NVMf). Many believe the highest-performance workloads will go into these emerging systems and, over time, eliminate what some call the “horrible storage stack,” meaning the overly cumbersome storage protocols that have been forged into the iron triangle for years. This will take time, but the business-value effects could be overwhelming, with game-changing performance and low latencies as disruptive to storage as high-frequency trading has been to Wall Street, ideally without the downside.

3. Organizational issues

As Global 2000 organizations adopt this new software-led approach, there are non-technology-related issues that must be overcome. “People, process and technology” is a bit of a bromide, but we hear it all the time: “Technology is the easy part…. People and process are the difficult ones.” The storage iron triangle will not be easily disassembled. The question remains: Will the economics of open source and business model integrations such as those discussed here overwhelm entrenched processes and the people who own them?

On the surface, open source services are the most likely candidates to replicate hyperscale environments because of the collective pace of innovation and economic advantages. However, to date, a company such as VMware has demonstrated that it can deliver more robust enterprise services faster than the open-source alternatives — but not at hyper-scale.

History is on the side of open source. If the ecosystem can deliver on its cost, scalability and functionality promises, it’s a good bet that the tech gap will close rapidly and economic momentum will follow. Process change and people skills will likely be more challenging.

(Disclosure: Wikibon is a division of SiliconANGLE Media Inc., the publisher of Siliconangle.com. Many of the companies referenced in this post are clients of Wikibon. Please read my Ethics Statement.)

 

Read more


2016 Review Shows $148 billion Cloud Market Growing at 25% Annually

| News articles, headlines, videos

New data from Synergy Research Group shows that across six key cloud services and infrastructure market segments, operator and vendor revenues for the four quarters ending September 2016 reached $148 billion, having grown by 25% on an annualized basis. IaaS & PaaS services had the highest growth rate at 53%, followed by hosted private cloud infrastructure services at 35% and enterprise SaaS at 34%. 2016 was notable as the year in which spend on cloud services overtook spend on cloud infrastructure hardware and software. In aggregate, cloud service markets are now growing three times more quickly than cloud infrastructure hardware and software. Companies that featured the most prominently among the 2016 market segment leaders were Amazon/AWS, Microsoft, HPE, Cisco, IBM, Salesforce and Dell EMC.

Over the period Q4 2015 to Q3 2016 total spend on hardware and software to build cloud infrastructure exceeded $65 billion, with spend on private clouds accounting for over half of the total but spend on public cloud growing much more rapidly. Investments in infrastructure by cloud service providers helped them to generate almost $30 billion in revenues from cloud infrastructure services (IaaS, PaaS, hosted private cloud services) and over $40 billion from enterprise SaaS, in addition to supporting internet services such as search, social networking, email and e-commerce. UCaaS, while in many ways a different type of market, is also growing steadily and driving some radical changes in business communications.

“We tagged 2015 as the year when cloud became mainstream and I’d say that 2016 is the year that cloud started to dominate many IT market segments,” said Synergy Research Group’s founder and Chief Analyst Jeremy Duke. “Major barriers to cloud adoption are now almost a thing of the past, especially on the public cloud side. Cloud technologies are now generating massive revenues for technology vendors and cloud service providers and yet there are still many years of strong growth ahead.”

One way to improve the density and cost-effectiveness of cloud deployments is to include scalable, high-performance data reduction technologies. If you are using Red Hat Enterprise Linux, including Permabit Virtual Data Optimizer (VDO) can drop storage costs by 50% or more and improve data density too.

Read more


Hyper-convergence meets private cloud platform requirements

| searchcloudstorage.techtarget.com

Infrastructure choice and integration are fundamental to capitalizing on all that a private cloud environment has to offer your organization. Enterprises looking to benefit from the cloud are often reluctant to deploy business-critical apps and data in the public cloud due to concerns about availability, security and performance. Most IT managers consider a private cloud platform a more comfortable choice, given the superior visibility into and control over IT infrastructure and peace of mind that comes from housing critical assets on the inside.

Application owners are often skeptical about whether a private cloud platform will really provide the increases in business agility promised by vendors, however. In a similar vein, they’re also wary about whether, and over what timeframe, they’ll realize the ROI required to make deploying a fully functional and expensive private cloud platform worthwhile. Meanwhile, most companies aren’t willing or able to build their own private cloud infrastructure due to a lack of skilled resources and the perceived risk involved. So they turn to vendors. Unfortunately, until recently, most vendor offerings provided some but not all the pieces and capabilities required to deploy a fully functional private cloud platform.

For example, basic open source software stacks deliver a private cloud framework that generally includes virtualization, compute, storage and networking components, along with security (identity management and so on), management and orchestration functionality. These layers are loosely integrated at best, however, which means the heavy lifting of integrating and testing components to make them work together is left to the customer (or third-party consultant). Similarly, most vendor-specific products have taken a mix-and-match approach, enabling customers to choose from among different modules or capabilities — again, necessitating integration on the back end.

Consequently, enterprises that want to avoid the large investment of time and money required to build or integrate private cloud stacks are now looking to adopt preintegrated products based on infrastructure platforms designed to support cloud-enabled apps and data. And, as our recent research reveals, these organizations prefer converged and hyper-converged infrastructures (HCIs) to traditional three-tier architectures to host their private cloud environments.

 

Read more


Worldwide Enterprise Storage Market Sees Modest Decline in Third Quarter, According to IDC

| idc.com

Total worldwide enterprise storage systems factory revenue was down 3.2% year over year and reached $8.8 billion in the third quarter of 2016 (3Q16), according to the International Data Corporation (IDC) Worldwide Quarterly Enterprise Storage Systems Tracker. Total capacity shipments were up 33.2% year over year to 44.3 exabytes during the quarter. Revenue growth increased within the group of original design manufacturers (ODMs) that sell directly to hyperscale datacenters. This portion of the market was up 5.7% year over year to $1.3 billion. Sales of server-based storage were relatively flat, at -0.5% during the quarter and accounted for $2.1 billion in revenue. External storage systems remained the largest market segment, but the $5.4 billion in sales represented a decline of 6.1% year over year.

“The enterprise storage market closed out the third quarter on a slight downturn, while continuing to adhere to familiar trends,” said Liz Conner, research manager, Storage Systems. “Spending on traditional external arrays resumed its decline and spending on all-flash deployments continued to see good growth and helped to drive the overall market. Meanwhile the very nature of the hyperscale business leads to heavy fluctuations within the market segment, posting solid growth in 3Q16.”

Read more



HPE sees Synergy in hybrid cloud infrastructure

| PC World Australia

HPE originally pitched its Synergy line of “composable” IT infrastructure as a way to bring the flexibility of cloud services to on-premises systems. Now it’s turning that story around, putting those same Synergy components — and some new ones — into the public cloud with the goal of simplifying hybrid IT management.

The new components of Synergy made their debut in London on Tuesday, at HPE Discover, an event for the company’s customers and partners.

Among the new offerings are a software update for the HPE Hyper Converged 380 server and a new version of HPE Helion CloudSystem. Both incorporate new cloud management functions intended to simplify the automation of repetitive tasks. There are also two new ways to pay for it all, HPE Dynamic Usage for Hyper Converged Systems and HPE Flexible Capacity Service, plus some deft financial engineering to move some of the business risks onto partners.

HPE’s goal is to allow IT departments to act as service providers for their organization, rather than maintaining infrastructure.

“The most efficient way to deal with the cloud point of view is to take it from an application perspective, start with the workload and derive the infrastructure to support it,” said Matt Foley, HPE’s director of cloud presales in Europe, the Middle East and Africa.

Read more


SUSE acquires OpenStack IaaS and Cloud Foundry PaaS assets from HPE

| Geekzone

SUSE has entered into an agreement with Hewlett Packard Enterprise (HPE) to acquire technology and talent that will expand SUSE’s OpenStack Infrastructure-as-a-Service (IaaS) solution and accelerate SUSE’s entry into the growing Cloud Foundry Platform-as-a-Service (PaaS) market.

The acquired OpenStack assets will be integrated into SUSE OpenStack Cloud, and the acquired Cloud Foundry and PaaS assets will enable SUSE to bring to market a certified, enterprise-ready SUSE Cloud Foundry PaaS solution for all customers and partners in the SUSE ecosystem. The agreement includes HPE naming SUSE as its preferred open source partner for Linux, OpenStack and Cloud Foundry solutions. In addition, SUSE has increased engagement with the Cloud Foundry Foundation, becoming a platinum member and taking a seat on the Cloud Foundry Foundation board. 

“The driving force behind this acquisition is SUSE’s commitment to providing open source software-defined infrastructure technologies that deliver enterprise value for our customers and partners,” said Nils Brauckmann, CEO of SUSE. “This also demonstrates how we’re building our business through a combination of organic growth and technology acquisition. Once again, this strategy sends a strong message to the market and the worldwide open source community that SUSE is a company on the move.”

Ashish Nadkarni, program director, Computing Platforms, for IDC, said, “This expanded partnership and transfer of technology assets has the potential to be a real win-win for SUSE and HPE, as well as customers of both companies. SUSE has proven time and again it can successfully work with its technology partners to help organizations glean maximum benefit from their investments in open source. SUSE is positioning itself very well as a high-growth company with the resources it needs to compete in key market segments.”

As part of the transaction, HPE has named SUSE as its preferred open source partner for Linux, OpenStack IaaS and Cloud Foundry PaaS. HPE’s choice of SUSE as its preferred open source partner further cements SUSE’s reputation for delivering high-quality, enterprise-grade open source solutions and services.

Abby Kearns, executive director of the Cloud Foundry Foundation, said, “SUSE has been a powerful player in the enterprise open source world for more than two decades, and I’m excited to see the impact that a SUSE Cloud Foundry distribution will have for enterprises and developers around the world. SUSE’s strategic vision for the convergence of Platform-as-a-Service and Container-as-a-Service technologies will also be a welcome addition to the strategic dialogue we have within the Cloud Foundry Foundation community.”

Read more


HPE core servers and storage under pressure

| The Register

HPE’s latest results show a company emerging slimmer and fitter through diet (cost-cutting) and exercise (spin-merger deals) but facing tougher markets in servers and storage – the new normal, as CEO Meg Whitman says.

A look at the numbers and the earnings call from the servers and storage points of view shows a company with work to do.

The server business saw revenue of $3.5bn in the quarter, down 7 per cent year-on-year and up 5 per cent quarter-on-quarter. High-performance compute (Apollo) and SGI servers did well. Hyper-converged is growing and has more margin than the core ISS (Industry Standard Servers). Synergy and mission critical systems also did well.

But the servers business was affected by strong pressure on the core ISS ProLiant racks, a little in the blade server business, and also low or no profitability selling Cloudline servers, the ones for cloud service providers and hyperscale customers.

In the earnings call, Meg Whitman discussed the ISS business, saying: “Other parts of the server business are doing really well. And I think that core ISS rack deterioration has a number of different things. One is in part our execution in the channel and pricing and things like that. And the second is the move to the public cloud.”

She also mentioned that there was increased competition from Huawei in servers.

Her answer is: “We need to shore up core ISS racks with improvements in the channel, improvements in quote to cash, and focus – more focus on the distributors and VARs for the volume-related ISS rack business.”

She thinks the ISS business can grow 1-2 per cent if this is done, and because profitable gear like storage gets attached to these servers, HPE is “gaining share profits in this business”.

Hyper-converged

Although HPE’s CEO said hyper-converged was doing well, there is some way to go. Gartner ranks HPE as the leader in the hyper-converged and integrated systems magic quadrant, with EMC second and Nutanix third.

The analysis company’s researchers said: “Hewlett Packard Enterprise offers multiple converged, hyper-converged, reference architectures and point systems of various design points. But as the volume market leader in many segments (including blade and rack servers), it is only logical that HPE should be a leading vendor in this market.”

Since Nutanix is purely a hyper-converged player, HPE, with its HC 380, is not the leader in hyper-converged systems specifically. The Gartnerites point out that “HPE is a relative late starter in HCIS and is frequently absent from competitive hyper-convergence evaluations versus more established vendors.”

An August Forrester Wave report on hyper-converged systems put HPE in eighth position. Forrester’s researchers said: “HPE’s product is in its early stages, and… its position in the HCI segment should improve quickly over time.”

Nothing was said in the call about any merger or acquisition in this area. There have been rumours about HPE and SimpliVity getting together.

Storage

In the all-flash array (AFA) business, HPE grew 3PAR AFA revenues 100 per cent year-on-year to a $750m annual run rate, which compares with NetApp at $1bn and Pure at $631m. Our sense is that Dell-EMC leads this market, followed by NetApp, then HPE, with Pure in fourth place.

Whitman said: “All-flash now makes up 50 per cent of our 3PAR portfolio and interestingly still only comprises 10 per cent of the data centre. So we see more running room in our all-flash business. And… we’re introducing new deduplication technology that should provide some further uplift in all-flash array, because there has been a gap in our portfolio.”

Comparing HPE to other AFA suppliers we see Dell EMC with five AFA products: XtremIO, DSSD, all-flash VMAX and Unity, and an all-flash Isilon product. NetApp has three: EF series, SolidFire, and all-flash FAS. Pure has its FlashArray and is developing FlashBlade. HPE has the single all-flash 3PAR product. This looks to be insufficient to cover the developing AFA use cases such as high-speed analytics, scale-out cloud service provision and file access.

There is no sense from HPE that it recognises this as a problem area. We see here a reflection of a view that HPE has a proliferation of server products and a relative scarcity of successful storage products. Historically in HPE, server and storage business units have followed separate paths. With them both inside Antonio Neri’s Enterprise Systems organisation, any such separateness should diminish.

 

Read more


Hewlett-Packard revenues shrink, just like the company

| MarketWatch

The separation that split Hewlett-Packard Co. into two smaller companies a year ago has done nothing to turn either company into the nimbler, faster growing entities Meg Whitman hoped for.

That was clear on Tuesday, when both Hewlett Packard Enterprise Co. and HP Inc. reported their fiscal fourth-quarter results. HPE, focused on the corporate computing market selling servers, networking equipment and services, reported a 7% drop in revenue in the fourth quarter, and annual revenue of $50.1 billion was down 4% from fiscal 2015, adjusted for the split. HPE claimed that revenue was up 2% year-over-year, when adjusted for divestitures and currency.

The PC- and printing-focused company fared slightly better in the fourth quarter, with 2% growth in revenue, based on strong sales of some of the company’s new PC products, but overall for fiscal 2016, revenue fell 6% to $48.2 billion.

So both companies are suffering from shrinking revenue while undergoing massive layoffs and stock buybacks that have placated investors, but done little to actually strengthen the companies while forcing restructuring charges. Still, with the strong stock performance of the past year on their side, HP leaders spoke optimistically of their companies.

For HP Enterprise, that course is to get even smaller, with plans announced earlier this year to spin off and merge its services and software businesses with CSC and Micro Focus, respectively. Again, Whitman is promising that these deals will enable HP Enterprise to be “more nimble, provide cutting-edge solutions, play in higher-growth markets, and have an enhanced financial profile” sometime in 2017, when the deals are both complete.

But both HPs are still in too many legacy, slower-growing businesses to believe growth is around the corner. HP Inc.’s year-over-year 4% gain in overall PC sales, due to more competitive PCs, was overshadowed by the printing business, where sales of supplies (including printer ink) dropped 12%. HP executives pointed out that this decline had abated from a steeper drop of 18% in supplies in the previous third quarter, but slower shrinkage didn’t satisfy investors, and shares were down 2.3% in after-hours trading.

HP Enterprise is still burdened with a legacy server business that declined 6%, even as its high-performance computing business is growing with help on the way from the acquisition of SGI Corp. Whitman has bet on companies eventually moving to hybrid cloud structures, a mixture of on-premises and remote servers, but she admitted Tuesday that HPE is “definitely seeing impact” from potential clients choosing public cloud-computing environments.

If HP Inc. can’t get both its legacy consumer businesses to grow at the same time, and HP Enterprise can’t convince companies to at least partly eschew public cloud services like Amazon Web Services, growth will be nearly impossible for either company. That only leaves Whitman and Weisler with more spinouts, divestitures, layoffs and stock buybacks to distract from their shrinking revenue.

Read more