Hybrid Cloud

Hybrid Cloud Gains in Popularity, Survey Finds

| Light Reading

The hybrid model of cloud computing is gaining more popularity in the enterprise, as businesses move more workloads and applications to public cloud infrastructures and away from private deployments.

Those are some of the findings from RightScale’s annual “State of the Cloud” report, which the company released Wednesday. It’s based on interviews with 1,000 IT professionals, with 48% of them working in companies with more than 1,000 employees.

The biggest takeaway from the report is that enterprises and their IT departments are splitting their cloud dollars between public and private deployments, and creating demands for a hybrid approach.

“The 2017 State of the Cloud Survey shows that while hybrid cloud remains the preferred enterprise strategy, public cloud adoption is growing while private cloud adoption flattened and fewer companies are prioritizing building a private cloud,” according to a blog post accompanying the report. “This was a change from last year’s survey, where we saw strong gains in private cloud use.”

Specifically, 85% of respondents reported having a multi-cloud, hybrid strategy, and that’s up from the 82% who reported a similar approach in 2016. At the same time, private cloud adoption dropped from 77% in 2016 to 72% in 2017.

In the survey, 41% of respondents reported running workloads in public clouds, while 38% said they run workloads in private clouds. In large enterprises, those numbers reverse, with 32% of respondents running workloads in public clouds, and 43% running workloads within private infrastructures.

“It’s important to note that the workloads running in private cloud may include workloads running in existing virtualized environments or bare-metal environments that have been ‘cloudified,’ ” according to the report.

When it comes to adopting cloud technologies and services, there are fewer barriers and concerns this year than in 2016. The lack of resources and expertise to implement a cloud strategy was still the top concern.

In addition, the report notes that at every level of cloud expertise, the “Top 5 Challenges” include a substantial concern with managing costs. One vehicle for managing costs is to apply data reduction technologies to your cloud deployment. Permabit VDO can be applied to public and/or private clouds quickly and easily, enabling cost reductions of 50% or more in on-premise, in-transit, and public cloud deployments.

Read more


Why Deduplication Matters for Cloud Storage

| dzone.com

Most people assume cloud storage is cheaper than on-premise storage. After all, why wouldn’t they? You can rent object storage for $276 per TB per year or less, depending on your performance and access requirements. Enterprise storage costs between $2,500 and $4,000 per TB per year, according to analysts at Gartner and ESG.

This comparison makes sense for primary data, but what happens when you make backups or copies of data for other reasons in the cloud? Imagine that an enterprise needs to retain 3 years of monthly backups of a 100TB data set. In the cloud, this easily equates to 3.6 PB of raw backup data, or a monthly bill of over $83,000. That’s about $1 million a year before you even factor in any data access or retrieval charges.
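The arithmetic behind those figures is easy to reproduce. Here is a minimal back-of-the-envelope sketch in Python, assuming full (non-incremental) monthly backups at the standard-tier object storage price quoted above; real bills also vary with region, tier, and access charges.

```python
# Back-of-the-envelope model of the backup-retention scenario described above.
# Assumes object storage at roughly $23/TB per month ($276/TB per year);
# request and retrieval charges are ignored here.

PRIMARY_TB = 100            # size of the protected data set
MONTHS_RETAINED = 36        # 3 years of monthly full backups
PRICE_PER_TB_MONTH = 23.0   # standard-tier object storage, USD

raw_backup_tb = PRIMARY_TB * MONTHS_RETAINED        # 3,600 TB = 3.6 PB
monthly_bill = raw_backup_tb * PRICE_PER_TB_MONTH   # roughly $83,000/month
annual_bill = monthly_bill * 12                     # roughly $1 million/year

print(f"Raw backup data:      {raw_backup_tb / 1000:.1f} PB")
print(f"Monthly storage bill: ${monthly_bill:,.0f}")
print(f"Annual storage bill:  ${annual_bill:,.0f}")
```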

That is precisely why efficient deduplication is hugely important for both on-premise and cloud storage, especially when enterprises want to retain their secondary data (backup, archival, long-term retention) for weeks, months, and years. Cloud storage costs can add up quickly, surprising even astute IT professionals, especially as data sizes grow with web-scale architectures, data gets replicated, and they discover it can’t be deduplicated in the cloud.

The Promise of Cloud Storage: Cheap, Scalable, Forever Available

Cloud storage is viewed as cheap, reliable and infinitely scalable – which is generally true. Object storage like AWS S3 is available at just $23/TB per month for the standard tier, or $12.50/TB for the Infrequent Access tier. Many modern applications can take advantage of object storage. Cloud providers offer their own file or block options, such as AWS EBS (Elastic Block Storage) that starts at $100/TB per month, prorated hourly. Third-party solutions also exist that connect traditional file or block storage to object storage as a back-end.

Even AWS EBS, at $1,200/TB per year, compares favorably to on-premise solutions that cost 2-3 times as much and require high upfront capital expenditures. To recap, enterprises are gravitating to the cloud because the OPEX costs are significantly lower, there’s minimal up-front cost, and you pay as you go (vs. traditional storage, where you have to buy far ahead of actual need).
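To put those monthly per-TB rates on the same annual footing as the Gartner/ESG on-premise figures, a quick comparison can be scripted. The prices below are the list prices quoted above; the tier labels are assumptions, and real-world pricing varies by region and discounting.

```python
# Annualize the per-TB monthly list prices quoted above and compare them with
# the $2,500-$4,000/TB/year on-premise range cited from Gartner and ESG.

monthly_price_per_tb = {
    "S3 Standard":          23.00,
    "S3 Infrequent Access": 12.50,
    "EBS block storage":   100.00,
}

ON_PREM_LOW, ON_PREM_HIGH = 2500, 4000  # USD per TB per year

for name, per_month in monthly_price_per_tb.items():
    per_year = per_month * 12
    print(f"{name:<22} ${per_year:>7,.0f}/TB/yr  "
          f"(~{ON_PREM_LOW / per_year:.0f}x-{ON_PREM_HIGH / per_year:.0f}x "
          f"cheaper than on-premise)")
```

At these list prices, EBS works out to roughly 2-3x cheaper than the on-premise range, which matches the comparison above.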

How Cloud Storage Costs Can Get Out of Hand: Copies, Copies Everywhere

The direct cost comparison between cloud storage and traditional on-premise storage can distract from managing storage costs in the cloud, particularly as more and more data and applications move there. There are three components to cloud storage costs to consider:

  • Cost for storing the primary data, either on object or block storage
  • Cost for any copies, snapshots, backups, or archive copies of data
  • Transfer charges for data

We’ve covered the first one. Let’s look at the other two.

Copies of data. It’s not how much data you put into the cloud — uploading data is free, and storing a single copy is cheap. It’s when you start making multiple copies of data — for backups, archives, or any other reason — that costs spiral if you’re not careful. Even if you don’t make actual copies of the data, applications or databases often have built-in data redundancy and replicate data (or in database parlance, a Replication Factor).

In the cloud, each copy you make of an object incurs the same cost as the original. Cloud providers may do some dedupe or compression behind the scenes, but this isn’t generally credited back to the customer. For example, in a consumer cloud storage service like Dropbox, if you make one copy or ten copies of a file, each copy counts against your storage quota.

For enterprises, this means data snapshots, backups, and archived data all incur additional costs. As an example, AWS EBS charges $0.05/GB per month for storing snapshots. While the snapshots are compressed and only store incremental data, they’re not deduplicated. Storing a snapshot of that 100 TB dataset could cost $60,000 per year, and that’s assuming it doesn’t grow at all.

Data access. Public cloud providers generally charge for data transfer either between cloud regions or out of the cloud. For example, moving or copying a TB of AWS S3 data between Amazon regions costs $20, and transferring a TB of data out to the internet costs $90. Combined with GET, PUT, POST, LIST and DELETE request charges, data access costs can really add up.
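Using the per-TB transfer prices quoted above, a rough estimator for a data-movement scenario might look like the following sketch. The per-request rate is an illustrative placeholder, not a published price, and the scenario numbers are hypothetical.

```python
# Rough transfer-cost estimator using the per-TB prices quoted above.
# The per-request rate below is an illustrative placeholder only.

INTER_REGION_PER_TB = 20.0   # copy/move between AWS regions, USD per TB
EGRESS_PER_TB = 90.0         # transfer out to the internet, USD per TB

def transfer_cost(tb_between_regions, tb_to_internet,
                  request_count, price_per_1k_requests=0.005):
    """Estimate data-movement charges for one billing period."""
    data_cost = (tb_between_regions * INTER_REGION_PER_TB
                 + tb_to_internet * EGRESS_PER_TB)
    request_cost = (request_count / 1000) * price_per_1k_requests
    return data_cost + request_cost

# Hypothetical month: replicate 10 TB to a second region, serve 2 TB to end
# users, and issue 5 million GET/PUT requests along the way.
print(f"Estimated transfer bill: ${transfer_cost(10, 2, 5_000_000):,.2f}")
```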

Why Deduplication in the Cloud Matters

Cloud applications are distributed by design and are deployed on non-relational, massively scalable databases as a standard. In non-relational databases, much of the data is redundant before you even make a copy: there are common blocks and objects, and databases like MongoDB or Cassandra use a replication factor (RF) of 3 to ensure data integrity in a distributed cluster, so you start out with three copies.

Backups or secondary copies are usually created and maintained via snapshots (for example, using EBS snapshots as noted earlier). The database architecture means that when you take a snapshot, you’re really making three copies of the data. Without any deduplication, this gets really expensive.
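A simple way to see how quickly this multiplies is to model the raw footprint of a replicated data set plus its snapshots, with and without data reduction. The sketch below treats each snapshot as a full copy for simplicity (real snapshots are incremental, as noted earlier), and the 50% reduction ratio is an assumption for illustration, not a measured result.

```python
# Model the raw footprint of a distributed database with replication factor 3
# plus retained snapshot copies, with and without data reduction.

LOGICAL_TB = 100          # logical size of the data set
REPLICATION_FACTOR = 3    # e.g., a Cassandra or MongoDB cluster with RF=3
SNAPSHOT_COPIES = 12      # retained snapshots, modeled here as full copies
REDUCTION_RATIO = 0.5     # assumed 50% savings from dedupe + compression

raw_tb = LOGICAL_TB * REPLICATION_FACTOR * (1 + SNAPSHOT_COPIES)
reduced_tb = raw_tb * REDUCTION_RATIO

print(f"Raw footprint without data reduction: {raw_tb:,} TB")
print(f"Footprint with assumed 50% reduction: {reduced_tb:,.0f} TB")
```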

Today there are solutions to the public cloud deduplication and data reduction conundrum. Permabit VDO can be easily deployed in public and/or private cloud solutions. Take a look at the following blog from Tom Cook http://permabit.com/data-efficiency-in-public-clouds/ or, for the technical details, the one from Louis Imershein http://permabit.com/effective-use-of-data-reduction-in-the-public-cloud/. Both provide examples and details on why and how to drive deduplication and compression solutions in a public cloud.

 

 

Read more


Effective use of data reduction in the Public Cloud

| permabit.com

Permabit CEO Tom Cook recently wrote about how data reduction technology can simplify the problems associated with provisioning adequate storage resources in the public cloud, while balancing performance and efficiency. The good news is, taking advantage of data reduction software in the public cloud is easier than ever. For example, Permabit’s Virtual Data Optimizer (VDO) is a pre-packaged software solution that installs and deploys in minutes on Red Hat Enterprise…

Read more


Cloud IT Spending to Edge Out Traditional Data Centers by 2020

| Datamation

The IT solutions market for cloud providers has nowhere to go but up.

A new forecast from IDC predicts that cloud IT infrastructure spending on servers, storage and network switches will jump 18.2 percent this year to reach $44.2 billion. Public clouds will generate 61 percent of that amount and off-premises private clouds will account for nearly 15 percent.

IDC research director Natalya Yezhkova said in a statement that over the next few quarters, “growth in spending on cloud IT infrastructure will be driven by investments done by new hyperscale data centers opening across the globe and increasing activity of tier-two and regional service providers.”

Additionally, businesses are also growing more adept at floating their own private clouds, she said. “Another significant boost to overall spending on cloud IT infrastructure will be coming from on-premises private cloud deployments as end users continue gaining knowledge and experience in setting up and managing cloud IT within their own data centers.”

Despite a 3 percent decline in spending on non-cloud IT infrastructure during 2017, the segment will still command the majority (57 percent) of all revenues. By 2020, however, the tables will turn.

Combined, the public and private data center infrastructure segments will reach a major tipping point in 2020, accounting for nearly 53 percent of the market, compared to just over 47 percent for traditional data center gear. Public cloud operators and private cloud environments will drive $48.1 billion in IT infrastructure sales by that year.

Indeed, the cloud computing market is growing by leaps and bounds.

The shifting sands are both predictable and evolutionary. Dominant data center spending has been platform-specific and somewhat captive. As public cloud providers demonstrated, efficient data center operations can be built on white box platforms and high-performance open-source software stacks that minimize costs and eliminate software bloat. Corporate IT professionals didn’t miss this evolution and have begun developing similar IT infrastructures. They are sourcing white box platforms, which are much less costly than branded platforms, and combining them with open-source software including operating systems and software-defined storage with data reduction that drives down storage consumption. The result is a more efficient data center with less costly hardware and open-source software that drives down acquisition and operating costs.

The shift is occurring, and the equilibrium between public and private clouds will change, not just because of hardware but increasingly because of open-source software and the economic impact it has on building high-density data centers that run more efficiently than branded platforms.

Read more


Enterprise storage in 2017: trends and challenges

| Information Age

Information Age previews the storage landscape in 2017 – from the technologies that businesses will implement to the new challenges they will face.

The enthusiastic outsourcing to the cloud by enterprise CIOs in 2016 will start to tail off in 2017, as finance directors discover that the high costs are not viable long-term. Board-level management will try to reconcile the alluring simplicity they bought into against the lack of visibility into hardware and operations.

As enterprises attempt to solve the issue of maximising a return for using the cloud, many will realise that the arrangement they are in may not be suitable across the board and seek to bring some of their data back in-house.

It will sink in that using cloud for small data sets can work really well in the enterprise, but as soon as the volume of data grows to a sizeable amount, the outsourced model becomes extremely costly.

Enterprises will extract the most value from their IT infrastructures through hybrid cloud in 2017, keeping a large amount of data on-premise using private cloud and leveraging key aspects of public cloud for distribution, crunching numbers and cloud compute, for example.

‘The combined cost of managing all storage from people, software and full infrastructure is getting very expensive as retention rates on varying storage systems differ,’ says Matt Starr, CTO at Spectra Logic. ‘There is also the added pressure of legislation and compliance as more people want or need to keep everything forever.

‘We predict no significant uptick on storage spend in 2017, and certainly no drastic doubling of spend,’ says Starr. ‘You will see the transition from rotational to flash. Budgets aren’t keeping up with the rates that data is increasing.’

The prospect of a hybrid data centre will, however, trigger more investment eventually. The model pairs a more efficient capacity tier based on pure object storage at the drive level with a combination of high-performance HDD (hard disk drives) and SSD (solid state drives) above it.

Hybrid technology has been used successfully in laptops and desktop computers for years, but it’s only just beginning to be considered for enterprise-scale data centres.

While the industry is in the very early stages of implementing this new method for enterprise, Fagan expects 70% of new data centres to be hybrid by 2020.

‘This is a trend that I expect to briskly pick up pace,’ he says. ‘As the need for faster and more efficient storage becomes more pressing, we must all look to make smart plans for the inevitable data.’

One “must have” is data reduction technology. By applying data reduction in the software stack, data density, cost, and efficiency all improve. If Red Hat Linux is part of your strategy, deploying Permabit VDO data reduction is as easy as plug in and go, and storage consumption, data center footprint, and operating costs can drop by 50% or more.
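As a rough illustration, here is a minimal sketch of what a “plug in and go” VDO setup might look like on a RHEL host with the VDO packages installed. The device name, logical size, and mount point are placeholders, and the exact commands can differ by release, so treat this as an outline rather than a procedure.

```python
# Hypothetical outline of creating and mounting a VDO volume on a RHEL host.
# Device names, sizes, and paths are placeholders; consult Red Hat's VDO
# documentation for the authoritative procedure on your release.
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Create a VDO volume on a spare block device, presenting a logical size
# larger than the physical device to benefit from dedupe and compression.
run(["vdo", "create", "--name=vdo_data",
     "--device=/dev/sdb", "--vdoLogicalSize=10T"])

# Put a filesystem on the VDO device and mount it like any other volume.
run(["mkfs.xfs", "-K", "/dev/mapper/vdo_data"])
run(["mount", "/dev/mapper/vdo_data", "/mnt/vdo_data"])

# Report how much physical space the reduced data actually consumes.
run(["vdostats", "--human-readable"])
```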

 

Read more


2016 Review Shows $148 billion Cloud Market Growing at 25% Annually

| News articles, headlines, videos

  New data from Synergy Research Group shows that across six key cloud services and infrastructure market segments, operator and vendor revenues for the four quarters ending September 2016 reached $148 billion, having grown by 25% on an annualized basis. IaaS & PaaS services had the highest growth rate at 53%, followed by hosted private cloud infrastructure services at 35% and enterprise SaaS at 34%. 2016 was notable as the year in which spend on cloud services overtook spend on cloud infrastructure hardware and software. In aggregate cloud service markets are now growing three times more quickly than cloud infrastructure hardware and software. Companies that featured the most prominently among the 2016 market segment leaders were Amazon/AWS, Microsoft, HPE, Cisco, IBM, Salesforce and Dell EMC.

Over the period Q4 2015 to Q3 2016 total spend on hardware and software to build cloud infrastructure exceeded $65 billion, with spend on private clouds accounting for over half of the total but spend on public cloud growing much more rapidly. Investments in infrastructure by cloud service providers helped them to generate almost $30 billion in revenues from cloud infrastructure services (IaaS, PaaS, hosted private cloud services) and over $40 billion from enterprise SaaS, in addition to supporting internet services such as search, social networking, email and e-commerce. UCaaS, while in many ways a different type of market, is also growing steadily and driving some radical changes in business communications.

“We tagged 2015 as the year when cloud became mainstream and I’d say that 2016 is the year that cloud started to dominate many IT market segments,” said Synergy Research Group’s founder and Chief Analyst Jeremy Duke. “Major barriers to cloud adoption are now almost a thing of the past, especially on the public cloud side. Cloud technologies are now generating massive revenues for technology vendors and cloud service providers and yet there are still many years of strong growth ahead.”

One way to improve the density and cost effectiveness of cloud deployments is to include scalable, high-performance data reduction technologies. If you are using Red Hat Enterprise Linux, including Permabit Virtual Data Optimizer (VDO) can drop costs by 50% or more and improve data density too!

Read more


Why the operating system matters even more in 2017

| Homepage

Operating systems don’t quite date back to the beginning of computing, but they go back far enough. Mainframe customers wrote the first ones in the late 1950s, with operating systems that we’d more clearly recognize as such today—including OS/360 from IBM and Unix from Bell Labs—following over the next couple of decades.

An operating system performs a wide variety of useful functions in a system, but it’s helpful to think of those as falling into three general categories.

First, the operating system sits on top of a physical system and talks to the hardware. This insulates application software from many hardware implementation details. Among other benefits, this provides more freedom to innovate in hardware because it’s the operating system that shoulders most of the burden of supporting new processors and other aspects of the server design—not the application developer. Arguably, hardware innovation will become even more important as machine learning and other key software trends can no longer depend on CMOS process scaling for reliable year-over-year performance increases. With the increasingly widespread adoption of hybrid cloud architectures, the portability provided by this abstraction layer is only becoming more important.

Second, the operating system—specifically the kernel—performs common tasks that applications require. It manages process scheduling, power management, root access permissions, memory allocation, and all the other low-level housekeeping and operational details needed to keep a system running efficiently and securely.

Finally, the operating system serves as the interface to both its own “userland” programs—think system utilities such as logging, performance profiling, and so forth—and applications that a user has written. The operating system should provide a consistent interface for apps through APIs (application programming interface) based on open standards. Furthermore, commercially supported operating systems also bring with them business and technical relationships with third-party application providers, as well as content channels to add other trusted content to the platform.

The computing technology landscape has changed considerably over the past couple of years. This has had the effect of shifting how we think about operating systems and what they do, even as they remain as central as ever. Consider changes in how applications are packaged, the rapid growth of computing infrastructures, and the threat and vulnerability landscape.

Containerization

Applications running in Linux containers are isolated within a single copy of the operating system running on a physical server. This approach stands in contrast to hypervisor-based virtualization in which each application is bound to a complete copy of a guest operating system and communicates with the hardware through the intervening hypervisor. In short, hypervisors virtualize the hardware resources, whereas containers virtualize the operating system resources. As a result, containers consume few system resources, such as memory, and impose essentially no performance overhead on the application.

Scale

Another significant shift is that we increasingly think in terms of computing resources at the scale point of the datacenter rather than the individual server. This transition has been going on since the early days of the web, of course. However, today we’re seeing the reimagining of high-performance computing “grid” technologies both for traditional batch workloads as well as for newer services-oriented styles.

Dovetailing neatly with containers, applications based on loosely coupled “microservices” (running in containers)—with or without persistent storage—are becoming a popular cloud-native approach. This approach, although reminiscent of Service Oriented Architecture (SOA), has demonstrated a more practical and open way to build composite applications. Microservices, through a fine-grained, loosely coupled architecture, allow an application architecture to reflect the needs of a single well-defined application function. Rapid updates, scalability, and fault tolerance can all be individually addressed in a composite application, whereas in traditional monolithic apps it’s much more difficult to keep changes to one component from having unintended effects elsewhere.

Security

All the security hardening, performance tuning, reliability engineering, and certifications that apply to the virtualized world still apply in the containerized one. And, in fact, the operating system shoulders a greater responsibility for providing security and resource isolation in a containerized and software-defined infrastructure world than in the case in which dedicated hardware or other software may be handling some of those tasks. Linux has been the beneficiary of a comprehensive toolbox of security-enforcing functionality built using the open source model, including SELinux for mandatory access controls, a wide range of userspace and kernel-hardening features, identity management and access control, and encryption.

Some things change, some don’t

Priorities associated with operating system development and operation have certainly shifted. The focus today is far more about automating deployments at scale than it is about customizing, tuning, and optimizing single servers. At the same time, there’s an increase in both the pace and pervasiveness of threats to a no longer clearly-defined security perimeter—requiring a systematic understanding of the risks and how to mitigate breaches quickly.

Add it all together and applications become much more adaptable, much more mobile, much more distributed, much more robust, and much more lightweight. Their placement, provisioning, and securing must become more automated. But they still need to run on something. Something solid. Something open. Something that’s capable of evolving for new requirements and new types of workloads. And that something is a (Linux) operating system.

Read more


More than 50 Percent of Businesses Not Leveraging Public Cloud

| tmcnet.com

While more than 50 percent of respondents are not currently leveraging public cloud, 80 percent plan on migrating more within the next year, according to a new study conducted by TriCore Solutions, the application management experts. As new streams of data are continuing to appear, from mobile apps to artificial intelligence, companies in the future will rely heavily on cloud and digital transformation to minimize complexity.

Here are some key results from the survey:

  • Public Cloud Considerations: Cloud initiatives are underway for companies in the mid-market up through the Fortune 500, though IT leaders continue to struggle with what to migrate, when to migrate, and how best to execute the process. More than half of those surveyed plan to migrate web applications and development systems to the public cloud in the next year, prioritizing these over other migrations. More than two thirds have 25 percent or less of their infrastructure in the public cloud, showing that public cloud still has far to go before it becomes the prevailing environment that IT leaders must manage. With increasingly complex hybrid environments, managed service providers will become a more important resource to help facilitate the process.
  • Running Smarter Still on Prem: Whether running Oracle EBS, PeopleSoft, or other ERP platforms, companies rely on ERP systems to run their businesses. Only 20 percent of respondents expect to migrate ERP systems to public cloud in the next year, indicating the importance of hybrid cloud environments for companies to manage business-critical operations on premise alongside other applications and platforms in the public cloud.
  • Prepping for Digital Transformation: With the increased amount of data in today’s IT environment – from machine data to social media data to transactional data and everything in between – the need for managed service providers to make sense of it all has never been more important. 53 percent of respondents plan on outsourcing their IT infrastructure in the future, and respondents anticipate a nearly 20 percent increase in applications being outsourced in the future, as well.

As worldwide spending on cloud continues to grow, and with the increased amount of data in today’s IT environment, IT leaders need to carefully consider the keys to IT success when migrating to a cloud-based environment. Understanding how to help businesses unlock and leverage the endless data available to them will drive IT success for managed service providers in 2017 and beyond.

 

Read more


Worldwide Enterprise Storage Market Sees Modest Decline in Third Quarter, According to IDC

| idc.com

Total worldwide enterprise storage systems factory revenue was down 3.2% year over year and reached $8.8 billion in the third quarter of 2016 (3Q16), according to the International Data Corporation (IDC) Worldwide Quarterly Enterprise Storage Systems Tracker. Total capacity shipments were up 33.2% year over year to 44.3 exabytes during the quarter. Revenue growth increased within the group of original design manufacturers (ODMs) that sell directly to hyperscale datacenters. This portion of the market was up 5.7% year over year to $1.3 billion. Sales of server-based storage were relatively flat, at -0.5% during the quarter and accounted for $2.1 billion in revenue. External storage systems remained the largest market segment, but the $5.4 billion in sales represented a decline of 6.1% year over year.

“The enterprise storage market closed out the third quarter on a slight downturn, while continuing to adhere to familiar trends,” said Liz Conner, research manager, Storage Systems. “Spending on traditional external arrays resumed its decline and spending on all-flash deployments continued to see good growth and helped to drive the overall market. Meanwhile the very nature of the hyperscale business leads to heavy fluctuations within the market segment, posting solid growth in 3Q16.”

Read more


WW Enterprise Storage Market Down 3% in 3Q16 From 3Q15

| storagenewsletter.com

Total WW enterprise storage systems factory revenue was down 3.2% year over year and reached $8.8 billion in 3Q16, according to the IDC Worldwide Quarterly Enterprise Storage Systems Tracker.

Total capacity shipments were up 33.2% year over year to 44.3 EBs during the quarter.

Revenue growth increased within the group of original design manufacturers (ODMs) that sell directly to hyperscale datacenters. This portion of the market was up 5.7% year over year to $1.3 billion.

Sales of server-based storage were relatively flat, at -0.5% during the quarter and accounted for $2.1 billion in revenue. External storage systems remained the largest market segment, but the $5.4 billion in sales represented a decline of 6.1% year over year.

“The enterprise storage market closed out the third quarter on a slight downturn, while continuing to adhere to familiar trends,” said Liz Conner, research manager, storage systems. “Spending on traditional external arrays resumed its decline and spending on all-flash deployments continued to see good growth and helped to drive the overall market. Meanwhile the very nature of the hyperscale business leads to heavy fluctuations within the market segment, posting solid growth in 3Q16.”

Read more