Data centers are pushing the boundaries of the possible, using new paradigms to operate efficiently in an environment that continually demands more power, more storage, more compute capacity… more everything. Operating efficiently and effectively in the land of “more” without more money requires increased data center optimization at all levels, including hardware and software, and even policies and procedures.
Although cloud computing, virtualization and hosted data centers are popular, most organizations still have at least part of their compute capacity in-house. According to a 451 Research survey of 1,200 IT professionals, 83 percent of North American enterprises maintain their own data centers. Only 17 percent have moved all IT operations to the cloud, and 49 percent use a hybrid model that integrates cloud or colocation hosts into their data center operations.
The same study says most data center budgets have remained stable, although the heavily regulated healthcare and finance sectors are increasing funding throughout data center operations. Among enterprises with growing budgets, most are investing in upgrades or retrofits to enable data center optimization and to support increased density.
As server density increases and the data center footprint shrinks, any gains may be taken up by the additional air handling and power equipment, including uninterruptible power supplies and power generators. In fact, data center energy usage is expected to increase by 81 percent by 2020, according to CIO magazine.
Identifying and decommissioning unused servers is often a challenge during a data center optimization project, as is right-sizing provisioning.
Virtualization makes it easy to spin up resources as needed, but it also makes tracking those resources harder. The result is that unused servers may be running because no one is certain they’re not being used. A study by the Natural Resources Defense Council and Anthesis reports that up to 30 percent of servers are unused, but still running.
A similar principle extends to storage. While data deduplication (removing duplicate copies of data) is widely used, overcrowded storage remains an issue for small to medium-sized enterprises (SMEs). Deduplication can free much-needed storage space: combined with compression, it can shrink data storage consumption by up to 85 percent. This not only addresses the budget issues mentioned above but also improves data density, much like the server density mentioned earlier. Saving money with less storage while increasing data density at the same time looks like a win-win.
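At its simplest, deduplication works by hashing content and keeping only one copy per unique hash. The minimal sketch below shows file-level deduplication in Python; the file names and contents are made-up stand-ins, and real systems typically deduplicate at the block level with far more bookkeeping.

```python
import hashlib

def dedupe(files):
    """Keep one copy per unique content hash; return (kept, reclaimed_bytes).

    `files` maps a file name to its byte content -- a stand-in for
    real disk I/O in this illustrative example.
    """
    seen = {}        # content hash -> first file name with that content
    reclaimed = 0    # bytes freed by dropping duplicate copies
    for name, data in files.items():
        digest = hashlib.sha256(data).hexdigest()
        if digest in seen:
            reclaimed += len(data)  # duplicate: its space can be freed
        else:
            seen[digest] = name
    return seen, reclaimed

# Hypothetical files: two are byte-for-byte identical
files = {
    "report_v1.doc": b"quarterly numbers " * 100,
    "report_copy.doc": b"quarterly numbers " * 100,  # exact duplicate
    "notes.txt": b"unique content",
}
kept, reclaimed = dedupe(files)
print(len(kept), reclaimed)  # 2 unique files; 1800 bytes reclaimed
```

Compression then squeezes the remaining unique data further, which is how the two techniques together can reach reductions as large as the 85 percent figure cited above.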
If data center optimization is concerned with saving money, managers also should examine their purchasing programs. NaviSite looked for cost efficiencies within volume projects, examining large commodity items like cabinets, racks, cabling and plug strips, and eliminating middlemen whenever possible. For big purchases, the company went directly to manufacturers in China and sought out innovative young technology vendors, working with them to design specifications that significantly lowered the price.
Data center optimization, clearly, extends beyond hardware to become a system-wide activity. It is the key to providing more power, more capacity and more storage without requiring more money.