Posts Tagged data center cooling

Cooling Capabilities for the Data Center and Beyond

Posted on January 10, 2013  |  No Comments

An out-of-date, inefficient cooling system has a significant impact on your bottom line, no matter what business you’re in. Schneider Electric can upgrade your facility with a full range of Business-wise, Future-driven™ cooling solutions that are scalable, flexible, reliable, and efficient.

EcoBreeze: Free Cooling

Posted on January 10, 2013  |  No Comments

Are cooling costs killing you? Schneider Electric has the solution: EcoBreeze — the industry’s most innovative, energy efficient and cost-effective approach to free cooling.

Building the Data Center of the Future: Switch Communications

Posted on January 10, 2013  |  No Comments

Are you wondering what the data center of the future will look like? Switch Communications already knows. Switch is breaking new ground with a 407,000 square foot co-location facility designed to handle super-high densities of up to 1,500 watts per square foot. And they’re relying on Schneider Electric’s broad portfolio of products to make it possible, from APC’s Symmetra Megawatt UPS and Square D’s switchgear to our building management system and Pelco’s security cameras.

What’s Inside The Datapod System

Posted on November 26, 2012  |  No Comments

The Datapod System utilizes Schneider Electric’s best-in-class power and cooling products. The APC infrastructure suite of power and cooling products is recognized as a world leader for reliability and performance.

Power: The intelligence of the Datapod System is that each component is scalable and modular, beginning with the award-winning APC Symmetra line of uninterruptible power supplies (UPS). These units start at 40kW and can scale up to 2MW.

Cooling: Free-air and air/water economizer modes, with in-row, hot-aisle containment, and cold-aisle containment options; cooling of up to 33kW per rack is possible. Datapod enables end users to save up to 40% on cooling costs.

Racks: Up to 50U available, network or standard. The Datapod System can incorporate 20 ft or 40 ft intermodal-compliant ISO containers. The insulated, anti-static design provides the optimal IT environment.

Pod Management: The Datapod System is completely managed by a suite of software that allows the user to manage, control, and troubleshoot potential problems remotely, via smartphone or tablet computer. The IT assets, fire control systems, and electrical and cooling systems are all monitored via standard IP networks.

Universal Networking Services is proud to be the North America Authorized Agent for Datapod™. Our partnership with Datapod™ allows us to deliver a unique alternative to the traditional bricks-and-mortar data center installation, one that maximizes the data center community’s investment and increases the reliability and availability of their mission-critical facilities. Datapod is a unique, modular data center system that incorporates innovative design and cutting-edge mechanical and electrical engineering. Datapod has extended the concept of modular data center design to include critical site infrastructure, such as modular generators, chillers, and deployment services, thereby providing a complete infrastructure solution for data centers. By enabling data center users to deploy when they like, where they like, and for as long as they like, the Datapod system offers performance superior to that of a “bricks and mortar” data center facility, deploys faster, and comes at a more cost-effective price point.

Please feel free to email us at info@datapodnorthamerica.com or contact us to learn more.

Don’s Corner: In-Row or Perimeter Cooling. Is There Really a Difference, and If So, Which Is Better?

Posted on October 4, 2012  |  No Comments

Don Melchert, Critical Facility Specialist

I love two-part questions, don’t you?  They tell me that the person asking probably has some experience with the subject and is asking a loaded question.  Usually the inquiry stems from an experience that didn’t go all too well, or it went much better than planned and the question is asked more for validation than argument.  I’d much prefer the latter, of course, but in data centers, that’s usually not the case, is it?

With any subject that’s controversial, you’ll never win everyone over.  In the argument over in-row or perimeter cooling, the subject is more volatile still, especially if the inquisitor’s job is to maintain an antiquated (read: dinosaur) perimeter cooling system.  You just know they’re thinking to themselves, “If that system goes away…now what?”  Ever see the latest version of Charlie and the Chocolate Factory?  When the machines took over production, Charlie’s dad was laid off, but in the end he came back to work as a repair technician on the machine that replaced him.  What made the factory decide to replace a human with a machine?  Was it production, quality, or floor space?  If you’re a co-lo provider, floor space is high-dollar real estate!  So, for this article, let’s use floor space as our filter, and to keep the playing field even, we’ll assume a chilled water system.

After some research, and I’m putting that lightly, I’ve found the Liebert DS to be the most commonly deployed air conditioning system for data center use.  Really, I’ve seen them all over the globe!  And why shouldn’t they be the most common?  They’re easy to train on, so you can get training almost anywhere; they have a long service history, so MTTF rates are pretty accurate; and they’re supported by a ton of people, so MTTR rates are well documented, too.  Of course, they have to be shut down to make most repairs, but that topic’s for another discussion.  Even though they’re the most common, the Liebert DS, and perimeter cooling in general, is losing ground to more efficient, close-coupled, in-row and overhead solutions.

Floor space is the name of the game, so what kind of real estate does the most commonly found perimeter unit gobble up?  Simple answer: 21 sq ft.  I said “simple answer” because I made a comment above about having to turn them off for repairs, which is why I found that most perimeter cooling believers had more than one Liebert in the room.  Even without a calculator, that’s 42 sq ft, folks, and that’s only at N+1 redundancy.  Within that 21 sq ft area, a Liebert DS can cool up to 42kW of critical load.  Not too shabby, but if you have any kind of redundancy requirement, you’re trying to get the most out of that space.  With a co-location facility, the question becomes: how many customers can’t I fit into the room because I’m forced to give up 21 sq ft of real estate?  That kind of talk doesn’t fly too well in a boardroom.  So what’s the alternative?  Let’s look at one: APC’s in-row, close-coupled cooling unit, the InRow RC (IRRC).

The InRow RC takes up only about 3.5 sq ft, but it makes up for its diminutive nature by removing 18kW of heat load.  With reference to floor space (…and this is where, somewhere out there, my high school geometry teacher is laughing her evil laugh), you’d need a bunch of IRRCs to equal the amount of real estate consumed by a Liebert DS.  The answer is 6, Ms. Davidson (flashback to 9th grade).  With 6 IRRCs, one could cool up to 108kW of critical load!  Before you ask: no, you won’t need to make them redundant, as the most common failure items, the fans and power supplies, are all hot-swappable.  Here’s something else to chew on: the IRRCs don’t have to be right next to each other to get the job done.  At about half a rack wide, they can be spread around the room as needed, but you’ll need a Sawzall and a darn good explanation afterward if you try to do the same thing with a legacy-style perimeter unit!  Please, if you do this to a perimeter unit, invite me out and I’ll buy lunch.

Knowing that I’d have to prove my math, I worked the numbers backward and concluded that with 108kW of critical load, one would need roughly 2.5 Liebert DS units to remove that much heat, or step up to the next size unit.
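
For anyone who wants to check the arithmetic, here is a quick back-of-the-envelope sketch in Python using only the figures quoted above (21 sq ft and 42kW for the Liebert DS, 3.5 sq ft and 18kW for the InRow RC); the numbers, not the code, are the point.

```python
# Floor-space math from the discussion above: one Liebert DS perimeter unit
# versus APC InRow RC (IRRC) units occupying the same footprint.

LIEBERT_FOOTPRINT_SQFT = 21.0   # floor space per Liebert DS
LIEBERT_CAPACITY_KW = 42.0      # cooling capacity per Liebert DS

IRRC_FOOTPRINT_SQFT = 3.5       # floor space per InRow RC
IRRC_CAPACITY_KW = 18.0         # cooling capacity per InRow RC

# How many IRRCs fit in the footprint of one perimeter unit?
irrc_count = LIEBERT_FOOTPRINT_SQFT / IRRC_FOOTPRINT_SQFT       # -> 6.0

# Total cooling available from that same 21 sq ft.
irrc_total_kw = irrc_count * IRRC_CAPACITY_KW                   # -> 108.0

# Working backward: Liebert DS units needed to remove 108 kW.
lieberts_needed = irrc_total_kw / LIEBERT_CAPACITY_KW           # -> ~2.57

print(f"IRRCs in one Liebert DS footprint: {irrc_count:.0f}")
print(f"Cooling from that footprint: {irrc_total_kw:.0f} kW vs {LIEBERT_CAPACITY_KW:.0f} kW")
print(f"Liebert DS units needed for 108 kW: {lieberts_needed:.2f}")
```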

That’s where it really strikes home for me, personally.  After performing a CFEP, I’m often challenged with explaining why it’s important to let go of the ancient ways and consider converting from a perimeter-cooled space to a modern, close-coupled solution.  If floor space is the driving factor, then the math, for me, drives the point home fairly well.  If a co-lo data center can grow the sellable capacity of the same footprint from 42kW to 108kW simply by becoming more efficient in its cooling architecture, maybe Charlie Bucket’s dad might not be laid off after all.  In today’s job market, that means a lot.

Don Melchert

Make Your Data Center More Efficient: TradeOff™ Tools

Posted on April 25, 2012  |  No Comments

APC TradeOff Tools™ are web-based applications with easy-to-use interfaces designed for use in the early stages of data center concept and design development. By enabling data center professionals to experiment with various scenarios regarding virtualization, efficiency, power sizing, capital costs, and other key design issues, APC TradeOff Tools break down major data center planning decisions into a series of smaller, more manageable decisions. Use of these tools helps validate, through modeling, the overall design of a data center.

What are TradeOff Tools?

TradeOff Tools are simple, interactive tools, based on data and science, that make it easy to vary parameters, experiment with “what if” scenarios and make tradeoffs during data center planning.

  • Simple, automated tools to support specific planning decisions
  • Modeling of complex system interactions, based on data and science
  • One-screen, standardized user interface
  • Instant output allows for rapid creation of “what if” scenarios

When should they be used?

Used early in the planning process, TradeOff Tools help avoid planning roadblocks by enabling informed and accurate decisions.

How do they help in planning a data center?

TradeOff Tools help show quantifiable, tangible benefits of implementing certain technologies and justify project decisions.

Video Tutorial Presentation of APC TradeOff Tools™

Data Center Efficiency Calculator (Click Here To Download):

Impact of alternative power and cooling approaches on energy costs.

The purpose of this tool is to show how various design decisions and operating conditions affect the efficiency and electrical costs of a typical generic data center.  As the user inputs details regarding the power and cooling configuration, results are calculated based upon a tested and validated three-parameter model.

Profiles a data center and calculates the resulting efficiency and electrical cost based on data center characteristics. Users can then understand the impact each key data center decision has on the data center’s efficiency.
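
As a rough illustration of what a model like this can look like, here is a minimal Python sketch in which infrastructure loss has a fixed (no-load), proportional, and square-law component; the coefficients are invented placeholders for illustration, not the calibrated values behind the tool.

```python
# Hypothetical three-parameter loss model: total infrastructure loss is a
# fixed (no-load) term, a term proportional to load, and a square-law term.
# Coefficients below are illustrative placeholders only.

def infrastructure_loss(load, fixed=0.06, proportional=0.05, square_law=0.03):
    """Loss as a fraction of rated capacity at a given IT load fraction."""
    return fixed + proportional * load + square_law * load ** 2

def dc_efficiency(load):
    """Fraction of input power that reaches the IT load."""
    return load / (load + infrastructure_loss(load))

for load in (0.25, 0.50, 0.75, 1.00):
    print(f"{load:.0%} IT load -> data center efficiency {dc_efficiency(load):.1%}")
```

Note how efficiency drops at partial load, since the fixed losses persist regardless of how much IT load is present; this is one reason rightsizing figures so prominently in data center planning.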

Cooling Economizer Mode PUE Calculator (Click Here To Download):

Impact of geography and cooling characteristics on PUE, energy cost, and carbon emissions.

The purpose of this tool is to compare seven common cooling architectures and demonstrate their expected annual PUE, energy cost, and carbon emissions.  Results are calculated as the user enters details such as the data center location, the power and cooling configuration, the IT inlet temperature, the % load, and the type of power and lighting.
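
The core arithmetic behind an annualized PUE figure is straightforward; the sketch below shows one hedged formulation in which the year is split between economizer and mechanical cooling hours. All hours, overheads, and tariffs are invented placeholders, not outputs of the Schneider Electric tool.

```python
# Illustrative annualized PUE: facility energy divided by IT energy, with
# the year split between economizer ("free cooling") and mechanical hours.
# Every number here is an assumption for illustration.

IT_LOAD_KW = 500.0
HOURS_PER_YEAR = 8760
ECONOMIZER_HOURS = 5000                 # depends heavily on geography
MECHANICAL_HOURS = HOURS_PER_YEAR - ECONOMIZER_HOURS

OVERHEAD_ECONOMIZER = 0.25   # facility kW per IT kW: fans and pumps only
OVERHEAD_MECHANICAL = 0.60   # facility kW per IT kW: compressors running

it_kwh = IT_LOAD_KW * HOURS_PER_YEAR
facility_kwh = it_kwh + IT_LOAD_KW * (
    ECONOMIZER_HOURS * OVERHEAD_ECONOMIZER
    + MECHANICAL_HOURS * OVERHEAD_MECHANICAL
)

print(f"Annualized PUE: {facility_kwh / it_kwh:.2f}")
print(f"Annual energy cost at $0.10/kWh: ${facility_kwh * 0.10:,.0f}")
```

Moving the same data center to a location with more economizer hours lowers the blended overhead, which is exactly the geography effect the tool quantifies.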

UPS Efficiency Comparison Calculator (Click Here To Download):

Impact of UPS efficiencies on energy costs and carbon footprint.

The purpose of this tool is to compare the efficiencies of two UPS systems and to show the impact these efficiencies have on electricity cost and carbon footprint.  UPSs may be selected from a pull-down list, or users can define their own UPS (Schneider Electric or other vendor).  Pre-populated data was obtained by curve fitting to measured efficiency data.  All measurements were taken in normal operating mode, at typical environmental conditions, with nominal electrical input and a balanced resistive load (PF = 1.0) on the output.
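
The cost side of the comparison reduces to dividing the IT load by each UPS’s efficiency. The sketch below holds efficiency constant for simplicity, whereas the tool curve-fits efficiency against load; the efficiencies, tariff, and emission factor are illustrative assumptions.

```python
# Illustrative UPS comparison: the same IT load served through two UPSs of
# different (assumed) efficiency. Real efficiency varies with load.

IT_LOAD_KW = 200.0
HOURS_PER_YEAR = 8760
TARIFF = 0.10                 # assumed $/kWh
GRID_KG_CO2_PER_KWH = 0.5     # assumed grid emission factor

def annual_input_kwh(load_kw, efficiency):
    """Utility energy drawn to deliver load_kw through the UPS for a year."""
    return load_kw / efficiency * HOURS_PER_YEAR

legacy = annual_input_kwh(IT_LOAD_KW, 0.90)
modern = annual_input_kwh(IT_LOAD_KW, 0.96)
saved_kwh = legacy - modern

print(f"Savings: {saved_kwh:,.0f} kWh/yr, ${saved_kwh * TARIFF:,.0f}/yr, "
      f"{saved_kwh * GRID_KG_CO2_PER_KWH / 1000:,.1f} t CO2/yr")
```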

Data Center Carbon Calculator (Click Here To Download):

Impact of changes in data center efficiency on energy costs and carbon footprint.

The purpose of this tool is to show how “green” a data center is by converting energy usage rates into carbon emissions. The tool illustrates how hypothetical changes to a data center’s location, efficiency, and power load can impact carbon dioxide emissions and the electric bill.

Illustrates how changes to a data center’s location, efficiency, and power load can impact carbon dioxide emissions and the electric bill. This provides management with a general indication of how “green” their data center is today and how “green” it could be.
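
The conversion at the heart of a calculator like this is a single multiplication: annual kilowatt-hours times the grid’s carbon emission factor, which varies widely by location. The factors below are illustrative placeholders for two hypothetical grids.

```python
# Illustrative carbon conversion: the same annual energy consumption on two
# hypothetical grids with different (assumed) emission factors.

ANNUAL_ENERGY_KWH = 6_000_000

GRID_FACTORS = {               # kg CO2 per kWh, assumed values
    "Location A (coal-heavy grid)": 0.90,
    "Location B (hydro-heavy grid)": 0.15,
}

for location, kg_per_kwh in GRID_FACTORS.items():
    tonnes = ANNUAL_ENERGY_KWH * kg_per_kwh / 1000.0
    print(f"{location}: {tonnes:,.0f} t CO2/yr")
```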

Data Center Design Planning Calculator (Click Here To Download):

Impact of physical infrastructure technology and growth plan strategies on key design parameters.

This tool allows key decision makers to analyze key design parameters, evaluate tradeoffs, and make decisions early, helping them avoid costly mistakes that can magnify and propagate through later deployment phases.

IT Carbon & Energy Allocation Calculator (Click Here To Download):

Impact of efficiency, load characteristics, and location on energy and carbon allocation for IT users.

The purpose of the tool is to help data center operators assign carbon and energy costs to IT users. Energy (cost) and carbon allocations are computed on a per-server basis, based on an “average” server.  The units of “average” server can then be apportioned to the IT users using a method of choice, depending on the business model.  This tool allows IT users to make smarter decisions regarding their total cost as they consider options such as virtualization and server retirement.
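
A minimal sketch of that allocation logic appears below; the facility energy, tariff, emission factor, server count, and user apportionment are all invented figures, and the per-server division is just one possible method of choice.

```python
# Illustrative per-server energy and carbon allocation: facility totals are
# divided into "average server" units, then apportioned to IT users.

TOTAL_FACILITY_KWH = 4_000_000   # assumed annual facility energy
TARIFF = 0.10                    # assumed $/kWh
KG_CO2_PER_KWH = 0.5             # assumed grid emission factor
SERVER_COUNT = 800               # assumed servers in the facility

kwh_per_server = TOTAL_FACILITY_KWH / SERVER_COUNT
cost_per_server = kwh_per_server * TARIFF
co2_per_server_kg = kwh_per_server * KG_CO2_PER_KWH

it_users = {"ERP": 250, "Email": 120, "Web": 430}   # servers per IT user
for user, servers in it_users.items():
    print(f"{user}: ${servers * cost_per_server:,.0f}/yr, "
          f"{servers * co2_per_server_kg / 1000:,.1f} t CO2/yr")
```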

Virtualization Energy Cost Calculator (Click Here To Download):

Impact of server virtualization and data center design choices on energy and space savings.

This tool illustrates potential IT, physical infrastructure, and energy savings resulting from the virtualization of servers. It allows the user to input data regarding data center capacity, load, number of servers, energy cost, and other data center elements.

Comprehends IT and physical infrastructure characteristics and calculates energy savings resulting from the virtualization of servers. This allows the user to test the impact of virtualization and various physical infrastructure improvements on their data center floor space and on their energy consumption.
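
As a hedged sketch of the savings calculation, the example below consolidates lightly loaded physical servers onto fewer virtualization hosts and grosses the IT energy up by a fixed infrastructure multiplier; all counts, wattages, and ratios are illustrative assumptions (in practice the infrastructure overhead can shift as load drops, which is one reason the tool also models infrastructure improvements).

```python
# Illustrative virtualization savings: many lightly loaded servers
# consolidated onto fewer, busier hosts. All figures are assumptions.

PHYSICAL_SERVERS = 400
WATTS_PER_SERVER = 350.0     # assumed draw of a lightly loaded server
CONSOLIDATION_RATIO = 10     # assumed workloads per virtualized host
HOST_WATTS = 500.0           # assumed draw of a busier host
PUE = 1.8                    # assumed fixed infrastructure multiplier
HOURS_PER_YEAR = 8760

before_kw = PHYSICAL_SERVERS * WATTS_PER_SERVER / 1000.0
after_kw = (PHYSICAL_SERVERS / CONSOLIDATION_RATIO) * HOST_WATTS / 1000.0

before_kwh = before_kw * PUE * HOURS_PER_YEAR
after_kwh = after_kw * PUE * HOURS_PER_YEAR

print(f"Before: {before_kwh:,.0f} kWh/yr  After: {after_kwh:,.0f} kWh/yr")
print(f"Energy reduction: {1 - after_kwh / before_kwh:.0%}")
```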

Data Center Capital Cost Calculator (Click Here To Download):

Impact of physical infrastructure design changes on capital costs.

Based on parameters including load, redundancy, density, and power/cooling characteristics, this tool calculates capital costs and can project the number of racks and the amount of floor space required.

Identifies key data center physical infrastructure parameters and calculates capital costs based on those parameters. This allows data center users to judge how changes to data center location, IT load, and cooling and power infrastructure can impact overall capital costs.

Data Center Power Sizing Calculator (Click Here To Download):

Impact of server and storage configurations on IT load capacity and required utility input power.

This tool defines basic characteristics of the IT load and calculates how much utility input power would be required to support that load, allowing users to experiment with “what if” scenarios by modifying the load characteristics of servers, mainframes, and storage. Total load is then calculated and the tool generates a corresponding utility power requirement.

Defines basic characteristics of the IT load and calculates how much utility input power would be required to support such a load. This provides users with a general idea of how much power in kilowatts they will need to run their data center.
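
The sizing arithmetic is essentially: sum the per-category IT loads, add growth headroom, and gross the result up by the infrastructure overhead. The loads, margin, and multiplier below are illustrative assumptions, not tool outputs.

```python
# Illustrative utility power sizing: IT load plus growth margin, grossed up
# by an assumed infrastructure (power and cooling) multiplier.

it_loads_kw = {        # assumed per-category IT loads
    "servers": 180.0,
    "storage": 60.0,
    "network": 25.0,
}
GROWTH_MARGIN = 0.20   # assumed headroom for future growth
PUE = 1.6              # assumed infrastructure multiplier

it_total = sum(it_loads_kw.values())
design_it = it_total * (1 + GROWTH_MARGIN)
utility_input = design_it * PUE

print(f"IT load today: {it_total:.0f} kW")
print(f"Design IT load: {design_it:.0f} kW")
print(f"Required utility input: {utility_input:.0f} kW")
```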

Data Center AC vs. DC Calculator (Click Here To Download):

Impact on data center efficiency of various AC and DC power distribution architectures.

Users can compare the energy efficiency of four different power distribution architectures, including Legacy AC (typically 208 V with older data center UPS, PDU, and IT power supplies), Best Practice AC (208 V with latest generation UPS, PDU, and IT power supplies), 415 V AC (same modern components as Best Practice AC, but eliminates the PDUs and assumes 230 V AC power supplies), and 380 V DC (uses a theoretical DC UPS, no PDUs, and IT power supplies with 1.5% efficiency benefit).

Compares four different AC and DC power distribution architectures and calculates their respective efficiencies. This allows the user to make an educated decision on the optimal architecture for their data center.
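
At its core the comparison multiplies component efficiencies along each distribution chain, so removing a stage (the PDU) or improving a component raises the end-to-end figure. The percentages below are illustrative placeholders arranged to mirror the four architectures described, not the tool’s measured data.

```python
# Illustrative end-to-end efficiency: multiply the (assumed) efficiency of
# each stage in the power path. Values are placeholders, not measurements.

architectures = {
    "Legacy AC (208 V)":        [("UPS", 0.88), ("PDU", 0.97), ("PSU", 0.85)],
    "Best Practice AC (208 V)": [("UPS", 0.95), ("PDU", 0.98), ("PSU", 0.92)],
    "415 V AC (no PDU)":        [("UPS", 0.95), ("PSU", 0.92)],
    "380 V DC (no PDU)":        [("UPS", 0.96), ("PSU", 0.935)],  # assumed ~1.5% PSU benefit
}

for name, chain in architectures.items():
    efficiency = 1.0
    for _stage, eff in chain:
        efficiency *= eff
    print(f"{name}: {efficiency:.1%} end-to-end")
```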

Data Center InRow™ Containment Selector (Click Here To Download):

Impact of preferences and constraints on the recommended containment approach.

This tool generates a prototype rack and row cooling configuration based on the user’s layout preferences and the physical constraints of the room.

Generates a prototype rack and row cooling configuration based on the user’s preferences and the physical constraints of the room. This provides the user with their optimal InRow cooling containment configuration.

InRow Ancillary IT Equipment Cooling Calculator (Click Here To Download):

Impact of IT, cooling, and room characteristics on the ability of row-based cooling to support ancillary IT loads.

As miscellaneous ancillary IT equipment, such as tape silos, storage equipment, and networking gear, is added outside of the rows, this calculator helps the data center designer determine whether the existing row-based cooling is sufficient or additional cooling must be provided.

Find out how Universal Networking Services brings a comprehensive solution from the utility pole to the server and helps you navigate the complex waters of projects of nearly any size and scope. Whether you are upgrading, retrofitting, or developing a new design-build, UNS and its partners deliver efficient, scalable, reliable, and manageable critical infrastructure solutions to your organization. Our holistic, common-sense approach lowers our clients’ Total Cost of Ownership (TCO) and maximizes the efficiencies offered by advancements in critical power and cooling infrastructure.

Please feel free to contact us to learn more.

Preventive Maintenance Strategy for Data Centers

Posted on April 1, 2012  |  No Comments

White Paper 124

This white paper highlights data center power and cooling systems preventive maintenance (PM) best practices. Hands-on PM methods (i.e., component replacement, recalibration) and non-invasive PM techniques (i.e., thermal scanning, software monitoring) are reviewed. The industry trend towards more holistic and less component-based PM is also discussed.

The term preventive maintenance (also known as preventative maintenance) implies the systematic inspection and detection of potential failures before they occur. PM is a broad term and involves varying approaches to problem avoidance and prevention depending upon the criticality of the data center. Condition-based maintenance, for example, is a type of PM that estimates and projects equipment condition over time, utilizing probability formulas to assess downtime risks.
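
As a toy example of the kind of probability formula involved, the sketch below assumes a constant failure rate (an exponential survival model), so the chance a component gets through an inspection interval without failing is exp(-t / MTTF); the MTTF values and interval are illustrative, not figures from the white paper.

```python
# Toy condition-based maintenance math: with a constant failure rate, the
# probability of surviving an interval t is exp(-t / MTTF). MTTF values
# below are illustrative assumptions.
import math

def survival_probability(interval_hours, mttf_hours):
    """P(no failure during the interval) under a constant failure rate."""
    return math.exp(-interval_hours / mttf_hours)

INSPECTION_INTERVAL_H = 4380   # assumed semi-annual PM visit

for component, mttf in (("UPS fan module", 50_000), ("battery string", 30_000)):
    p_fail = 1.0 - survival_probability(INSPECTION_INTERVAL_H, mttf)
    print(f"{component}: {p_fail:.1%} chance of failing between PM visits")
```

Shortening the interval, or replacing components whose condition data shows elevated risk, is how a PM program trades maintenance effort against downtime risk.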

PM should not be confused with unplanned maintenance, which is a response to an unanticipated problem or emergency. Most of the time, PM includes the replacement of parts, the thermal scanning of breaker panels, component / system adjustments, cleaning of air or water filters, lubrication, or the updating of physical infrastructure firmware.

At the basic level, PM can be deployed as a strategy to improve the availability performance of a particular data center component. At a more advanced level, PM can be leveraged as the primary approach to ensuring the availability of the entire data center power train (generators, transfer switches, transformers, breakers and switches, PDUs, UPSs) and cooling train (CRACs, CRAHs, humidifiers, condensers, chillers).

A data center power and cooling systems preventive maintenance (PM) strategy ensures that procedures for calendar-based scheduled maintenance inspections are established and, if appropriate, that condition-based maintenance practices are considered. The PM strategy should provide protection against downtime risk and should avoid the problem of postponed or forgotten inspection and maintenance. The maintenance plan must also assure that fully trained and qualified maintenance experts observe the physical infrastructure equipment (i.e., look for changes in equipment appearance and performance and also listen for changes in the sounds produced by the equipment) and perform the necessary work.

“Preventive Maintenance Strategy for Data Centers” Full White Paper 124 (Click Here)

Executive Summary:

In the broadening data center cost-saving and energy efficiency discussion, data center physical infrastructure preventive maintenance (PM) is sometimes neglected as an important tool for controlling TCO and downtime. PM is performed specifically to prevent faults from occurring. IT and facilities managers can improve systems uptime through a better understanding of PM best practices. This white paper describes the types of PM services that can help safeguard the uptime of data centers and IT equipment rooms. Various PM methodologies and approaches are discussed. Recommended practices are suggested.

Contents:

  • Introduction
  • PM Outcomes
  • Evolution of PM
  • Evidence of PM progress
  • Why Physical Infrastructure Components Fail
  • Recommended Practices
  • PM Options

Conclusion:

PM is a key lifeline for a fully functioning data center. Maintenance contracts should include a clause for PM coverage so that the data center owner can rest assured that comprehensive support is available when required. The current PM process must expand to incorporate a “holistic” approach. The value add that PM services provide to common components today (such as a UPS) should be expanded to the entire data center power train (generators, transfer switches, transformers, breakers and switches, PDUs, UPSs) and cooling train (CRACs, CRAHs, humidifiers, condensers, chillers).

As of today, the PM provider in the strongest position to provide such a level of support is the global manufacturer of data center physical infrastructure. An integrated approach to PM allows the data center owner to hold one partner accountable for scheduling, execution, documentation, risk management, and follow up. This simplifies the process, cuts costs, and enhances overall systems availability levels.

White Paper 124 Written By:

Thierry Bayle

Find out how Universal Networking Services can help your organization incorporate a preventive maintenance program that will keep your APC by Schneider Electric systems running at maximum efficiency.  Our service policy is to prevent problems before they occur. Advanced power and cooling systems contain components and parts that wear out over time. Proper care and regular maintenance will help you avoid unnecessary downtime, saving you time and money. To be sure your system is receiving the care it needs, you need specially trained staff who are familiar with the inner workings of APC by Schneider Electric products. Preventive Maintenance services give your system the level of service it needs and you the peace of mind you deserve.

Please feel free to contact us to learn more.

What Is A Business-wise, Future-driven™ Data Center?

Posted on March 11, 2012  |  No Comments

UNS is all about critical facilities. Our vision is to be a “Single Solution Provider” in the data center market, specifically targeting small to medium enterprises.

UNS brings significant value to the customer by viewing the IT and the facility as a single entity. This approach allows us to bring highly reliable, cost-effective solutions to your organization. The UNS approach reduces acquisition costs, enables faster deployment, and brings the best of each component of the data center to the customer at competitive prices. Our partnership with Schneider Electric, the global specialist in energy management, brings together the world’s leading manufacturers of critical components to ensure that your organization gets a reliable, cost-effective, right-sized solution.  A solution that is Business-wise, Future-driven™.

Access Schneider Electric’s 10 Ways for Your Data Center to be Business-wise, Future-driven™ to see how Schneider Electric data center physical infrastructure enables companies to adapt data centers at the speed of business to meet ever-changing business needs—now and in the future.

An excellent example of a Business-wise, Future-driven™ data center is Mercy Health.  Mercy Health is a cutting-edge health care organization that has partnered with Schneider Electric for all their data center needs over the past six years. Schneider Electric’s Business-wise, Future-driven™ data center has allowed Mercy to shift their business model from a hospital supported by a data center to a data center system supporting hospitals.  Please enjoy “Mercy Health: Beyond the Digital Hospital” for more information.

To learn more about how Universal Networking Services can help you achieve a Business-wise, Future-driven™ data center please visit www.criticalpowerandcooling.com or contact us.


Don’s Corner: “Is Perimeter Cooling Dead?”

Posted on March 6, 2012  |  No Comments

Don Melchert, Critical Facility Specialist

Traditional data center “room” cooling is unable to accommodate the latest generation of high-density and variable-density IT equipment, resulting in cooling systems that are inefficient, unpredictable, and low in power density.  To address these problems, row-oriented and rack-oriented cooling architectures have been developed.  Our presentation, “Is Perimeter Cooling Dead?”, examines and contrasts the three basic cooling approaches: room, row, and rack architectures. Each approach has an appropriate application, but row-oriented cooling is emerging as the solution of choice for most next generation data centers, which demand the flexibility, predictability, scalability, reduced electrical power consumption, reduced TCO, and optimum availability that row- and rack-oriented cooling architectures can provide. Additionally, the factors that gave rise to the establishment and use of the raised floor in the data center environment are presented; for many applications, the use of the raised floor is no longer justified or desirable. To learn more about this topic, “Is Perimeter Cooling Dead?” is available for your organization via our unique “Breakfast and Learn” Educational Series for Data Center Professionals.  This series provides the education that will lay the critical foundation for your organization to run an efficient data center.  Please feel free to contact me for further information at d.melchert@apcdistributors.com or 918-760-8236.

The Advantages of Row and Rack-Oriented Cooling Architectures for Data Centers (White Paper #30) Overview:

Executive Summary:

Latest-generation high-density and variable-density IT equipment creates conditions that traditional data center room cooling was never intended to address, resulting in cooling systems that are inefficient, unpredictable, and low in power density. Row-oriented and rack-oriented cooling architectures have been developed to address these problems. This paper contrasts room, row, and rack architectures and shows why row-oriented cooling will emerge as the preferred solution for most next generation data centers.

Highlights:

  • Discussion of the three basic cooling approaches: room, row, and rack-based architectures
  • Benefit comparison of the cooling architectures: challenges in agility, availability, lifecycle costs, serviceability, and manageability
  • Special issues: capacity utilization, humidification, electrical efficiency, water near IT equipment, location, and redundancy
  • Elements of the raised floor and problems associated with using a raised floor
  • Hurdles to eliminating the raised floor
  • Designing without a raised floor

Conclusion:

The conventional legacy approach to data center cooling using room-oriented architecture has technical and practical limitations in next generation data centers. The need of next generation data centers to adapt to changing requirements, to reliably support high and variable power density, and to reduce electrical power consumption and other operating costs has directly led to the development of row- and rack-oriented cooling architectures. These two architectures are more successful at addressing these needs, particularly at operating densities of 3 kW per rack or greater. The legacy room-oriented approach has served the industry well and remains an effective and practical alternative for lower density installations and applications where IT technology changes are minimal.

Row- and rack-oriented cooling architectures provide the flexibility, predictability, scalability, reduced electrical power consumption, reduced TCO, and optimum availability that next generation data centers require. Users should expect that many new product offerings from suppliers will utilize these approaches.

It is expected that many data centers will utilize a mixture of the three cooling architectures. Rack-oriented cooling will find application in situations where extreme densities, high granularity of deployment, or unstructured layout are the key drivers. Room-oriented cooling will remain an effective approach for low density applications and applications where change is infrequent. For most users with newer high density server technologies, row-oriented cooling will provide the best balance of high predictability, high power density, and adaptability, at the best overall TCO.

References:

Avoidable Mistakes that Compromise Cooling Performance in Data Centers and Network Rooms (White Paper #49) Overview:

Executive Summary:

Avoidable mistakes that are routinely made when installing cooling systems and racks in data centers or network rooms compromise availability and increase costs. These unintentional flaws create hot-spots, decrease fault tolerance, decrease efficiency, and reduce cooling capacity. Although facilities operators are often held accountable for cooling problems, many problems are actually caused by improper deployment of IT equipment outside of their control. This paper examines these typical mistakes, explains their principles, quantifies their impacts, and describes simple remedies.

Conclusion:

The air distribution system is a part of the data center that is not well understood, and facility operators and IT personnel often take actions involving airflow that have unintentional and adverse consequences to both availability and cost.

Flawed airflow implementation has not been a serious problem in the past, due to low power density in the data center. However, recent increases in power density are beginning to test the capacity of cooling systems and give rise to hot-spots and unexpected limitations of cooling capacity.

Decisions such as facing all racks in the same direction are often made for cosmetic reasons, to project an image; but as users and customers become more educated, they will conclude that people who do not implement airflow correctly are inexperienced, which is the opposite of the original intent.

Adopting a number of simple policies, and providing a simple justification for them, can achieve alignment between IT and Facilities staff, resulting in maximum availability and optimized TCO.
