Archive for the Don’s Corner Category

Don’s Corner: In-row or Perimeter Cooling, is there really a difference, and if there is, which is better?

Posted on October 4, 2012

Don Melchert, Critical Facility Specialist

I love two-part questions, don't you? It shows me that the person asking probably has some experience with the subject and is asking a loaded question. Usually, the inquiry stems from an experience that didn't go all too well, or it went much better than planned and the question is asked more for validation than argument. I'd much prefer the latter, of course, but in data centers, that's usually not the case, is it?

With any subject that's controversial, you'll never win everyone over. In the argument over in-row or perimeter cooling, the subject is even more volatile, especially if the inquisitor's job is to maintain an antiquated (read: dinosaur) perimeter cooling system. You just know they're thinking to themselves, "If that system goes away…now what?" Ever see the latest version of Willy Wonka and the Chocolate Factory? When the machines took over production, Charlie's dad was laid off, but in the end, he came back to work as a repair technician on the machine that replaced him. What made the factory decide to replace a human with a machine? Was it production, quality, or floor space? If you're a co-lo provider, floor space is high-dollar real estate! So, for this article, let's use floor space as our filter, and to keep the playing field even, we'll assume a chilled water system.

After some research, and I'm putting it lightly, I've found the Liebert DS to be the most commonly deployed air conditioning system for data center use. Really, I've seen them all over the globe! Why shouldn't they be the most common? They're easy to train on, so you can get training almost anywhere; they have a long service history, so MTTF rates are pretty accurate; and they're supported by a ton of people, so MTTR rates are well documented, too. Of course, they have to be shut down to make most repairs, but that topic's for another discussion. Even though it's the most common, the Liebert DS, and perimeter cooling in general, is losing ground to more efficient, close-coupled, in-row and overhead solutions.

Floor space is the name of the game, so what kind of real estate does the most commonly found perimeter unit gobble up? Simple answer: 21 sq ft. I say "simple answer" because I made a comment above about having to turn them off for repairs, which is why most perimeter-cooling believers I've met keep more than one Liebert in the room. Even without a calculator, that's 42 sq ft, folks, and that's only at N+1 redundancy. Within that 21 sq ft area, a Liebert DS can cool up to 42kW of critical load. Not too shabby, but if you have any kind of redundancy requirement, you're trying to get the most out of that space. For a co-location facility, you're now looking at how many customers you can't fit into the room because you're forced to give up 21 sq ft of real estate. That kind of talk doesn't fly too well in a boardroom. So what's the alternative? Let's look at one: APC's in-row, close-coupled cooling unit, or IRRC.

The In-Row RC takes up only about 3.5 sq ft, but it makes up for its diminutive nature by removing 18kW of heat load. With reference to floor space (…and this is where, somewhere out there, my high school geometry teacher is laughing her evil laugh), you'd need a bunch of IRRCs to equal the real estate consumed by a single Liebert DS. The answer is 6, Ms. Davidson (flashback to 9th grade). With 6 IRRCs, one could cool up to 108kW of critical load! Before you ask, no, you won't need to make them redundant, as the most common failure items, the fans and power supplies, are all hot-swappable. Here's something else to chew on: the IRRCs don't have to be right next to each other to get the job done. At about half a rack wide, they can be spread around the room as needed, but you'll need a Sawzall and a darn good explanation afterward if you try to do the same thing with a legacy-style perimeter unit! Please, if you do this to a perimeter unit, invite me out and I'll buy lunch.

Knowing that I'd have to prove my math, I worked the numbers backward and concluded that with 108kW of critical load, one would need roughly 2.5 Liebert DS units to remove that much heat, or step up to the next size unit.
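For anyone who wants to check my work the easy way, here's a quick back-of-the-envelope sketch in Python using only the figures quoted above (the nominal values from this article, not vendor data sheets):

```python
# Back-of-the-envelope check of the floor-space math above.
liebert_ds = {"footprint_sqft": 21.0, "cooling_kw": 42.0}   # perimeter unit
irrc = {"footprint_sqft": 3.5, "cooling_kw": 18.0}          # in-row unit

# How many in-row units fit in the floor space of one perimeter unit?
units_in_same_space = liebert_ds["footprint_sqft"] / irrc["footprint_sqft"]  # 6.0
kw_in_same_space = units_in_same_space * irrc["cooling_kw"]                  # 108 kW

# Working the numbers backward: perimeter units needed for that same load.
liebert_equivalent = kw_in_same_space / liebert_ds["cooling_kw"]             # ~2.6

print(f"{units_in_same_space:.0f} in-row units fit in one 21 sq ft footprint")
print(f"cooling available in that footprint: {kw_in_same_space:.0f} kW")
print(f"perimeter units needed for the same load: {liebert_equivalent:.2f}")
```

It ignores aisle clearances, piping and service access, of course, but it shows why the floor-space argument carries so much weight.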

That's where it really strikes home for me, personally. After performing a CFEP, I'm often challenged with explaining why it's important to let go of the ancient ways and consider converting from a perimeter-cooled space to a modern, close-coupled solution. If floor space is the driving factor, then the math, for me, drives the point home fairly well. If a co-lo data center can increase the critical load it supports in the same floor space from 42kW to 108kW, simply by becoming more efficient in its cooling architecture, maybe Charlie Bucket's dad might not be laid off after all. In today's job market, that means a lot.

Don Melchert

Don’s Corner: Who’s Responsible For The UPS, the IT Staff or Facilities Maintenance?

Posted on September 5, 2012

Don Melchert, Critical Facility Specialist

Who’s responsible for the UPS, the IT Staff or Facilities Maintenance?

I have been asked this question many times, for all sorts of infrastructure equipment. From the building windows to the CRAC, I've heard this same question asked both before and after an expensive catastrophe has brought everyone into the boardroom. To properly answer the question, we have to approach it from two fronts: Old School and New School. Without first defining those two platforms, there's no way to stop the finger pointing if it all goes wrong, so let's start there.

The “Old School” Platform

In times long ago (now if that doesn’t sound like the beginning of a new Tolkien novel, I don’t know what does) anything and everything that did not physically house the critical network was placed under the governorship of Facilities Maintenance.  Quite literally, the only exclusions were the racks, servers, telecom equipment and associated cords and peripherals.  If one wanted to relocate a server to another side of the room, powered from a different breaker, the first mission was to request approval from a Facility Manager.  Wait a second! There still are data centers out there that live under the iron rule of Facilities Maintenance, but that topic is best kept in the file labeled “Future Don’s Corner Topics”, at least for now.

If I were to provide an answer to the "Who's responsible for the UPS?" question based on the Old School platform, the answer is: Facilities Maintenance, with operational approval from the IT Director. The reason is that, originally, IT personnel focused on the devices and the network itself, and so did the educational institutions that certified the personnel of yesteryear. This wasn't necessarily a bad thing. Consider this: imagine asking a Facilities Engineer in 1987, who worked in the sun, rain, wind, and snow maintaining the building's HVAC system, to log into the network. Just speaking that way to a member of the facilities staff would have caused them to panic, possibly invoking memories of scenes from the movie War Games! However, this works both ways. Can you imagine asking a member of the IT staff to check the run hours on the CRAC unit? They probably would have called security on you, knowing you must have lost your mind. Few IT personnel would have considered dirtying their hands, and no self-respecting member of the facilities staff had the patience for "C:\>". In my answer, I said "with operational approval from the IT Director." If you're not immediately in agreement, give UNS a call and we'd be happy to help bring you safely back from cryo-stasis and into the new world of today's IT management!

The “New School” Platform

IT has fought hard for the past 30+ years to improve every aspect of network operations. This is fact, and I have seen it from both sides of the fence: first as a Facility Manager, later as part of the IT consulting community. Unknowingly, though, we were all working toward the same goal of efficient operations: doing more with less. The UPS, the in-row air conditioner, the servers, even the programs we were running…everything associated with the critical IT network became self-serviceable or hot-swappable in some way. What used to require a shutdown of the network during a midnight-to-7am all-nighter to swap out a bad component now requires little or no notification, no shutdown at all, and it all takes place in about five minutes, including unwrapping it from the box!

However, all of this new technology means two things. First, it's easier for us to access the critical network, so we just expect it to be there waiting for us whenever we open our iPhones. Second, all of our devices expect the exact same thing, only, just like The Matrix, our numbers pale in comparison to theirs, and so do our priority levels on the network. We, the custodians of our NCPI and its assets, must maintain this uptime requirement, and now, more than ever before, the IT community is prepared, both technically and internally, to do so.

If I were to provide an answer to the “Who’s responsible for the UPS?” question, based on the New School platform, the answer is: IT, with operational approval from the Director of Facilities Maintenance.  Together, with sound communication, both are providing a service to the company’s critical network and should be able to count on each other to continually provide primary and uninterrupted back-up power to the company’s Network Critical Physical Infrastructure.  Again, if this does not make immediate sense, please, don’t hesitate to call UNS for an explanation of why this shared responsibility is so important.  The job you save may be your own!

Don Melchert

Don’s Corner: “Ferrari vs. Datapod”

Posted on July 11, 2012

Don Melchert, Critical Facility Specialist

While looking up alternate spark plugs for my old Ducati, I noticed that the ones I use were compatible with Ferrari. After my initial thought of, “Wow, that’s pretty cool!” my very next thought was, “I wonder what kind of data center they’re using to create such awesome, jaw-dropping, daydream inspiring machines?”

I’m back, sorry, I had to take a moment there for a quick run in my imaginary yellow Diablo and it’s so hard to park it and turn it off without revving the engine a few times.

Seriously though, everyone's had that moment when you realize that, regardless of the item, it's reached the end of its useful life and either needs a major overhaul or a replacement. For Ferrari, the decision to scrap and rebuild happens many times a year, as they build a completely new Formula One car from concept to racetrack in days. Back to the point: here's what I was able to find out about their "new-ish" data center, and it's the disappointment I now have that's fought its way out into becoming the topic of Don's Corner this week.

“Ferrari is reserved about divulging technical details about the center, but it is of medium size — approximately 2,500 square feet in area, and loaded with about 60 racks of IBM, Sun Microsystems and Hewlett-Packard servers and tiered storage arrays. The power conversion (AC to DC) equipment consists of APC Symmetra PX 250-500 power supplies, Modular 3P PDUs, and InRow coolers. Both air and liquid cooling systems are utilized; the room itself was kept at about 23 degrees C. (about 72 degrees F.)” –eWeek.com, March 2009.

All that’s great, but I’m a CFS, I needed to see it! So, like any other intelligent human, I found myself scouring YouTube, where I found two videos from our good friends at APC. If you’re like me, you’ll want to see them too, so the videos are attached here. Go ahead, watch them back to back, at worst you’ll lose a few minutes of your time, but at a minimum, you’ll be able to have another conversation starter, right? Come back and I’ll tell you why I am so disappointed in the F1 Champ’s decision.

About halfway through the first video, somewhere between the scene where there are three different trades trying to operate in the same room (before the raised floor was completed) and the scene where Mr. Calabrese is speaking in front of their new equipment, a single question jumped to mind: "Why, on earth, didn't they simply use a Datapod?" I'm sure that if we asked Ferrari, they'd have many reasons why they didn't, but I've got two reasons why they should have, and from what I've seen, it's Time and Money that run the world.

Design-
Once you’ve mustered the courage to innovate, which for many companies is the hardest part, you’re now faced with the decision to retrofit an existing space, build a completely new facility or simply use a Datapod. If you’ve been through it before, a retrofit or new build can take a year (or longer, in America, it’s more like 18 months) to accomplish, with most of that time spent in the design and approvals phase. With a Datapod, everything’s modular in design, the key to its efficient deployment. Once Ferrari determined their power and cooling requirements, Datapod could have provided them a solution in days rather than weeks.

Ok, that covers the Time argument, but what about the all-important Euro? Other than minimizing the costs associated with paying inspectors and engineering firms to design a new building layout, can you imagine the number of man-hours that could have been saved using a pre-engineered Datapod solution?

Deployment-
Ferrari decided to put their data center in a historical building on campus, which makes sense, as it probably wasn't being utilized. However, doing so required a massive retrofit before the aged building was suitable to house a high-density data center. Unless Ferrari already had tradesmen on staff, imagine how long it took to solicit, receive and evaluate competitive bids for the interior, let alone the new electrical service. With that on your mind, consider that a Datapod can be dropped on a relatively unimproved gravel lot. It's that simple. Once the Utility Pod has been placed, the trucks show up, the containers roll off and are locked into place. Put another way, the Datapod solution reduces or outright eliminates invoices from all the service-industry experts traditionally required when finding a home for your new network. That's a lot of funds that could be used in R&D, ultimately to win races this season, rather than waiting on a 20-year ROI.

Other than final testing, the power and cooling modules have already been pre-commissioned and are ready for use. What’s that, you don’t trust pre-commissioned items? That can’t be entirely true, as we have no problem plugging in a new refrigerator that has been assembled and pre-commissioned by the manufacturer. If we are willing to accept such things for our home, why not in the data center world? Is it because of the 24/7/365 requirements of today’s critical networks? Ok, I can understand that, but really, if this is the case, how many of us own a refrigerator in our kitchen that’s deployed N+1? Ask any one of the ravenous teenagers in my home and they’ll tell you, food is a serious requirement and the loss of a refrigerator would cause an immediate service disruption of outdoor chores!

Don't get me wrong! Kudos should be given to Ferrari for, as I heard in the video, having "the courage to innovate." The IT managers at Ferrari wanted to do bigger and better things and must have realized that you simply cannot achieve new results by repeating the same mistakes, even if you do it in a new building. However, they could have saved Time and Money by going the containerized route rather than brick and mortar. As with any other topic posted on Don's Corner, I challenge you to look at the information objectively and be the judge.

Just don't let racing a Ferrari scare you; there are faster things out there. This time, it was a Datapod.

Learn more by clicking on this link to the "What Is Datapod?" Series, Part I.

Don Melchert

Don’s Corner: More Training = More Uptime

Posted on June 22, 2012

Don Melchert, Critical Facility Specialist

Late in the night at the NOC, an alarm sounds…

John:  Hey, there's an alarm on the UPS.

Larry:  What is it?  Did you log in?

John:  What’s the password?

Larry:  I dunno, try “GeauxT1gers”

John:   Nope, but now it has 3 alarms.

Larry:  Oh wait…change the S to a Z and try again.  What alarms are showing?

John:  That’s it…I’m in.  I don’t know what all this means, but there’s a lot of red on the screen.  Oh, here it is, ALARMS.   It says…”Input out of tolerance…UPS on Battery…Low Battery Warning…”  We might have lost power.

Larry:  The lights are still on, but I don't know, did we lose power?

John:  Guess so…wait, now it says…”Battery Weak.”  What do we do?

Larry:  Go to Bypass, maybe, you think?  Do you know how to do that?

John:  Heck no!  Let’s call that UPS guy.  Where’s the book for it?

Larry:  I think it’s in Frank’s cube? If the power is out, won’t the generator kick on?

John:  Maybe it takes a few minutes to kick over?  Never mind, we just lost the network.

In working with data centers around the country, I've come to realize scenarios like this happen more often than we care to discuss. The thing is, after the damage assessment is performed and the data analyzed, another odd similarity shows itself: as with most accidents, nearly every one of them was preventable in one way or another. While reading the scenario above, I'm sure many solutions were jumping to mind, but the one that would have made the biggest difference, in my humble opinion, is Site Training. Let's break it down and see how Site Training would have saved the day, possibly even spared John and Larry their inevitable butt chewing, or worse.

Speed– Regular Site Training would have given our confused NOC members the familiarity to access the alarming UPS quickly, putting more information in front of them to begin the process of fault analysis. Each member of your IT support team (that's anyone with access to your data center) should be trained in how to respond to an alarm on any piece of critical equipment, not just the servers. Consider this: if your UPS has a battery runtime of less than 10 minutes, a team member must be able to respond and correct the problem in less time than it takes most people to take a shower. How long does it take your newest team member to diagnose a fault and know what to do next?

Understanding– Properly coordinated Site Training brings to light the idiosyncrasies of your particular data center and how each piece of your NCPI is dependent upon the others. In our scenario, once the UPS screen was accessed, their training would have allowed them to realize they had lost a phase of their utility feed and were rapidly draining their weak batteries into oblivion. Even if they had forgotten what in the universe an electrical phase is, John or Larry would have at least realized they were missing one. Had they attended their quarterly Site Training, the late-night NOC crew might have saved the day by manually starting their generator. When was the last time your IT and Facilities staff practiced starting the generator and transferring the critical load on the ATS?

Confidence– Site Specific Training gives people, the backbone of any critical operation, the ability to push fear and confusion aside, allowing them to see the way out of a bad situation. John and Larry, having been provided regular Hands-on Site Training, would have been confident enough in their ability to operate the critical equipment that they would not have hesitated to get up out of their chairs and walk up to the alarming UPS to investigate things further.

As an instructor for Data Center University and today with the UNS Data Center Institute, I've learned that the majority of today's most intelligent professionals freeze when faced with the fear of failing in front of their peers. I've practically had to shove students toward the training lab, but open the door for lunch and off they run! Old habits from grade school die hard, don't they? In many cases, both IT and Facilities staff take a hands-off approach when it comes to touching their NCPI assets, simply because they are afraid of causing a failure themselves. Think about it: in the scenario above, what was the outcome? Exactly! Confusion and fear resulted in a total failure of their critical network. Instead of having to explain to the CEO why the company couldn't take orders for 5 hours, John and Larry's IT Director could have been praising the speed, understanding and confidence of the IT team. Only Site Training, and the hands-on familiarity that comes with it, can give you that. Their IT Director may have had a lot more fun in the morning meeting, and hey, while the rest of the execs are still smiling and clapping, now would be a great time to ask for that new in-row cooling unit…and maybe even some new, comfy chairs for the NOC?

If your organization hasn’t been afforded the opportunity to conduct Site Training in the past few months, or if you’re unsure where to even start when it comes to determining which NCPI assets to train on, never fear, UNS is here to help!

To learn more about Site Training please visit Universal Networking Services Institute (Click Here).

Don’s Corner: “Why Aren’t My Batteries Lasting As Long As They Used To?”

Posted on June 14, 2012

Don Melchert, Critical Facility Specialist

Why aren’t my batteries lasting as long as they used to?

Scenario: Your trusty UPS has been providing faithful service for the past 8 years.  Maybe it’s had a couple of minor repairs, like fan replacements and a capacitor upgrade, but other than the full battery refresh during the 5th year, “Old Faithful” has been holding up its end of the bargain.  Lately though, it seems as though the batteries just aren’t what they used to be.  It couldn’t be the data center’s fault, could it?  Nothing’s really changed in there…well, maybe the legs were balanced after the load increased from 50% to 65%, but the room is still stable at 73F and the utility power has actually improved over the years.  However, now the UPS is displaying a “Battery Weak” alarm only after 3 years of service.  How could this be, and what is there to be done about it?  To answer those questions requires a basic understanding of the underlying issue: battery quality.

The first question's easy to tackle and has a clear answer, so let's start there. Years ago, batteries weighed more, simply because they were made from quality ingredients; more specifically, there was more lead. With changes in manufacturers and factory locations, and, most importantly, less lead in each cell, battery quality has taken a turn for the worse. Don't believe me? I have a simple experiment for you: grab a small 7.2Ah battery made before 2006 and an equivalent battery made in 2011. Weigh them separately and see what you find.

I think it’s safe to say that these days, Superman would be having a much easier time seeing what’s in Lex Luthor’s data center!

It's as simple as this: less lead is being used in the construction of new batteries, so a battery that lasts more than 3 years is becoming few and far between. Oh, the charge rate is the same (the battery reaches its set charge voltage), and even the discharge rate is the same (the time before the low limit is reached), but it's the number of discharge cycles that has changed. A typical deep-cycle battery delivers 100–200 cycles before it starts the gradual decline toward its ultimate failure. Once it falls below 75% capacity, that same battery begins to decline exponentially faster. Unfortunately for the critical data community, this means we either accept that we'll be changing our batteries sooner, or accept that in order to get the same quality we've grown accustomed to, we'll have to pay a premium for it.
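To picture what that decline looks like, here's a purely illustrative sketch in Python. The 100–200 cycle window and the 75% knee are the rough figures mentioned above; the shape of the curve is an assumption for illustration, not measured data for any particular battery.

```python
def remaining_capacity(cycles, rated_cycles=150, knee=0.75, fade_per_cycle=0.97):
    """Toy model of battery capacity fade (illustrative only).

    Capacity drifts down gradually over the rated cycle window, then
    falls off exponentially once it drops below the 75% knee. Real
    batteries need periodic load testing, not a formula.
    """
    capacity = 1.0
    gradual_loss = (1.0 - knee) / rated_cycles  # loss per cycle before the knee
    for _ in range(cycles):
        if capacity > knee:
            capacity -= gradual_loss
        else:
            capacity *= fade_per_cycle  # exponential decline past the knee
    return capacity

for c in (50, 100, 150, 200, 250):
    print(f"after {c:3d} discharge cycles: ~{remaining_capacity(c):.0%} of rated capacity")
```

The exact numbers aren't the point; the knee is. Once a battery string crosses it, the time left before a "Battery Weak" alarm is a lot shorter than the time it took to get there.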

Hey, I never said this would be a feel good post!  Never fear though, doom and gloom isn’t my style, so, let’s see what we can do to turn it around…

Since the mission of a data center isn't likely to become "…provide data to end-users when it's convenient…", we have to decide if we want to put our time and money into maintaining the batteries we have, or just throw more money at the problem. From my standpoint, it's always better to take care of what you have now, rather than go to the Bean Counters for more money when a total battery failure is looming. I can hear you now: "What does that mean?" It's easy…get a battery PM performed now! If a battery is found to be below acceptable limits (75% capacity), don't just replace it with the lowest bidder. Take some time to work with a reputable source to determine which battery is best for your particular situation. Even if the technician finds nothing wrong at all, be sure to schedule another inspection before he or she leaves your facility. Will you pay a little more? Possibly, yes, but it's called Preventative Maintenance for a reason, and prevention is always cheaper than what comes after making that call to your CEO that starts off something like, "Sir, our network is down because…" How often battery PMs should be performed depends on how old the batteries are and how well they're treated. In doing so, you'll be able to weed out the batteries that aren't playing well with others before a total battery failure occurs.

And let’s be honest, when does that ever happen during normal business hours?

Universal Networking Services Institute for Data Center Professionals

Posted on April 6, 2012

Data center education that will lay the critical foundation to run an efficient data center.

Data center efficiency should be a topic of significant importance to all data center operators. At Universal Networking Services (UNS), our philosophy is simple: knowledge is key to data center efficiency!  UNS Institute for Data Center Professionals offers the data center community priceless educational opportunities through numerous gateways:

Educational Gateways:

“Educational Series For Data Center Professionals”

“Breakfast and Learn Series For Data Center Professionals”

Universal Networking Services Blog

Data Center Critical Power and Cooling LinkedIn Group

Universal Networking Services Twitter

About UNS “Educational Series For Data Center Professionals”:

The "Educational Series For Data Center Professionals" is a customized training series conducted at YOUR facility that provides the education to lay the critical foundation for your organization to run an efficient data center. UNS works diligently with you and your staff to customize a curriculum specific to your facility's requirements, one that educates and showcases the latest technologies and best practices for data center power, cooling, monitoring, security and management. Choose topics from the current course curriculum (see below) or customize your training.

At UNS, we believe education is key to controlling your data center costs. With that in mind, we couple your customized training session with our signature Critical Facility Energy Profile (CFEP) assessment. Our CFEP service provides an on-site, non-invasive, risk-free analysis of your current Network Critical Physical Infrastructure (NCPI) to determine the baseline efficiency of your data center. On the first day, we perform a site/data center assessment (CFEP), during which items that affect efficiency and reliability are compiled. On the second day, we return to the facility to perform a customized education/training session on today's best strategies for power, cooling, monitoring, security and management, using examples from YOUR facility. The education provided will help drive your organization's total cost of ownership (TCO) as low as possible.

UNS is committed to understanding our clients' challenges and to providing the tools needed to operate their businesses with reliability and maximum efficiency. Contact us today to learn more about this unique educational opportunity.

Current “Educational Series For Data Center Professionals” Course Curriculum:

"POWER FUNDAMENTALS"-If you've ever asked yourself, "What's the difference between kVA and kW? AC and DC, isn't that a band? Single-phase or three-phase?"…then this is the course for you! In this course, students learn the fundamentals of AC and DC power, from generation to application.

“POWER DISTRIBUTION”-“With great power comes great responsibility.”  One can have all the power in the world but efficiently distributing that power to your critical equipment is the trick that makes the difference between business as usual or lights out!  In this course, students learn the fundamentals and application of efficient power delivered in terms of both off-site and on-site power generation.

“EFFICIENT POWER MANAGEMENT-OPTIMIZING TCO”- “Generator? Check. UPS? Check. Doors secured? Check. Red lights? Check…uh-oh! What do we do now? Who will we call?  Is this covered under a service agreement?” Managing your assets is one thing but doing so in a manner that lowers your TCO and allows you to sleep better at night takes some strategy.  Increasing availability and reliability while continuously decreasing costs means you’ll have to know when to break from the crowd and try less conventional methods.  This course is designed for the professional that has a solid understanding of both “Power Fundamentals” and “Power Distribution” and is ready to develop a strategy to manage their time, manpower and assets with maximum efficiency.

"CRITICAL COOLING 101-FUNDAMENTALS OF AIR CONDITIONING"-This course explains the fundamentals of air conditioning systems and how they can be leveraged to your advantage in a data center. Topics include: The Properties of Heat Transfer, The Ideal Gas Law, The Refrigeration Cycle, Condensation Control, and Comfort vs. Precision Cooling. With a solid understanding of air conditioning principles, this course enables students to make knowledgeable decisions about which air conditioning solutions are right for their data center: solutions based on fact, rather than sales and marketing strategy.

"EFFICIENT COOLING-OPTIMIZING COOLING STRATEGIES AND ARCHITECTURE"-Today's servers generate significantly more heat, in more concentrated and confined spaces, than they did 20 years ago. So why are data centers still using the same cooling strategies of yesteryear? This course takes a hard look at data center cooling architectures from many angles: efficiency, reliability, TCO, feasibility and availability, enabling students to make the best choices in cooling their critical equipment. Understanding the difference between comfort cooling and critical cooling, understanding the different types of cooling architectures and their deployment, and developing a method of choosing one cooling strategy over another are all topics discussed in this course offering.

About UNS “Breakfast and Learn Series for Data Center Professionals”:

The "Breakfast and Learn Educational Series for Data Center Professionals" offers the data center community multiple opportunities to learn from their peers, share experiences, and expand industry knowledge. Our educational sessions are conducted quarterly throughout the United States and are advertised via our Data Center Critical Power and Cooling LinkedIn Group and on Twitter. Our "Breakfast and Learn Series" can also be customized and conducted at your own facility. For more information on our "Breakfast and Learn Series", please contact us.

Current “Breakfast and Learn” Discussions:

"RIGHT-SIZING VERSUS OVER-SIZING: EFFICIENCY IN THE DATA CENTER"- Forecasting and measuring the total cost of ownership (TCO) for Data Center Physical Infrastructure (DCPI) is essential for return-on-investment (ROI) analysis. Oversizing is the main contributor to excess TCO. Oversizing creates inefficiencies in the data center, including excess capital cost, operating cost, and specifically energy cost. The average data center operator can achieve the highest return on investment in relation to DCPI through right-sizing. Right-sizing the DCPI system to the load is the key to optimizing TCO and has the most impact on DCPI electrical consumption. Right-sizing can potentially eliminate up to 50% of the electrical bill in real-world scenarios. For example, the potential electricity cost savings for a typical 1 MW data center has been shown to be $2,000,000 to $4,000,000 over a typical 10-year life of the facility. Data center efficiency is key to controlling your energy costs and should be a topic of significant importance to all data center operators. This discussion is available through our "Breakfast and Learn Educational Series For Data Center Professionals".

"IS PERIMETER COOLING DEAD?"-Traditional data center "room" cooling is unable to accommodate the latest generation of high and variable density IT equipment, resulting in cooling systems that are inefficient, unpredictable, and low in power density. To address these problems, row-oriented and rack-oriented cooling architectures have been developed. Our presentation, "Is Perimeter Cooling Dead", examines and contrasts the 3 basic cooling approaches: room, row, and rack architectures. Each approach has an appropriate application, but row-oriented cooling is emerging as the solution of choice for most next generation data centers. Next generation data centers demand the flexibility, predictability, scalability, reduced electrical power consumption, reduced TCO and optimum availability that row and rack-oriented cooling architectures can provide. Additionally, for many applications, the factors that gave rise to the establishment and use of the raised floor no longer make it justified or desirable. To learn more about this topic, "Is Perimeter Cooling Dead" is available for your organization through our unique "Breakfast and Learn" Educational Series For Data Center Professionals.

About Universal Networking Services Blog “Don’s Corner”:

Data center industry blog that discusses the most relevant topics challenging the data center industry today.  Don Melchert, Critical Facility Specialist, shares in “Don’s Corner” his extensive knowledge and experience from the mission critical arena.  “Don’s Corner” highlights real world experiences to provide you the tools to maximize your efficiency while lowering your operating costs.

About Universal Networking Services LinkedIn “Data Center Critical Power and Cooling Group”:

Join UNS and industry peers on an open forum to discuss the following topics relating to data center “best practices”.  Key areas are:

– Power

– Cooling

– Racks

– Security

– Management

– Fire Suppression

– Personnel

Submit your questions or comments on issues that affect your data center and have them answered by our Critical Facility Specialist.  We provide valuable insight to common problems that often plague modern data centers. Join the discussion at Data Center Critical Power and Cooling LinkedIn Group.

About Universal Networking Services Twitter:

Follow us on Twitter as we tweet/share industry related news, event postings, and company updates.

APC Data Center University

Data Center University™ (DCU) offers industry-leading education for IT professionals‚ facilities managers‚ engineers‚ designers‚ consultants‚ and anyone involved in the critical decisions and infrastructure planning of data centers worldwide. The changing nature of data centers‚ and the technology that impacts them‚ makes it even more critical that employees remain up to date on the current theories and best practices for issues around topics of power‚ cooling‚ management‚ security‚ and planning.

DCU provides a full curriculum of courses that educate and deliver up-to-the-minute information when and where you need it. Our online program is intended to be manageable and attainable‚ and with our Certification exam‚ you can quantify your learning and experience as a true data center professional!

Energy University by Schneider Electric

Schneider Electric, the global specialist in Energy Management, has launched an e-learning website, Energy University, to provide the latest information and professional training on Energy Efficiency concepts and best practices! All in ONE Place – All in ONE site!

In addition to learning new energy-saving ideas that directly contribute to the overall well-being of the earth, you will also become an even more valuable employee by contributing to the bottom line for your company. Learn something new today and apply the knowledge tomorrow. Become an Energy Efficiency Champion!

Don’s Corner: “Is Perimeter Cooling Dead?”

Posted on March 6, 2012

Don Melchert, Critical Facility Specialist

“IS PERIMETER COOLING DEAD?”

Traditional data center "room" cooling is unable to accommodate the latest generation of high and variable density IT equipment, resulting in cooling systems that are inefficient, unpredictable, and low in power density. To address these problems, row-oriented and rack-oriented cooling architectures have been developed. Our presentation, "Is Perimeter Cooling Dead", examines and contrasts the 3 basic cooling approaches: room, row, and rack architectures. Each approach has an appropriate application, but row-oriented cooling is emerging as the solution of choice for most next generation data centers. Next generation data centers demand the flexibility, predictability, scalability, reduced electrical power consumption, reduced TCO, and optimum availability that row and rack-oriented cooling architectures can provide. Additionally, the factors that gave rise to the establishment and use of the raised floor in the data center environment are presented; for many applications, the use of the raised floor is no longer justified or desirable. To learn more about this topic, "Is Perimeter Cooling Dead" is available for your organization via our unique "Breakfast and Learn" Educational Series For Data Center Professionals. This series provides the education that will lay the critical foundation for your organization to run an efficient data center. Please feel free to contact me for further information at d.melchert@apcdistributors.com or 918-760-8236.

The Advantages of Row and Rack-Oriented Cooling Architectures for Data Centers (White Paper #30) Overview:

Executive Summary:

Latest generation high density and variable density IT equipment create conditions that traditional data center room cooling was never intended to address, resulting in cooling systems that are inefficient, unpredictable, and low in power density. Row-oriented and rack-oriented cooling architectures have been developed to address these problems. This paper contrasts room, row, and rack architectures and shows why row-oriented cooling will emerge as the preferred solution for most next generation data centers.

Highlights:

  • Discuss the following cooling approaches:  room, row and rack-based cooling architectures.
  • Benefit comparison of cooling architectures:  challenges in agility, availability, lifecycle costs, serviceability, and manageability
  • Special issues:  capacity utilization, humidification, electrical efficiency, water near IT equipment, location and redundancy.
  • Elements of the raised floor and problems associated with using a raised floor.
  • Hurdles to eliminating the raised floor
  • Designing without a raised floor.

Conclusion:

The conventional legacy approach to data center cooling using room-oriented architecture has technical and practical limitations in next generation data centers. The need of next generation data centers to adapt to changing requirements, to reliably support high and variable power density, and to reduce electrical power consumption and other operating costs has directly led to the development of row and rack-oriented cooling architectures. These two architectures are more successful at addressing these needs, particularly at operating densities of 3 kW per rack or greater. The legacy room-oriented approach has served the industry well, and remains an effective and practical alternative for lower density installations and those applications where IT technology changes are minimal.

Row and rack-oriented cooling architecture provides the flexibility, predictability, scalability, reduced electrical power consumption, reduced TCO, and optimum availability that next-generation data centers require. Users should expect that many new product offerings from suppliers will utilize these approaches.

It is expected that many data centers will utilize a mixture of the three cooling architectures. Rack-oriented cooling will find application in situations where extreme densities, high granularity of deployment, or unstructured layout are the key drivers. Room-oriented cooling will remain an effective approach for low density applications and applications where change is infrequent. For most users with newer high density server technologies, row-oriented cooling will provide the best balance of high predictability, high power density, and adaptability, at the best overall TCO.


Avoidable Mistakes that Compromise Cooling Performance in Data Centers and Network Rooms (White Paper #49) Overview:

Executive Summary:

Avoidable mistakes that are routinely made when installing cooling systems and racks in data centers or network rooms compromise availability and increase costs. These unintentional flaws create hot-spots, decrease fault tolerance, decrease efficiency, and reduce cooling capacity. Although facilities operators are often held accountable for cooling problems, many problems are actually caused by improper deployment of IT equipment outside of their control. This paper examines these typical mistakes, explains their principles, quantifies their impacts, and describes simple remedies.

Conclusion:

The air distribution system is a part of the data center that is not well understood, and facility operators and IT personnel often take actions involving airflow that have unintentional and adverse consequences to both availability and cost.

Flawed airflow implementation has not been a serious problem in the past, due to low power density in the data center. However, recent increases in power density are beginning to test the capacity of cooling systems and give rise to hot spots and unexpected limitations of cooling capacity.

Decisions such as facing all racks in the same direction are often made for cosmetic reasons to project image; but as users and customers become more educated they will conclude that people who do not implement airflow correctly are inexperienced, which is the opposite of the original intent.

Adopting a number of simple policies and providing a simple justification for them can achieve alignment between IT and Facilities staff resulting in maximum availability and optimized TCO.


Don’s Corner: “Right-Sizing Versus Over-Sizing: Efficiency In The Data Center”

Posted on February 29, 2012

Don Melchert, Critical Facility Specialist

“RIGHT-SIZING VERSUS OVER-SIZING: EFFICIENCY IN THE DATA CENTER”

Forecasting and measuring the total cost of ownership (TCO) for Data Center Physical Infrastructure (DCPI) is essential for return-on-investment (ROI) analysis. Oversizing is the main contributor to excess TCO. Oversizing creates inefficiencies in the data center, including excess capital cost, operating cost, and specifically energy cost. The average data center operator can achieve the highest return on investment in relation to DCPI through right-sizing. Right-sizing the DCPI system to the load is the key to optimizing TCO and has the most impact on DCPI electrical consumption. Right-sizing can potentially eliminate up to 50% of the electrical bill in real-world scenarios. For example, the potential electricity cost savings for a typical 1 MW data center has been shown to be $2,000,000 to $4,000,000 over a typical 10-year life of the facility. Data center efficiency is key to controlling your energy costs and should be a topic of significant importance to all data center operators.
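As a rough sanity check on figures like those, here's a simple sketch in Python. The load, utility rate and right-sizing recovery fraction below are assumptions chosen for illustration, not inputs taken from the white papers; substitute your own numbers.

```python
def annual_electric_cost(avg_draw_kw, rate_per_kwh):
    """Annual utility cost for a constant average electrical draw (8,760 h/yr)."""
    return avg_draw_kw * 8760 * rate_per_kwh

# Assumed inputs for illustration only; replace with your own.
facility_draw_kw = 1000       # a "typical 1 MW data center"
rate = 0.10                   # $/kWh, assumed utility rate
rightsizing_recovery = 0.30   # fraction of the bill recovered by right-sizing, assumed

baseline = annual_electric_cost(facility_draw_kw, rate)
ten_year_savings = baseline * rightsizing_recovery * 10

print(f"baseline electric bill: ${baseline:,.0f} per year")
print(f"10-year savings at {rightsizing_recovery:.0%} recovery: ${ten_year_savings:,.0f}")
```

With those assumed inputs the result lands in the same $2,000,000 to $4,000,000 neighborhood quoted above; the exact figure matters less than how quickly oversizing turns into real money.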

The single largest avoidable cost associated with typical data center and network room infrastructure is oversizing.  To learn more about how you can prevent this unnecessary cost, “Right-Sizing Versus Over-Sizing: Efficiency In The Data Center” is available for your organization via our unique “Breakfast and Learn” Educational Series For Data Center Professionals.  This series provides the education that will lay the critical foundation for your organization to run an efficient data center. Please feel free to contact me for further information at d.melchert@apcdistributors.com or 918-760-8236. Additional resources pertaining to this topic are presented below for your reference.

Avoiding Costs from Oversizing Data Center and Network Room Infrastructure (White Paper #37) Overview:

Executive Summary:

The physical and power infrastructure of data centers and network rooms is typically oversized by more than 100%. Statistics related to oversizing are presented. The costs associated with oversizing are quantified. The fundamental reasons why oversizing occurs are discussed. An architecture and method for avoiding oversizing is described.

Highlights:

  • TCO cost drivers and opportunities to control TCO.
  • Information and statistics related to over-sizing.
  • Practical advantages of right-sizing.
  • Energy consumption reduction in DCPI equipment.
  • Examine key reasons why the industry is moving toward modular, scalable DCPI solutions.

Conclusion:

Data centers and network rooms are routinely oversized to three times their required capacity. Oversizing drives excessive capital and maintenance expenses, which are a substantial fraction of the overall lifecycle cost. Most of this excess cost can be recovered by implementing a method and architecture that can adapt to changing requirements in a cost-effective manner while at the same time providing high availability.


Implementing Energy Efficient Data Centers (White Paper #114) Overview:

Executive Summary:

Electricity usage costs have become an increasing fraction of the total cost of ownership (TCO) for data centers. It is possible to dramatically reduce the electrical consumption of typical data centers through appropriate design of the data center physical infrastructure and through the design of the IT architecture. This paper explains how to quantify the electricity savings and provides examples of methods that can greatly reduce electrical power consumption.

Conclusion:

The cost of electricity for data centers is a substantial operating cost that can and should be managed. A data center designed for reduced power consumption also saves other costs such as capital and operating costs associated with power and cooling systems, as well as saving space.

Electrical consumption of existing data centers can be reduced through various low-cost methods, but primarily via migration to more energy-efficient computing platforms. For new data centers, there are additional options, both in the IT architecture and in the DCPI architecture, that can gain much greater savings.

The electrical power consumption is typically shared evenly between the IT loads and DCPI devices. Any rational approach to reduction in electrical usage must treat the combined IT / DCPI design as a system in order to maximize the benefit.

Some equipment providers offer complete standardized data center designs specifically engineered for efficiency, and energy efficiency audit services are available for users desiring to reduce power consumption in existing data centers.

The cost savings opportunities have been shown to be very large yet the investment required to achieve them is small or even zero in some cases, when compared with legacy approaches to data center design.


Determining Total Cost of Ownership for Data Center and Network Room Infrastructure (White Paper #6) Overview:

Executive Summary:

An improved method for measuring total cost of ownership (TCO) of data center and network room physical infrastructure and relating these costs to the overall Information Technology infrastructure is described, with examples. The cost drivers of TCO are quantified. The largest cost driver is shown to be unnecessary unabsorbed costs resulting from the oversizing of the infrastructure.

Conclusion:

Expressing TCO for data center and network room physical infrastructure on a per-rack basis normalizes the measurement of TCO, providing a metric that can be used to compare data centers/network rooms and to compare different approaches to design.

The use of a TCO calculation tool and method was described. This tool permits the evaluation of cost control strategies, and can estimate TCO for specific installations.

The per rack lifetime TCO of a high availability data center is approximately $120K. The oversizing of infrastructure is a major contributor to this cost, and on average 30% cost savings can be obtained by implementing practical design techniques using modular scalable data center infrastructure.


Don’s Corner: MGE EPS 3000- Does “End of Service” mean “End of the World”?

Posted on January 9, 2012

MGE EPS 3000- Does “End of Service” mean “End of the World”?

Don Melchert, Critical Facility Specialist

When the manufacturer sets an End of Life date for their product, it doesn’t mean the end of the world for your critical infrastructure.  On the contrary, more often than not, it’s a blessing in disguise.   Over the span of ten years, the way we store and transmit data has changed dramatically.  Consider this, is anyone out there still using a floppy disk?  Let’s hope not, but ten years ago, many of us still were.  Oddly enough, as is the case in many data centers, the same trusty, dusty UPS has stayed right there while the world around it changed every two to three years.

In November of 2000, the last MGE EPS 3000 UPS rolled off of the assembly line and was placed into service.  Current owners of the MGE product line, Schneider Electric, will no longer support the MGE EPS 3000 series UPS as of December 31, 2011. In essence, this means that if you currently hold a service contract, you won’t be able to purchase a renewal after the EOS date.  In addition, Schneider Electric will no longer stock replacement parts, nor will they continue to train their Field Service Engineers to respond to service calls for the EPS 3000.   On the one hand, for a Data Center Manager, not being able to have support on their UPS can cause a panic attack.   On the other hand, this may be exactly the sort of ammo needed to present at the next budget meeting that might loosen the budget enough to make some serious efficiency changes.  Changes, yes, but not just in terms of a decrease in energy consumption…that’s easy.  No, I’m talking about finally having the chance to change your power infrastructure in a way that allows a company to never again worry about an “End of Service panic attack.”

With today's hot-swappable UPS designs, such as the APC Symmetra UPS, our industry is finally in a position to break free of our dependency on high-cost service contracts. By design, the APC Symmetra UPS allows a company's I.T. staff, with minimal training or experience, to make repairs without ever contacting their service provider in the first place. After all, the UPS is smart enough to diagnose a fault and intelligent enough to know whether that fault warrants an all-out shutdown, or simply an alarm issued to its owner. Armed with the knowledge that their entire UPS, from battery modules to the static switch assembly, can be changed on the fly, Data Center Managers are able to scale down or, in many cases, eliminate the cost of high-end service contracts altogether in favor of newer, high-efficiency UPS systems. When faced with the thought of finding someone else to service an outdated EPS 3000 after the manufacturer won't, do we really want to pay more to a third-party service provider, only to be faced with an even longer mean time to repair?

The question is, what's your time worth to you, and more importantly, what's your time worth to your company? An hour spent on the phone with the manufacturer struggling to get a UPS failure diagnosed and a repair tech scheduled could instead be a few minutes spent changing out a failed power module and returning to the myriad of projects already being managed by the I.T. staff. You know, the important stuff, like resetting a user's password, again, for the third time this week…

Please feel free to contact me with any questions at d.melchert@apcdistributors.com or call 918-760-8236.

Don’s Corner: “Data Center Efficiency Assessment: Return On Investment (ROI) Summary”

Posted on October 24, 2011

Don Melchert, Critical Facility Specialist

Understanding where and how one's energy is being wasted is essential to creating an energy savings plan. However, once armed with this information, it can be difficult and confusing to determine your best Return On Investment (ROI) when choosing one solution over another. The majority of critical facility inspections and evaluations focus only on the inner workings of the data center. The goal, for most, is to show how much energy has been saved by making the changes or corrections found during the inspection. Their entire approach is to locate key points in the data center where energy is being wasted or resources are being utilized inefficiently. However, the ability of a critical facility to remain online 100% of the time is the result of all of its Network Critical Physical Infrastructure components working together efficiently, not just what occurs solely in the data center.

A Critical Facility Energy Profile (CFEP) from UNS is different in that the evaluation is holistic, right from the start. A CFEP inspection actually begins from the moment the Critical Facility Specialist arrives at the site. The entire facility and its NCPI components, from its infrastructure to its security, are taken into consideration when establishing an efficiency benchmark. Locating points where efficiency can be improved results in changes that will immediately reduce costs for a business across multiple cost centers.

Any dollar value quoted for the ROI after a CFEP is performed will err on the low side, as there is not sufficient data to accurately determine the future savings gained from a particular increase in efficiency. The cost savings are dynamic: savings in one area compound into savings in another, and another. Additionally, the associated costs used to calculate a particular savings are constantly in flux due to external factors such as inflation and market value. However, if we dissect and combine particular sections, such as a decrease in energy consumption gained by improving air conditioning efficiency and a change from a legacy UPS to a hot-swappable modular solution, then we can show an approximate dollar savings for those sections.

Today's Data Center Managers are using new calculations and turning to solutions that improve their Power Usage Effectiveness (PUE) score. When evaluating one solution against another, solutions that improve a company's PUE (that is, drive it lower) while decreasing energy consumption should be chosen over longer-term alternatives, as the result is a quicker and more substantial ROI.

Here is an actual ROI (Return On Investment) Summary Report provided for a recent customer. An ROI Summary Report is included with the CFEP evaluation and provided to the customer. The customer in this report has been de-identified to protect their security and privacy.

ROI (RETURN ON INVESTMENT) SUMMARY:

"At the current average cost of $0.08 per kWh in the state of Virginia, the following calculations have been made to demonstrate your current energy usage compared with what your usage would be if all of the recommended solutions in this CFEP were implemented.

With your current deployment of legacy-style data center physical infrastructure, an 80kW critical load and a single Liebert CRAC, The "XXXX" Facility is spending approximately $463,000.00 to support each data center, for a total of $926,000.00, with a PUE of 8.26. PUE is the ratio of the total energy a facility consumes to the energy actually delivered to the IT equipment, and it is an industry-accepted standard for measuring efficiency.

If The "XXXX" Facility were to implement all of the money-saving solutions listed in the pages of this CFEP, and without decreasing the critical load in any way, the numbers change significantly. With the help of UNS, The "XXXX" Facility could become 37.5% more efficient, your PUE drops to 3.10, and you would be spending only $175,000.00 per data center, or $350,000.00 to support both data centers annually. By making efficiency changes in your data center now, The "XXXX" Facility would save approximately $576,000 a year, or, assuming the cost of a kilowatt-hour does not increase, $2,880,000 over a 5-year span."
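The numbers in that report hang together with a simple annual-cost model: annual cost ≈ IT load × PUE × 8,760 hours × utility rate. Here's a short sketch in Python that reproduces the report's figures from the values quoted above (the small differences are rounding):

```python
def annual_cost(it_load_kw, pue, rate_per_kwh):
    """Approximate annual utility cost: facility draw = IT load x PUE, running 8,760 h/yr."""
    return it_load_kw * pue * 8760 * rate_per_kwh

IT_LOAD_KW = 80   # critical load per data center, from the report
RATE = 0.08       # $/kWh, the Virginia average quoted in the report

before = annual_cost(IT_LOAD_KW, pue=8.26, rate_per_kwh=RATE)  # report: ~$463,000
after = annual_cost(IT_LOAD_KW, pue=3.10, rate_per_kwh=RATE)   # report: ~$175,000

print(f"per data center: ${before:,.0f} -> ${after:,.0f} per year")
print(f"both data centers, annual savings: ${2 * (before - after):,.0f}")
print(f"over 5 years (flat kWh rate assumed): ${2 * 5 * (before - after):,.0f}")
```

Run it and you get roughly $463,000 and $174,000 per data center, about $579,000 a year across both, and about $2.9 million over five years, right in line with the report.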

Learn More:

To learn more about our Data Center Efficiency Assessment Services please visit us at www.criticalpowerandcooling.com.  Please feel free to contact Don Melchert with any questions at d.melchert@apcdistributors.com or call 918-760-8236.

