
5 practices that need to be taken into account when migrating to a more modern data centre

 

I have previously written about Prestik, Scotch tape and barbed wire data centres. These problematic sites are usually legacy environments. Data centres have become highly consolidated and demand technical and operational efficiency, so legacy practices need to be corrected when migrating. This article details a few of them.

  1. Legacy IT equipment in a data centre often does not have dual power supplies. Modern data centres provide resilient A and B power feeds to maintain uptime, and equipment with only one power supply cannot benefit from this redundancy. Short of replacing the IT equipment, a temporary solution is to install a rack-mounted automatic transfer switch (ATS); see the first sketch after this list.
  2. Non-standard cabinets are not optimal for cooling. Cooling is optimized using hot-aisle containment, in which racks are installed in pods and all the racks share a similar form factor. Non-standard racks are difficult to fit into these pods. Butcher's curtains or similar panelling can provide a degree of containment until the racks are migrated or reconfigured.
  3. In a legacy environment the network cabling is typically copper, run to central row switches or in some cases all the way to the main core data centre switches themselves. As a result, legacy data centres carry a disproportionately large quantity of copper cabling. Newer network architectures use fibre interconnections between racks and to the core data centre switches, with copper cabling usually confined to within the racks themselves. Some chassis-based platforms even offload this connectivity to a backplane within the chassis, eliminating cabling further.
  4. Legacy data centres typically rely on manual asset management and paper logs for controlling access. This does not scale, so fully automated digital asset management systems need to be introduced. Access control, including visitor access, also needs to move to a digital system; a modern data centre that is not end-to-end digital across its operations is not viable. The second sketch after this list illustrates the idea.
  5. New technologies introduce wireless Internet of Things (IoT) sensors for monitoring and validating data centre operations. The modern data centre gains dramatically richer analytics from these sensors: at a minimum, they increase the footprint of measurements such as temperature, power, presence and access by a factor of well over ten. The last sketch after this list shows how such readings might be aggregated.
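
Following on from point 1, here is a minimal sketch, assuming a hypothetical inventory structure where each device records its power supply count, of how single-corded equipment could be flagged as a candidate for a rack-mounted ATS. The field names and example devices are illustrative assumptions, not taken from any specific DCIM product.

```python
# Sketch: flag single-corded devices as candidates for a rack-mounted ATS.
# The inventory structure and field names are hypothetical examples.

inventory = [
    {"hostname": "legacy-app-01", "rack": "A03", "psu_count": 1},
    {"hostname": "db-cluster-02", "rack": "A04", "psu_count": 2},
    {"hostname": "old-backup-01", "rack": "B01", "psu_count": 1},
]

def ats_candidates(devices):
    """Return devices that cannot connect to both A and B feeds directly."""
    return [d for d in devices if d["psu_count"] < 2]

for device in ats_candidates(inventory):
    print(f"{device['hostname']} in rack {device['rack']} has a single PSU: "
          f"consider a rack-mounted ATS or replacement before migration.")
```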
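
For point 4, the sketch below shows, in deliberately simplified form, what replacing paper records with a digital asset register and access log might look like. It is a toy in-memory example with assumed field names, not a real DCIM or access-control product.

```python
# Sketch: a toy digital asset register and access log replacing manual records.
# All classes and field names are illustrative assumptions.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Asset:
    asset_tag: str
    rack: str
    description: str

@dataclass
class Register:
    assets: dict = field(default_factory=dict)
    access_log: list = field(default_factory=list)

    def add_asset(self, asset: Asset) -> None:
        self.assets[asset.asset_tag] = asset

    def log_access(self, person: str, visitor: bool, purpose: str) -> None:
        # Every entry, including visitors, is timestamped automatically.
        self.access_log.append({
            "person": person,
            "visitor": visitor,
            "purpose": purpose,
            "time": datetime.now(timezone.utc).isoformat(),
        })

register = Register()
register.add_asset(Asset("DC-000123", "A03", "Legacy application server"))
register.log_access("A. Contractor", visitor=True, purpose="Rack A03 cabling")
print(len(register.assets), "assets,", len(register.access_log), "access entries")
```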
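
And for point 5, this last sketch aggregates hypothetical IoT temperature readings per rack and checks them against an assumed alert threshold. The readings, sensor names and the 27 °C threshold are made up purely for illustration; a real deployment would stream far more data points from the sensors themselves.

```python
# Sketch: aggregate hypothetical IoT sensor readings per rack and flag hot spots.
from collections import defaultdict
from statistics import mean

# Illustrative readings; a real deployment would stream these from the sensors.
readings = [
    {"rack": "A03", "sensor": "temp-top", "celsius": 24.1},
    {"rack": "A03", "sensor": "temp-bottom", "celsius": 22.8},
    {"rack": "B01", "sensor": "temp-top", "celsius": 29.4},
]

THRESHOLD_C = 27.0  # assumed alert threshold

by_rack = defaultdict(list)
for r in readings:
    by_rack[r["rack"]].append(r["celsius"])

for rack, temps in by_rack.items():
    status = "ALERT" if max(temps) > THRESHOLD_C else "ok"
    print(f"Rack {rack}: avg {mean(temps):.1f} °C, max {max(temps):.1f} °C [{status}]")
```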

Bonus: Read the 15 best practices as recommended by a leading IT Consultancy here.

What other practices do you think are worth a mention on this list? Please comment below.

This article was originally published over at LinkedIn: 5 practices that need to be taken into account when migrating to a more modern data centre
