- Test the data centre: run an emergency shutdown test and a power-up from a complete lights-out state.
- Consolidate servers using virtualisation.
- Improve power-supply efficiency in servers.
- Using networked storage can also reduce energy costs.
- Check for airflow blockages under the floor.
- Seal leaks in the racks, which drive up the need for airflow. Insert blanking plates, and make sure those you already have are in the right place.
- Consider raising the temperature a few degrees. If the outside air is cold, design air-conditioning systems that can take advantage of it.
- Use variable-speed fans. Most air-conditioning fans run on a 100 percent duty cycle at a single speed, but a dynamic fan can use temperature sensors to increase and decrease fan speed as needed.
- Right-size UPSs. UPSs are often over-sized, and older models may not be designed to run efficiently at low utilisation rates.
- Store data on tape or offline wherever possible.
- Install motion detectors to turn off lighting when nobody is working, and recycled-water collection systems for backup cooling.
- Perform a health check. Before embarking upon expensive upgrades to the data centre to deal with cooling problems, certain checks should be carried out to identify potential flaws in the cooling infrastructure. These checks determine the health of the data centre, helping to avoid temperature-related IT equipment failure, and can also be used to evaluate whether adequate cooling capacity is available for the future. The current status should be reported and a baseline established so that subsequent corrective actions can be shown to result in improvements. A cooling system checkup should include the following items: maximum cooling capacity; CRAC (computer room air conditioning) units; chilled water/condenser loop; room temperatures; rack temperatures; tile air velocity; condition of sub floors; airflow within racks; and aisle and floor tile arrangement.
- Initiate a cooling system maintenance schedule. Regular servicing and preventive maintenance are essential to keeping the data centre operating at peak performance. If the system has not been serviced for some time, this should be initiated immediately. A regular maintenance regime should be implemented to meet the recommended guidelines of the manufacturers of the cooling components.
- Install blanking panels and implement a cable maintenance schedule. Unused vertical space in rack enclosures causes the hot exhaust from equipment to take a “shortcut” back to the equipment’s intake. This unrestricted recycling of hot air means that equipment heats up unnecessarily. Installing blanking panels prevents cooled air from bypassing the server intakes and stops hot air from recycling. Airflow within the rack is also affected by unstructured cabling arrangements, which can restrict the exhaust air from IT equipment. Unnecessary or unused cabling should be removed, data cables should be cut to the right length, and patch panels used where appropriate. Power to the equipment should be fed from rack-mounted PDUs with cords cut to the proper length.
- Remove under-floor obstructions and seal the floor. In data centres with a raised floor, the sub floor is used as a plenum, or duct, to provide a path for cool air to travel from the CRAC units to the vented floor tiles (perforated tiles or floor grilles) located at the front of the racks. This sub floor is often used to carry other services such as power, cooling pipes, network cabling and, in some cases, water and/or fire detection and extinguishing systems. During the data centre design phase, design engineers will specify a floor depth sufficient to deliver air to the vented tiles at the required flow rate. The subsequent addition of racks and servers results in the installation of more power and network cabling, and when servers and racks are moved or replaced, the old cabling is often abandoned beneath the floor. Air distribution enhancement devices can alleviate the problem of restricted airflow, and overhead cabling can prevent the problem from arising at all. If cabling is run beneath the floor, sufficient space must be provided to allow the airflow required for proper cooling. Ideally, sub-floor cable trays should be run at an upper level beneath the floor, keeping the lower space free to act as the cooling plenum. Missing floor tiles should be replaced and tiles reseated to remove any gaps. Cable cut-outs in the floor cause the majority of unwanted air leakage and should be sealed around the cables. Tiles with unused cut-outs should be replaced with full tiles, as should tiles adjacent to empty or missing racks.
- Separate high-density racks. When high-density racks are clustered together, most cooling systems become ineffective; distributing these racks across the entire floor area alleviates the problem. Spreading out high-density loads works because an isolated high-power rack can effectively “borrow” under-utilised cooling capacity from neighbouring racks. This effect cannot work, however, if the neighbouring racks are already using all the capacity available to them.
- Implement a hot aisle/ cold aisle environment, where cold aisles contain the vented floor tiles and racks are arranged so that all server fronts (intakes) face a cold aisle. Hot air exhausts into the hot aisle, which contains no vented floor tiles.
- Align air handling units with hot aisles to optimise cooling efficiency. With a raised-floor cooling system it is more important to align CRAC units with the air return path (hot aisles) than with the sub floor air supply path (cold aisles).
- Manage floor vents. Rack airflow and rack layout are key elements in maximising cooling performance. However, improper location of floor vents can cause cooling air to mix with hot exhaust air before reaching the load equipment, giving rise to the cascade of performance problems and costs described earlier. Poorly located delivery or return vents are very common and can negate nearly all the benefits of a hot-aisle/cold-aisle design. The key with air delivery vents is to place them as close as possible to equipment intakes, which keeps cool air in the cold aisles.
- Install airflow-assisting devices. Where the overall average cooling capacity is adequate but high-density racks have created hot spots, cooling within racks can be improved by retrofitting fan-assisted devices that improve airflow and can increase cooling capacity to between 3 kW and 8 kW per rack.
- Install self-contained high density devices. As power and cooling requirements within a rack rise above 8 kW, it becomes increasingly difficult to deliver a consistent stream of cool air to the intakes of all the servers when relying on airflow from vented floor tiles. In extreme high density situations (greater than 8 kW per rack), cool air needs to be directly supplied to all levels of the rack - not from the top or the bottom - to ensure an even temperature at all levels. Self-contained, high density cooling systems that accomplish this are designed to be installed in a data centre without impacting any other racks or existing cooling systems. Such systems are thermally “room neutral” and will either take cool air from the room and discharge air back into the room at the same temperature, or use their own airflow within a sealed cabinet.
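The variable-speed fan point above can be illustrated with a minimal control sketch: instead of running at 100 percent, the fan's duty cycle ramps with the measured intake temperature. The set points, ramp range, and minimum duty cycle below are illustrative assumptions, not values from any vendor's controller.

```python
# Hypothetical variable-speed fan controller sketch: map a sensor temperature
# to a fan duty cycle rather than running the fan flat-out at one speed.
# All set points here are example assumptions, not vendor defaults.

def fan_duty_cycle(intake_temp_c, min_temp=18.0, max_temp=27.0,
                   min_duty=0.3, max_duty=1.0):
    """Linearly ramp the fan duty cycle between min_duty and max_duty
    as the intake temperature moves from min_temp to max_temp."""
    if intake_temp_c <= min_temp:
        return min_duty          # coolest case: run fans at the floor speed
    if intake_temp_c >= max_temp:
        return max_duty          # hottest case: full speed
    span = (intake_temp_c - min_temp) / (max_temp - min_temp)
    return min_duty + span * (max_duty - min_duty)

if __name__ == "__main__":
    for t in (16, 20, 24, 28):
        print(f"{t} °C -> fan at {fan_duty_cycle(t):.0%}")
```

In practice the ramp curve and hysteresis come from the CRAC vendor's controls; the point of the sketch is only that fan speed should track demand rather than stay fixed.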
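The health-check item above calls for reporting current status and establishing a baseline. As a rough sketch of what that report could look like for the rack-temperature check, the snippet below compares measured intake temperatures against a limit and records a baseline snapshot. The rack names, readings, and the 27 °C limit are example assumptions, not measurements from any real facility.

```python
# Illustrative rack-temperature health check: flag hot spots against an
# assumed intake limit and keep a baseline snapshot so that later corrective
# actions can be compared against it. All values are example assumptions.

INTAKE_LIMIT_C = 27.0  # assumed limit; real limits come from equipment specs

def health_report(readings, limit=INTAKE_LIMIT_C):
    """Return (baseline, hot_spots): baseline is a copy of the snapshot,
    hot_spots lists racks over the limit, hottest first."""
    hot = sorted((rack for rack in readings if readings[rack] > limit),
                 key=readings.get, reverse=True)
    return dict(readings), hot

if __name__ == "__main__":
    readings = {"rack-A1": 23.5, "rack-A2": 29.1, "rack-B1": 25.0}
    baseline, hot_spots = health_report(readings)
    for rack in hot_spots:
        print(f"{rack}: {baseline[rack]:.1f} °C exceeds {INTAKE_LIMIT_C} °C")
```

The same shape of report extends naturally to the other checklist items (tile air velocity, CRAC status, and so on): measure, compare against a limit, and keep the snapshot as the baseline.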
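The "borrowing" argument for separating high-density racks reduces to simple arithmetic: a row can cool its load only while the total load stays within the total capacity its racks can draw on. The per-rack cooling figure and loads below are illustrative assumptions chosen to make the headroom effect visible.

```python
# Back-of-the-envelope sketch of why spreading out high-density racks works:
# an isolated high-power rack can "borrow" unused cooling from its
# neighbours, but only while those neighbours have headroom.
# The per-rack capacity and loads are example assumptions.

PER_RACK_COOLING_KW = 4.0  # assumed cooling available per rack position

def can_cool(loads_kw):
    """True if the total cooling available across the row covers the load."""
    capacity = PER_RACK_COOLING_KW * len(loads_kw)
    return sum(loads_kw) <= capacity

# One 10 kW rack among lightly loaded neighbours: headroom can be borrowed.
print(can_cool([10.0, 1.0, 1.0]))   # 12 kW load vs 12 kW capacity

# The same rack among other busy racks: no spare capacity to borrow.
print(can_cool([10.0, 4.0, 4.0]))   # 18 kW load vs 12 kW capacity
```

This is exactly the caveat in the tip above: clustering defeats the borrowing effect because the neighbours are already consuming their own share.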