
The importance of downtime in problem management

Juniper Networks wrote an interesting white paper titled What's Behind Network Downtime? Proactive Steps to Reduce Human Error and Improve Availability of Networks. Downtime is the most important metric for problem management because it is part of the most crucial metric of all: TIME.
The paper states that, according to an Infonetics Research study, large businesses lose an average of 3.6 percent of annual revenue to network downtime each year. Another reference, this time based on Gartner research, is this article in Network World.
The Infonetics site published this release in 2006 about smaller businesses: "In a new study on network downtime, Infonetics Research found that medium businesses (101 to 1,000 employees) are losing an average of 1% of their annual revenue, or $867,000, to downtime. The study, The Costs of Downtime: North American Medium Businesses 2006, says that companies experience an average of nearly 140 hours of downtime every year, with 56% of that caused by pure outages."
When an outage occurs, it is possible to calculate the cost of the associated downtime. Here is a web-based downtime calculator; a rough version of the same arithmetic is sketched below.
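As an illustration only, the calculation such a downtime calculator performs can be approximated in a few lines of Python. The revenue, trading-hour, and outage figures below are hypothetical placeholders, not data from the studies quoted above.

```python
# Back-of-the-envelope downtime cost: spread annual revenue over the hours the
# business actually trades, then multiply by the hours lost to outages.
annual_revenue = 50_000_000       # hypothetical annual revenue
business_hours_per_year = 2_000   # hypothetical trading hours per year
outage_hours = 140                # hypothetical hours of downtime per year

revenue_per_hour = annual_revenue / business_hours_per_year
downtime_cost = revenue_per_hour * outage_hours

print(f"Revenue per business hour: {revenue_per_hour:,.0f}")
print(f"Estimated annual cost of downtime: {downtime_cost:,.0f}")
```

Real calculators add factors such as lost productivity and recovery labour, but even this crude estimate makes the point that downtime hours translate directly into money.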
H.L. Mencken made the following statement: "There is always an easy solution to every human problem—neat, plausible, and wrong." The fundamental truth behind this statement is that once the problem has occurred, "die koeël is al deur die kerk" (Afrikaans for "the bullet is already through the church"): the action has happened and nothing can be done to change it. It is prudent to learn lessons from problems and to have mechanisms in place to prevent their recurrence. Fixing human problems with human solutions is insanity (Rita Mae Brown: "Insanity is doing the same thing over and over again but expecting different results."). Example: issuing disciplinary letters to data centre technicians for procedural faults caused by fatigue. Yes, the insanity is there!
The lesson is: crash responsibly. Coding Horror states: "I not only need to protect my users from my errors, I need to protect myself from my errors, too. That's why the first thing I do on any new project is set up an error handling framework. Errors are inevitable, but ignorance shouldn't be. If you know about the problems, you can fix them and respond to them." The error handling framework should be embedded not only in the software or code but throughout the whole system and solution. Many technology companies align themselves with the byline that they are not box droppers but a services company. The core characteristic of a services company is an error handling framework. Sadly, this is where all the 800-pound gorillas fall short!
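As a minimal sketch of what "set up an error handling framework first" could look like in code, assuming a plain Python application and a hypothetical log file name, one might start with a central logger plus an uncaught-exception hook:

```python
import logging
import sys

# Configure one central logger at start-up so every error lands in one place
# instead of disappearing silently. The file name is a hypothetical example.
logging.basicConfig(
    filename="application.log",
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(name)s: %(message)s",
)
log = logging.getLogger("myapp")

def handle_uncaught(exc_type, exc_value, exc_traceback):
    """Record any exception nobody caught, then fail loudly as usual."""
    log.critical("Uncaught exception", exc_info=(exc_type, exc_value, exc_traceback))
    sys.__excepthook__(exc_type, exc_value, exc_traceback)

# Install the hook before any real work starts.
sys.excepthook = handle_uncaught

if __name__ == "__main__":
    log.info("Application started")
    raise RuntimeError("Simulated failure to demonstrate the hook")
```

The point, as the quote says, is knowing about problems so they can be fixed; the same principle then has to be extended beyond the code into the surrounding processes and services.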
One method for building this error handling framework into services is ITIL's expanded incident lifecycle. Straight from the book, ITIL v3, Continual Service Improvement: "(Availability Management) Detailed stages in the Lifecycle of an Incident. The stages are Detection, Diagnosis, Repair, Recovery, Restoration. The Expanded Incident Lifecycle is used to help understand all contributions to the Impact of Incidents and to Plan how these could be controlled or reduced."
The diligent recording of times during a major incident enables a company to identify causes that can be proactively addressed. These translate into reduced downtime, which equates to moolah. There are many possible causes of extended downtime, including:
  • Long detection times or even misses.
  • Inappropriate diagnostics.
  • Logistic issues delaying repair.
  • Slow recovery, for example having to rebuild from scratch because there is no known last good configuration.
  • Slow return to service even though the device is recovered.
  • No workarounds being available or documented.
Diligence around timings in the expanded incident lifecycle is of crucial importance in analysing downtime; the sketch below illustrates the kind of per-stage analysis this makes possible.
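For illustration, assuming hypothetical timestamps captured for a single major incident, the contribution of each expanded-incident-lifecycle stage to total downtime could be tallied like this:

```python
from datetime import datetime

# Hypothetical timestamps recorded for one major incident, following the
# stages of ITIL's expanded incident lifecycle.
incident = {
    "occurred":  datetime(2024, 3, 1, 2, 10),  # failure actually happens
    "detected":  datetime(2024, 3, 1, 2, 45),  # monitoring raises the alarm
    "diagnosed": datetime(2024, 3, 1, 4, 0),   # cause of the fault identified
    "repaired":  datetime(2024, 3, 1, 5, 30),  # faulty component fixed or replaced
    "recovered": datetime(2024, 3, 1, 6, 0),   # device or system back up
    "restored":  datetime(2024, 3, 1, 6, 20),  # service returned to users
}

# Each stage's contribution to total downtime, in lifecycle order.
stages = [
    ("Detection",   "occurred",  "detected"),
    ("Diagnosis",   "detected",  "diagnosed"),
    ("Repair",      "diagnosed", "repaired"),
    ("Recovery",    "repaired",  "recovered"),
    ("Restoration", "recovered", "restored"),
]

total = incident["restored"] - incident["occurred"]
print(f"Total downtime: {total}")
for name, start, end in stages:
    duration = incident[end] - incident[start]
    print(f"{name:<12} {duration}  ({duration / total:.0%} of downtime)")
```

Whatever tooling records the times, the value lies in seeing which stage consumed the downtime (detection, diagnosis, repair, recovery or restoration) and therefore where to act proactively.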

Read further about the importance of investigating downtime in this article.

