Top 25 network problems

The top 25 problems, as listed by Infoblox here, are:
  1. Configuration not saved
  2. Saved configurations don't match policy/best practice
  3. Blocked firewall ruleset, unused ACL entry
  4. Firewall connection count exceeded
  5. Link hog - someone downloading music or video
  6. Interface traffic congestion
  7. Link problems and stability
  8. Environmental limits exceeded
  9. Memory utilization increasing
  10. Incorrect serial bandwidth setting
  11. No QoS
  12. QoS queue drops
  13. Route flaps
  14. OSPF recalculations high
  15. Poor VoIP quality
  16. Routing neighbour changes high
  17. OSPF area not connected to backbone
  18. Unidirectional traffic flow
  19. Router interface down
  20. Unstable or undefined root bridge
  21. Duplex mismatch
  22. Downstream switch
  23. Port in error/disabled state
  24. Unbalanced and unused etherchannels
  25. HSRP or VRRP peer not found
The download link for the list also includes my own network troubleshooting checklist.
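
Many of the items above lend themselves to automated checks. Below is a minimal sketch, assuming the Netmiko Python library and a Cisco IOS device; the address and credentials (192.0.2.1, netops/changeme) are placeholders. It flags two of the listed problems: an unsaved configuration (item 1) and late collisions that typically indicate a duplex mismatch (item 21).

```python
# Minimal sketch: automated checks for two of the problems listed above.
# Assumptions: Netmiko is installed and the device details below are
# placeholders for a real Cisco IOS device on your network.
import re

from netmiko import ConnectHandler

DEVICE = {
    "device_type": "cisco_ios",
    "host": "192.0.2.1",      # placeholder management address
    "username": "netops",     # placeholder credentials
    "password": "changeme",
}

with ConnectHandler(**DEVICE) as conn:
    # Item 1: configuration not saved. A naive diff of running vs startup
    # config; a production check would strip header and timestamp lines first.
    running = conn.send_command("show running-config")
    startup = conn.send_command("show startup-config")
    if running.strip() != startup.strip():
        print("WARNING: running-config differs from startup-config (unsaved changes)")

    # Item 21: duplex mismatch. Late collisions in 'show interfaces' output
    # are a classic symptom of one side running half duplex.
    output = conn.send_command("show interfaces")
    interface = None
    for line in output.splitlines():
        if line and not line[0].isspace():
            interface = line.split()[0]   # e.g. GigabitEthernet0/1
        match = re.search(r"(\d+) late collision", line)
        if match and int(match.group(1)) > 0:
            print(f"WARNING: {interface} shows late collisions - check duplex settings")
```

This is only illustrative; a fuller monitoring approach would poll the same counters over SNMP and cover the remaining items on the list.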
