
Firewall's best practice holy cows

Kevin Beaver has written a best-practice firewall document. He doesn't mention these holy cows in his list. The holy cows are:
  • The firewall is security. It is all about the perimeter.
  • Take your time to approve each firewall rule set, as this is risk mitigation. There is no need for a set of standard changes, as each change needs to be reviewed separately. 
  • Don't document. This is insecure as someone will steal it.
  • Don't use virtual firewalls. They are unreliable and more vulnerable.
  • Don't use VLANs. They are insecure and leak.
  • Always use two firewalls from different vendors in a cascaded installation. Nothing will ever go wrong twice, but there will be at least two salesmen earning commission.
  • Don't allow any UDP or ICMP (even for something as useful as network management; see the sketch after this list). Out of sight is out of mind.
  • Don't have geographical failovers connected at layer 2. Layer 3 is always more secure.
  • Don't allow dynamic routing (or, for that matter, any network-related function). A firewall is not a router.
  • The only way to secure Internet Browsers is to use a forward proxy. Nothing else can scrub traffic.
  • Cloning MACs on different firewalls makes them highly available.
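To make the UDP/ICMP cow concrete, here is a minimal Python sketch of two routine network-management probes: an ICMP echo for availability and a UDP datagram towards a management port. The device names and the placeholder payload are hypothetical examples, not anything from Kevin Beaver's document. A blanket "no UDP, no ICMP" rule silently breaks both.

# Minimal sketch (hypothetical hosts): two routine network-management
# probes that a blanket "block all UDP and ICMP" policy breaks.
import socket
import subprocess

def icmp_reachable(host: str) -> bool:
    # Availability check via the system ping (ICMP echo request).
    # Uses Linux-style ping flags: one packet, two-second timeout.
    result = subprocess.run(["ping", "-c", "1", "-W", "2", host],
                            capture_output=True)
    return result.returncode == 0

def udp_probe(host: str, port: int = 161) -> None:
    # Send a datagram towards a management port (161/udp is SNMP).
    # The payload is a placeholder, not a real SNMP PDU.
    try:
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.settimeout(2)
            s.sendto(b"\x00", (host, port))
    except OSError as err:
        print(f"{host}: UDP probe failed ({err})")

if __name__ == "__main__":
    for device in ("core-switch.example.net", "edge-router.example.net"):
        print(device, "reachable via ICMP:", icmp_reachable(device))
        udp_probe(device)

Block the protocols outright and every device looks "down" to monitoring, which is exactly the out-of-sight problem the holy cow pretends is a feature.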
My opinion about these best practice holy cows is that they should be minced into hamburger patties. These holy cows are not part of any risk management methodology. I don't know where these holy cows originate (the source is protected by holy cow number three!). I also don't know who or what determined them to be "best".
I know of no major incident where one of these holy cows would have been a suitable countermeasure. In reality, they have often been among the causes of major incidents (a major incident being one with severe negative business consequences) or have needlessly extended resolution times in the expanded incident life cycle.
Are vendors who provide equipment that doesn't follow these "best practices" delivering "bad practices"? If we look at virtual firewalls, it is my experience that it is easier to administer 10 firewalls with 20 rules each than to manage 1 firewall with 200 rules. Virtual firewalls make reviewing rule sets easier, and thus, by definition, are a better risk mitigation.
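As a back-of-the-envelope illustration of that claim, here is a small Python sketch. The Rule fields, tenant names and rule counts are made up for the example; the point is simply that the same 200 rules, split across 10 virtual firewall contexts, give each reviewer a 20-rule set to reason about.

# Illustrative sketch with made-up rules: the same 200-rule policy,
# reviewed as ten 20-rule virtual firewall contexts.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Rule:
    tenant: str   # which virtual firewall (context) owns the rule
    src: str
    dst: str
    port: int
    action: str   # "permit" or "deny"

def split_by_tenant(rules):
    # Group a flat policy into per-context rule sets.
    contexts = defaultdict(list)
    for rule in rules:
        contexts[rule.tenant].append(rule)
    return contexts

if __name__ == "__main__":
    # 10 hypothetical tenants x 20 rules each = 200 rules in total.
    flat_policy = [Rule(f"tenant-{t:02d}", f"10.{t}.0.0/24", "0.0.0.0/0", 443, "permit")
                   for t in range(10) for _ in range(20)]
    for tenant, ruleset in sorted(split_by_tenant(flat_policy).items()):
        print(f"{tenant}: {len(ruleset)} rules to review")

Rule counts alone are not the whole review story, but small per-context rule sets are what make the review tractable in the first place.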
 
PS: The MAC cloners are also prime bullsh#tters.
 
