The concept of Network Security can be pretty daunting. There are lots of rules and regulations that talk about security, how to enforce security, and what you should or shouldn’t do. Well, my brain is pretty full of stuff that is important, like passwords and account numbers, and not so important, like the name of Pink Floyd’s keyboard player, so I have to simplify everything I have to remember. I’ve had to do this for network security too.

Here are the six simple rules I use when I'm thinking about security. I based my rules on every regulation I've ever read (and I've read too many!) and on something I found online by Marcus Ranum and Fred Avolio called the Seven Tenets of Good Security.

1. Keep it Simple! – Simple is better than complex if the same result is achieved. The methods and mechanisms used to implement security, the way a device is managed and used, and the security paradigm embraced by a security policy should all be simple. Simple is usually more transparent, and if something breaks it's normally easier and cheaper to fix.

2. Document your security processes! – Write a security policy. This is where you keep all your security rules covering password changes, money handling, and so on. It will be your business's security bible.

3. Make sure you have reports! – A security device should be configured to gather as much data as possible, and that data must be examined for compromises on a scheduled basis (see the sketch after this list). A local business recently lost thousands of credit card numbers because they weren't checking their security logs!

4. Make the user accountable! – User identification is vitally important if users are to be allowed to use security systems. Each user should have their own login, and logins should never be shared. Many security breaches occur through compromised user accounts, and if a violation does occur, the issue can be traced to the compromised account and the security hole closed.

5. Be Flexible! – A security device or process should be configurable to implement an organization's security policy. It must also be flexible enough to change as the organization's security policy changes.

6. Don't be afraid to test REGULARLY! – The methods and algorithms used to implement security should be tested and reevaluated on a regular basis. If a policy needs to change or isn't working for you, this is the time to change it.
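
To make rule 3 a little more concrete, here's a minimal sketch of the kind of scheduled log check I have in mind, written in Python. It assumes a Linux-style auth log at /var/log/auth.log and a simple failed-password pattern; your devices, log locations, and thresholds will differ, so treat it as a starting point, not a finished monitoring tool.

```python
# Minimal sketch of a scheduled log review (rule 3). It assumes a Linux-style
# auth log at /var/log/auth.log; adjust the path and pattern for your devices.
import re
from collections import Counter

LOG_PATH = "/var/log/auth.log"   # hypothetical location; yours may differ
PATTERN = re.compile(r"Failed password for (?:invalid user )?(\S+)")
THRESHOLD = 10                   # flag accounts with 10 or more failures

failures = Counter()
with open(LOG_PATH, errors="ignore") as log:
    for line in log:
        match = PATTERN.search(line)
        if match:
            failures[match.group(1)] += 1

# Print a simple report; in practice you'd email this or open a ticket.
for account, count in failures.most_common():
    if count >= THRESHOLD:
        print(f"REVIEW: {count} failed logins for account '{account}'")
```

Run something like that from a nightly scheduled job and actually read the output. The local business I mentioned had the data; nobody was looking at it.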

Questions? Comments? Let me hear them!

It's been two weeks and the Northeastern US is still trying to recover from the effects of Hurricane Sandy. The reports of damage and residual effects are well known by now. Large areas along the East Coast have been without power since the hurricane hit on 10/30. Communications networks have been disrupted. Fuel shortages are affecting all modes of transportation.
 
Are we at risk? Even though we are over 100 miles inland, Hurricane Sandy was 1,000 miles wide. A storm of that size would affect large parts of Texas, up to several hundred miles inland, with torrential rains, tornadoes, and winds.

OK, so if there were a big storm we'd need to be ready. What do we do? First, the things you can do now. Make sure your key employees are briefed on your company's emergency plans and on what you expect them to do during and after the storm. Make sure you have all current employee cell and land-line phone numbers. Also, document your critical functions and cross-train your employees; if one of your employees isn't available, you need to make sure your key business processes still function. If you have employees who absolutely must be at work, find a way to get them there. And if you really want effective and motivated employees, make sure you have a way to take care of their families.

Next, make sure you test your system backups. If you employ third-party off-site backups, make sure these are tested and you know how to recover your business from them; a simple restore check is sketched below. Also, make sure you have something and someplace to restore to. If you have alternate networks, test them. Storms like Sandy don't come out of nowhere, so you'll usually have plenty of warning before one hits. If you absolutely have to stay functional, you should have a backup generator, and always remember to monitor your fuel levels and test the generator periodically.
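
What does "test your backups" look like in practice? Here's a rough Python sketch of one approach: restore a known file into a scratch directory and confirm it matches the original. The paths and the backup-tool restore command are placeholders for whatever your backup product actually provides; the point is that a backup isn't tested until you've restored from it and checked the result.

```python
# Rough restore test, assuming your backup tool can restore a single known
# file to a scratch directory. "backup-tool" and both paths are placeholders;
# substitute whatever your backup product actually uses.
import hashlib
import subprocess
from pathlib import Path

ORIGINAL = Path("/data/policies/security-policy.pdf")     # a file that rarely changes
RESTORED = Path("/tmp/restore-test/security-policy.pdf")  # where the restore lands

def sha256(path: Path) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Ask the backup system to restore the file (placeholder command).
subprocess.run(
    ["backup-tool", "restore", str(ORIGINAL), "--to", str(RESTORED.parent)],
    check=True,
)

if sha256(ORIGINAL) == sha256(RESTORED):
    print("Restore test passed: checksums match.")
else:
    print("Restore test FAILED: restored file does not match the original.")
```

Pick a file that doesn't change often, schedule the check, and keep a record of when it last passed.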

Determine the likelihood that your office will be adversely affected. Never, ever put a window in your server room, and make sure the server room isn't in the basement or on the first floor.

Do you have any key cloud services? Even if a storm isn't nearby, you could still be affected. Where are these services located? What does their disaster recovery plan look like? How will they recover if they are affected? To make sure they are covered, ask for a SAS 70 report or its replacement, an SSAE 16 report. These reports attest that your cloud vendor's data center controls are sound and that a disaster recovery plan is in place. Don't be afraid to ask them for a copy of it; this is not an unusual request. Banks and credit unions require it of the larger key vendors that are critical to the success of a disaster recovery.

If your business depends on the Internet to access data and send and receive messages, be sure to let your customers and vendors know that there may be communications issues. A large portion of US network traffic passes through major metro areas. While these network centers are well protected against failure, that doesn't mean the networks they feed are equally well connected, and you could see major network outages during and after the storm. Fortunately, the Internet was built to route around network damage, even extensive damage, but chances are good that if this happens, your traffic will take slower, more roundabout paths.

Whether you're in the affected area or simply watching from afar, Hurricane Sandy can be a great lesson. First, it shows the need for geographic diversity when you're looking for cloud or co-location services; New York and Philadelphia, or even Boston, are just too close to each other to be safe from the same natural disaster. Second, Sandy demonstrates the necessity of backup power and off-site data backup, as well as the necessity of testing both regularly. Finally, this event proved that a catastrophe can happen anywhere, even where you are. You can't afford to ignore the possibility.