I have a calendar alert that goes off at 9:30 AM to “Reach out to Layer 8”, a little project I devised for myself. When the reminder fires, I open a file called “Friends.txt” that contains several people’s names, departments and phone numbers. I select a name from the list and give them a call. It is usually a quick chat; I try to keep the conversation under 15 minutes, and we generally discuss overlap in our roles, projects we have in common, or new projects that the other might not have visibility into. I end the call by saying something to the effect of “If you see anything weird, let me know.” This is how I know the status of my Layer 8 sensor array.
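The routine above is simple enough to sketch in a few lines. The snippet below is a minimal illustration, not a prescription: it assumes a hypothetical “Friends.txt” format of one comma-separated contact per line (name, department, phone) and picks one at random so the calls rotate through the list.

```python
import random

# Assumed (hypothetical) Friends.txt layout, one contact per line:
#   Dana Smith, Network Operations, x4521
#   Lee Park, Database Team, x3310

def pick_contact(path="Friends.txt"):
    """Parse the contact list and return one contact chosen at random."""
    contacts = []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            name, department, phone = [field.strip() for field in line.split(",")]
            contacts.append({"name": name, "department": department, "phone": phone})
    return random.choice(contacts)
```

Random selection is one reasonable choice here; rotating through the list in order, or weighting toward departments you have not spoken to recently, would work just as well.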
I am willing to bet that you have these types of contacts in most of the departments in your organization, and that you have worked security events with them in the past. I would also say that 15 minutes of chat time is nothing compared to getting hours, if not days, of lead time on a security event.
Lead time is crucial to success, and an actual human who happens to be fully versed in the normal functioning of whatever program, system, or process they are assigned to is exactly the person you want giving you a heads-up. We all have a series of consoles and reports that run at regular intervals. These give us information about what has already happened, things from the past. They also give information or alerts on events that are known and pre-defined. But what about the unknown, the events that are not pre-defined? How would you go about finding out about those?
You start by looking at “normal”.
The Network Operations team knows what the standard load is on our network segments. They know the standard ports and protocols used by the majority of the applications. They know where core servers “live.” The SQL DBAs know where the SQL servers are as well. They also know what the transaction logs should look like, what instances are housed on each server, and which Web servers they connect to. The Web folks know which systems are running which versions of IIS, Tomcat, and Apache. You can see where this is going… They are perfectly versed in what normal looks like, and they are perfectly placed to notice when something falls outside of normal.
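“Falling outside of normal” is the same judgment an automated baseline check makes, just with far less context. As a rough analogy only, here is a minimal sketch that flags a reading more than a few standard deviations from a historical baseline; the metric (say, hourly load on a network segment) and the threshold of three standard deviations are assumptions for illustration.

```python
from statistics import mean, stdev

def outside_normal(history, current, k=3.0):
    """Return True if `current` falls more than k standard deviations
    from the baseline implied by `history` -- a crude, automated
    stand-in for a human sensor's sense of what normal looks like."""
    baseline = mean(history)
    spread = stdev(history)
    return abs(current - baseline) > k * spread
```

The point of the Layer 8 sensor is precisely what this check cannot capture: a person notices deviations in dimensions nobody thought to baseline.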
“But shouldn’t these teams be reporting this information to you as part of your Incident Response Plan?” you ask. The answer is “Yes.” Yes they should, but how much time would be lost before that happened? How much time would be spent troubleshooting the issue before an alarm is raised? Due diligence by any team could include cycling services or re-applying patches or service packs; replacing hardware is not unheard of. All of this takes time. A Layer 8 sensor call might take only a few minutes, and your access to security and threat intelligence is a resource the other team may not have. You can, and most likely already do, function as a Layer 8 sensor for them as well, letting them know about vulnerabilities or exploits; the type of information that you are versed in.
So, what is the status of your Layer 8 sensor array? I highly recommend putting one in place.