Data Loss Prevention


Stress testing of the DLP environment while improving detection capabilities 

Nov 19, 2015 11:43 AM

Most of us run the DLP solution under different banners: some call it the Data Loss Prevention program, while others call it the Malicious Insider program or even the Data Breach Prevention program. Knowing the dynamics of this trade, most of the time we are in an arm-wrestling match with information security leadership, steering committees and auditors to do more in terms of detection and monitoring of data leaving the organization. Technically, DLP always gives us the option to create more policies and even to add detection capabilities using EDMs, IDMs, VMLs and DGMs; however, there are no benchmarks or best practices defining how far one can go with this approach. More powerful hardware is a solution in some cases, but, as we know, not always.

First, let us start with a collective agreement that:

  • there are multiple factors that determine how much stress a DLP environment can take (below are a few prominent ones that I can recall; a simple baseline-snapshot sketch follows this list):
    • SMTP traffic/second
    • web proxy traffic/second
    • total number of EDMs and their sizes
    • total number of IDMs and their sizes
    • total number of VMLs and their sizes
    • total number of DGMs and their sizes
    • number of policies with a 2-tier detection on endpoint servers
    • EDMs, DGMs, VMLs and IDMs applied on endpoint servers
    • avg. data transferred to shared drives, printers, external storage, optical drives, etc.
    • internal network speed (with respect to Discover targets)
    • total number of web prevents
    • total number of SMTP prevents
    • scanning depth and configuration of the DAR (data-at-rest) program
    • total number of policies
    • number of CPU cores on SMTP Prevent servers
    • number of CPU cores on other servers
    • RAM, disk I/O (if virtual), etc. on all DLP servers
    • 1-, 2- or 3-tier architecture
  • no two architectures are the same in terms of the above parameters
  • there are peak and off-peak hours in each environment
  • target data being scanned is always different and may or may not be consistent in terms of:
    • large data vs. small data sizes
    • encrypted emails
    • zipped content
    • very large attachments
    • hard to read/detect content
    • corrupt content
  • there are periods of maintenance that various systems would undergo:
    • Oracle maintenance
    • server OS patching
    • system downtimes
    • midnight cleanup tasks
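
To make these factors comparable before and after a change, one purely illustrative option (the field names and example values below are my own assumptions, not product terminology) is to record them as a per-environment baseline snapshot, for instance with a small Python structure:

    # Illustrative baseline snapshot of the capacity factors listed above.
    # Field names and example values are hypothetical, not Symantec DLP terms.
    from dataclasses import dataclass, asdict
    import json

    @dataclass
    class DlpBaseline:
        smtp_msgs_per_sec: float
        web_proxy_reqs_per_sec: float
        edm_count: int
        edm_total_rows: int
        idm_count: int
        policy_count: int
        two_tier_endpoint_policies: int
        smtp_prevent_cpu_cores: int
        architecture_tiers: int        # 1, 2 or 3

    baseline = DlpBaseline(
        smtp_msgs_per_sec=85.0, web_proxy_reqs_per_sec=140.0,
        edm_count=4, edm_total_rows=2_500_000, idm_count=2,
        policy_count=63, two_tier_endpoint_policies=12,
        smtp_prevent_cpu_cores=8, architecture_tiers=3,
    )

    # Persist the snapshot so it can be diffed against a post-change capture.
    print(json.dumps(asdict(baseline), indent=2))

The same snapshot, captured again after a proposed change, gives you a like-for-like record of what actually changed in the environment.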

I am trying to cover all the aspects we need to consider in order to conclude whether a certain change (an additional capability, more scanning, a new policy, more exceptions, etc.) can be performed and whether the environment will be able to sustain it. There is a lot to cover and I may have missed a few aspects, so I eagerly await your suggestions in the comments to this article. My goal here, however, is to highlight a few significant aspects and to reiterate that there is no single standard that applies to all environments.

Having said that, how do we perform such an assessment and report back to stakeholders on whether or not we can bring a proposed enhancement, in terms of 'added detection', into the DLP architecture?

The answer to the above question is not straightforward; everything is relative to your own environment, and that relativity plays a great role in accomplishing the task.

The approach also differs depending on whether you have a Test -> Dev -> Prod environment for the policy lifecycle. If you do, the logical route is Test, then Dev, then Prod; if there is only one Enforce server and no test environment whatsoever, we can pilot the feasibility check in a phased manner. In both situations, we can define best, average and worst case categories - for example, the Endpoint server with the most agents, or the gateway with the highest SMTP traffic, along with a mid and a low counterpart. We then implement the change on the low category only, validate it via the mechanism listed in the last few paragraphs of this article, and then move to average and finally to high (with due diligence, change windows, etc.), as in the sketch below.
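
As a minimal illustration of this phased approach (server names, traffic figures and tier thresholds below are hypothetical assumptions you would tune to your own environment), a short script can bucket detection servers into low/average/high categories and print the resulting rollout order:

    # Hypothetical sketch: bucket detection servers by load and derive a
    # phased rollout order (lowest-loaded servers get the change first).
    servers = {
        "smtp-prevent-01": 120.0,   # avg. SMTP messages/second (made up)
        "smtp-prevent-02": 45.0,
        "smtp-prevent-03": 8.0,
    }

    LOW_MAX, AVG_MAX = 20.0, 80.0   # assumed tier boundaries (messages/second)

    def tier(load_per_sec: float) -> str:
        """Classify a server into a rollout tier based on its average load."""
        if load_per_sec <= LOW_MAX:
            return "low"
        if load_per_sec <= AVG_MAX:
            return "average"
        return "high"

    # Implement the change on the "low" tier first, validate, then move up.
    for name in sorted(servers, key=servers.get):
        print(f"{name}: tier={tier(servers[name])}, load={servers[name]}/s")

The categorization metric could just as well be endpoint agent count or Discover target size; the point is only that the change lands on the least critical portion of the environment first.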

Test metrics to validate the results (a before/after log-comparison sketch follows this list):

  • Before Test begins
    • For Data in Use (Endpoint) - Use the logdump tool to capture edpa_ext1.log and find out detection timings/time-outs/etc.
    • For Data at Rest (Discover) - Capture the scan statistics and note the time required to run a specific sample data
    • Data in Motion
      • SMTP - Capture the requestprocessor.log and note the existing timing values, e.g. rtime=0.22s dtime=0s mtime=0.22s
      • Web Prevent - Capture the filereader.log and note the existing detection time needed for respmod, reqmod, etc.
         
  • After Test is complete
    • For Data in Use (Endpoint) - Use the logdump tool to capture edpa_ext1.log and compare the detection timings/time-outs/etc. with the previous capture before change
    • For Data at Rest (Discover) - Capture the scan statistics again and compare the time with the previous capture (before executing policy change)
    • Data in Motion
      • SMTP - Capture the requestprocessor.log again and compare the new rtime/dtime/mtime values with the baseline capture
      • Web Prevent - Capture the filereader.log and compare the new detection time needed for respmod, reqmod, etc.
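
To make the before/after comparison less of an eyeball exercise, the two requestprocessor.log captures can be averaged and diffed with a small script. This is a sketch under the assumption that the log lines carry the rtime/dtime/mtime tokens in the form quoted above (the exact layout differs between product versions), and the file paths are placeholders:

    # Hedged sketch: average the rtime/dtime/mtime fields found in two
    # requestprocessor.log captures and report the delta per field.
    # The regex assumes tokens like "rtime=0.22s dtime=0s mtime=0.22s".
    import re
    from statistics import mean

    TIMING = re.compile(
        r"rtime=(?P<rtime>[\d.]+)s\s+dtime=(?P<dtime>[\d.]+)s\s+mtime=(?P<mtime>[\d.]+)s"
    )

    def average_timings(path: str) -> dict:
        """Return the average rtime/dtime/mtime (in seconds) from one capture."""
        samples = {"rtime": [], "dtime": [], "mtime": []}
        with open(path, encoding="utf-8", errors="replace") as log:
            for line in log:
                match = TIMING.search(line)
                if match:
                    for key, value in match.groupdict().items():
                        samples[key].append(float(value))
        return {key: mean(values) for key, values in samples.items() if values}

    before = average_timings("requestprocessor_before.log")  # capture before change
    after = average_timings("requestprocessor_after.log")    # capture after change

    for key in sorted(set(before) & set(after)):
        delta = after[key] - before[key]
        print(f"{key}: before={before[key]:.3f}s after={after[key]:.3f}s delta={delta:+.3f}s")

The same pattern applies to the filereader.log respmod/reqmod timings or to the endpoint detection timings pulled out of edpa_ext1.log via logdump, with the regex adjusted to whatever fields those logs expose.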

If the comparison shows the change is feasible, proceed with the mass change; otherwise, submit the above statistics to demonstrate the architecture's inability to sustain the new enhancement in DLP detection.

Let me know what you think after reading. The topic in itself is so vast that I am eager to hear from readers, especially those who have been through such an exercise before.
