Certain DLP policies are likely to generate a large number of false positives. The approach described below can help when implementing such policies. It is built around three phases: Monitor, Validate, and Prevent.
An initial policy may be created purely for monitoring purposes (e.g. “&lt;Policy name&gt;-Monitor”). The incidents generated by this policy can be analyzed to identify ‘false positive’ and ‘false negative’ keyword sets.
The identified keyword sets may then be added to the exception list of the “-Monitor” policy.
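The monitor-phase triage can be scripted. Below is a minimal sketch, assuming incidents have been exported with a matched-keyword field and a reviewer-assigned verdict; the field names, threshold, and data are illustrative, not any DLP product's actual API:

```python
from collections import Counter

def classify_keywords(incidents, fp_threshold=0.8):
    """Split keywords into a false-positive set and a detection set.

    `incidents` is a list of dicts with a matched 'keyword' and a
    reviewer 'verdict' ("FP" = false positive, "TP" = true positive).
    A keyword whose incidents are mostly FPs becomes a candidate for
    the exception list; the rest stay as detection keywords.
    """
    totals, fps = Counter(), Counter()
    for inc in incidents:
        totals[inc["keyword"]] += 1
        if inc["verdict"] == "FP":
            fps[inc["keyword"]] += 1
    fp_set = {kw for kw in totals if fps[kw] / totals[kw] >= fp_threshold}
    return fp_set, set(totals) - fp_set

# Illustrative incident export: 'draft' only ever fires on benign
# documents, while 'confidential' fires on a real incident.
incidents = [
    {"keyword": "confidential", "verdict": "TP"},
    {"keyword": "draft", "verdict": "FP"},
    {"keyword": "draft", "verdict": "FP"},
]
fp_set, detect_set = classify_keywords(incidents)
```

Here `fp_set` contains `draft` (an exception-list candidate) and `detect_set` contains `confidential` (kept as a detection keyword).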
The keyword sets identified in the Monitor phase may then be used to create a Validation policy (e.g. “&lt;Policy name&gt;-Validate”) with two rules.
Example:
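The original example is not reproduced here, but as a purely hypothetical illustration, the two rules could be structured along these lines (policy name, rule names, and keywords are all invented):

```python
# Hypothetical structure of a "<Policy name>-Validate" policy.
# Rule 1 carries the suspected false-positive keyword set; rule 2
# carries the suspected false-negative (missed-detection) set.
# The validation policy runs in detect-only mode.
validate_policy = {
    "name": "PCI-Data-Validate",
    "mode": "monitor",
    "rules": [
        {"name": "Rule-FP-Keywords", "keywords": ["draft", "template"]},
        {"name": "Rule-FN-Keywords", "keywords": ["card number", "cvv"]},
    ],
}
```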
The incidents generated by these rules can then be analyzed to validate the keyword sets.
Note: The Validation policy, along with its associated incidents, may be deleted at a later stage.
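The validation pass itself can also be scripted. A sketch, assuming each exported incident records which rule fired; the incident format and the threshold are assumptions for illustration:

```python
from collections import Counter

def validate_keyword_sets(incidents, min_hits=5):
    """Tally validation incidents per rule.

    A keyword set is treated as validated once its rule has fired on
    enough reviewed incidents (min_hits is an arbitrary illustrative
    threshold, not a product default).
    """
    hits = Counter(inc["rule"] for inc in incidents)
    return {rule: count >= min_hits for rule, count in hits.items()}

# Illustrative export: the FP rule fired six times, the FN rule twice.
incidents = (
    [{"rule": "Rule-FP-Keywords"}] * 6 + [{"rule": "Rule-FN-Keywords"}] * 2
)
status = validate_keyword_sets(incidents)
```

With this data, `Rule-FP-Keywords` is validated and `Rule-FN-Keywords` needs more evidence before its keywords are promoted.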
Once validated, the ‘false negative’ keyword sets may be used as detection rules when creating the final policy (in Prevent mode).
The ‘false positive’ keyword sets may be used as exceptions, if required.
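Putting the final step together, a hedged sketch of assembling the prevent-mode policy from the validated keyword sets (the structure and all names are illustrative, not a vendor schema):

```python
def build_final_policy(name, fn_keywords, fp_keywords):
    """Assemble the prevent-mode policy: validated 'false negative'
    keyword sets become detection rules, and validated 'false
    positive' sets become exceptions."""
    return {
        "name": name,
        "mode": "prevent",
        "rules": [
            {"name": f"Detect-{kw}", "keyword": kw} for kw in fn_keywords
        ],
        "exceptions": sorted(fp_keywords),
    }

final = build_final_policy("PCI-Data", ["card number", "cvv"], ["draft"])
```

The same keyword data thus drives all three phases, so nothing has to be re-triaged when the policy is switched to Prevent.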
What are you seeing in the incidents? Too much information, such as false positives? Or not enough information, for example a detection without enough context to qualify it?
Hello,
Can I get some references on how to fine-tune already implemented policies?
This is just perfect. Awesome. Wise.
Well-established approach.
Thanks for the brief explanation. It would be great if you could share more of your experience with DLP.