Many technology providers are releasing new modules for anomaly-based detection. This is a great step forward in widening our detection net. However, these detection types produce a large number of false positives. One notable example is Microsoft's Identity Protection tooling and its "Unfamiliar Sign-In Properties" detection. Many organisations I have spoken to simply turn it off because they cannot handle the sheer volume.
In fact, over a six-month period starting 01/06/2023, we saw 15,300 security events from Azure Identity Protection alone, of which 14,700 were false positives. From those 15,300, automation allowed us to reduce our workload to just 385 events needing analyst attention. Of the 385, 179 were true positives; the remainder were classified as "Insufficient Information" or "Duplicate" for a variety of reasons.
Firstly, let's discuss anomaly-based (stochastic) detections and why they produce so many false positives in the first place. The dictionary definition of "stochastic" is "having a random probability distribution or pattern that may be analysed statistically but may not be predicted precisely." One common form of anomaly-based detection leverages machine learning, which can fairly be described as stochastic: the results usually fall within a probability distribution. Within that distribution, however, you quickly find yourself overwhelmed by the amount of manual work involved in sifting through the alerts and pinpointing the ones that are actually useful.
Tuning your machine learning model for precision would seem like the sensible thing to do, but you'll end up with false negatives, and in many cases, such as Azure Identity Protection and similar tools, you cannot tune the model yourself at all. Conversely, widening the scope to capture all the true positives means you'll end up with a large number of false positives as well. At our technology day talk, which I gave together with Leon Birk, Leon created an excellent presentation depicting this issue.
In the figure below you'll see that with the first, narrower detection scope we have a low number of false positives, but a high number of false negatives falling outside the scope. With the second, wider detection scope, the tuning captures all the true positives, but you wind up with far more false positives.
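The same trade-off can be shown with a few lines of code. This is a minimal sketch using synthetic anomaly scores (not real Identity Protection output); the distributions and thresholds are made up purely to illustrate why widening the detection scope drives false negatives down and false positives up.

```python
# Minimal sketch (synthetic data) of the detection-scope trade-off:
# benign and malicious sign-ins get overlapping anomaly scores, so any
# threshold trades false negatives against false positives.
import random

random.seed(42)

# Hypothetical anomaly scores: benign sign-ins cluster low, malicious ones
# higher, but the two distributions overlap.
benign = [random.gauss(0.30, 0.12) for _ in range(10_000)]
malicious = [random.gauss(0.65, 0.15) for _ in range(50)]

def evaluate(threshold):
    false_positives = sum(score >= threshold for score in benign)
    true_positives = sum(score >= threshold for score in malicious)
    false_negatives = len(malicious) - true_positives
    return false_positives, true_positives, false_negatives

for threshold in (0.8, 0.6, 0.4):  # narrow -> wide detection scope
    fp, tp, fn = evaluate(threshold)
    print(f"threshold={threshold:.1f}  FP={fp:5d}  TP={tp:2d}  FN={fn:2d}")
```

Lowering the threshold (widening the scope) pushes false negatives towards zero, but the false-positive count climbs into the hundreds or thousands, and that is exactly the workload the rest of this post is about absorbing with automation.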
The ultimate way to reduce the added workload from anomaly-based detections is automation. That's how we got the number from 15,300 down to 385. If you estimate the work spent on each of these events at approximately 15 minutes, then across the 14,915 events we fully automated we saved approximately 223,725 minutes, or roughly 3,728 hours. In addition, our response times across all other events decreased drastically, since we spend far less time working through useless events.
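For completeness, the back-of-the-envelope saving works out as follows (the 15-minutes-per-event figure is our own rough estimate of manual triage effort):

```python
# Rough time-savings calculation from the figures above.
total_events = 15_300
events_still_manual = 385
minutes_per_event = 15  # rough estimate of manual triage effort

automated_events = total_events - events_still_manual  # 14,915
minutes_saved = automated_events * minutes_per_event   # 223,725
hours_saved = minutes_saved // 60                      # 3,728 hours

print(f"{automated_events} events automated, ~{hours_saved:,} hours saved")
```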
Some of the automations or playbooks you could build perform very simplistic checks, for example (a sketch of what such a playbook might look like follows this list):
Check for MFA (Multi-Factor Authentication)
Check if the Device is Domain Joined
Check if the user typically signs in from this location
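Here is a minimal sketch of such a playbook. The helper functions (`device_is_domain_joined`, `is_familiar_location`) and the `SignInEvent` fields are hypothetical placeholders; in practice they would be backed by your identity provider's API, asset inventory, or the enrichment data in your SIEM.

```python
# Hypothetical triage playbook for an "Unfamiliar Sign-In Properties" event.
# The helpers below are placeholders for whatever enrichment your
# environment provides (identity provider API, CMDB, sign-in history).
from dataclasses import dataclass

@dataclass
class SignInEvent:
    user: str
    device_id: str
    country: str
    mfa_satisfied: bool

def device_is_domain_joined(device_id: str) -> bool:
    """Placeholder: look the device up in your asset inventory."""
    raise NotImplementedError

def is_familiar_location(user: str, country: str) -> bool:
    """Placeholder: compare against the user's recent sign-in history."""
    raise NotImplementedError

def triage(event: SignInEvent) -> str:
    """Return 'auto-close' for likely false positives, else escalate to an analyst."""
    if (
        event.mfa_satisfied
        and device_is_domain_joined(event.device_id)
        and is_familiar_location(event.user, event.country)
    ):
        return "auto-close"  # all low-risk signals present
    return "escalate"        # anything unusual still reaches the SOC
```

The important design choice is that the playbook only ever returns "auto-close" or "escalate": anything that fails a check still lands with an analyst, so the automation only removes events you would have closed anyway rather than silently dropping detections.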
Overall, the types of automations you could use in this case are quite simplistic, yet they would drastically reduce the number of false-positive events reaching your SOC analysts.