This blog series was written jointly with Amine Besson, Principal Cyber Engineer, Behemoth CyberDefence, and one more anonymous collaborator.

Testing the pens...

In this blog (#8 in the series), we will take a fairly shallow look at testing in detection engineering (a deep look would probably require a book).

Detection Engineering is Painful — and It Shouldn’t Be (Part 1)
Detection Engineering and SOC Scalability Challenges (Part 2)
Build for Detection Engineering, and Alerting Will Improve (Part 3)
Focus Threat Intel Capabilities at Detection Engineering (Part 4)
Frameworks for DE-Friendly CTI (Part 5)
Cooking Intelligent Detections from Threat Intelligence (Part 6)
Blueprint for Threat Intel to Detection Flow (Part 7)

If we do detection as code, and we engineer detections, where is the QA? Where is the testing? Depending on who you ask, you may get a very different opinion of what detection testing is. Three key buckets of approaches stand out.

Unit Testing: Boring, Eh?

In this approach, we apply a simple, regular stimulus (a command line execution, a configuration change, some network activity, etc.) to check that detections still trigger. This is typically oriented toward ensuring that the log ingestion pipeline, alerts, SIEM data model, etc. are all integrated as expected (a secret: often they are not!). This basic approach reduces the chance of a bad surprise down the line, but it does not ensure that all your detections are effective or “good.” This “micro testing” simply confirms that the detection rule follows the developer’s intent (e.g., it should trigger on Windows event 7946 with these parameters, and it does do that in our environment today).

Today, you say? What about tomorrow, or the day after? Indeed, this type of testing benefits the most from repetition and automation. If you run a SIEM or even an EDR, you know your “friends” in IT will often change things and (oh horror!) not tell you. So, if you commit to the idea of unit testing detections, you commit to that testing being continuous.

Another dimension here: reality or simulation? Do you trigger the activity (very obviously impossible, or insanely risky, in some cases) or do you inject the logs into your SIEM? The answer is of course “yes.” Don’t assign an Okta admin role to a random user just to see if your SIEM catches it... duh. The real choices are:

• Trigger the activity in production (if safe)
• Trigger the activity in a test environment that ships logs to your SIEM for ingestion
• Inject logs collected after such activity into a SIEM intake (naturally, this won’t test the log ingestion path from your production systems into the SIEM, only what happens after)

Naturally, you need a workable way to separate test detections and test detection data from the real ones, especially for longer-term analytics and reporting (this can easily be filed under “easier said than done” in some organizations, but at least don’t open tickets based on test alerts, will ya?). For example, this tool on GitHub provides the capability to test a YARA-L rule (the language used by Chronicle SIEM) by running it over logs that have already been ingested and returning any matches (successful detections) for the detection engineer to analyze (see this presentation and this blog for additional details).
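To make this “micro testing” idea concrete, here is a minimal sketch of a detection unit test in Python. Everything in it is an assumption for illustration: the event shape, field names, and rule logic (built around the article’s Windows event 7946 example) are hypothetical, and a real setup would inject the sample into the SIEM intake and then query for the resulting alert rather than evaluating a predicate locally.

```python
# Minimal sketch of a detection unit test: replay a known-bad log sample and
# confirm the rule still fires. Event ID, field names, and rule logic are
# illustrative assumptions, not any particular SIEM's API or schema.
import unittest


def suspicious_service_install(event: dict) -> bool:
    """Toy detection logic standing in for a real SIEM rule."""
    return (
        event.get("event_id") == 7946
        and "\\temp\\" in event.get("image_path", "").lower()
    )


class TestSuspiciousServiceInstall(unittest.TestCase):
    def test_known_bad_sample_triggers(self):
        # Sample captured after (safely) triggering the activity in a lab.
        sample = {"event_id": 7946, "image_path": "C:\\Temp\\payload.exe"}
        self.assertTrue(suspicious_service_install(sample))

    def test_benign_sample_does_not_trigger(self):
        sample = {"event_id": 7946, "image_path": "C:\\Program Files\\svc.exe"}
        self.assertFalse(suspicious_service_install(sample))


if __name__ == "__main__":
    unittest.main()
```

The value is less in the assertions themselves and more in running them repeatedly: the same known-good and known-bad samples, replayed on a schedule, are what surface the pipeline and data model drift described above.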
Adversary Emulation

Emulation is where threat intel is parsed from the red teamer’s perspective in order to execute a similar attack in a test environment, which allows the DE to correctly understand detection opportunities. This process empowers detection engineers (DEs) to grasp detection possibilities at a deeper level, generating valuable datasets for the construction of highly tailored, environment-specific detections. It lets us build a data set which can then be further analyzed to build detections in the manner most adequate for the environment. This is a Threat Oriented strategy which lets us build detections with more certainty. The focus shifts from asking “will this isolated string in a packet trigger an alert?” to the more comprehensive question, “will this adversarial activity, or variations of it, be flagged within my unique network or system?”

For example, adversary emulation can involve mimicking the tactics, techniques, and procedures (TTPs) of a notorious ransomware group. This might include emulating initial access methods, lateral movement, data exfiltration techniques, and the final encryption stage. Similarly, an adversary emulation exercise might replicate the steps a state-sponsored threat actor employs to infiltrate a software vendor, taint a product update, and then pivot from the compromised update to target the vendor’s customers. What shows up in logs? What rules can be written? Breach and Attack Simulation (BAS) tooling provides some value here [or MSV, which works well yet refuses to admit that it is a BAS :)]

Red to Purple Teaming

Finally, there is purple teaming (OK, one definition of it, at least), where a red team periodically executes attack scenarios and reviews the outcomes in collaboration with the SOC (including whatever relevant D&R and DE teams exist): what went well (i.e., triggered detections) and what didn’t (lack of detection, too many alerts, or detections that didn’t trigger as expected). This Result Oriented approach is the most effective at checking whether the SOC functions efficiently as a whole, and it provides clear pointers for improvement to DE teams, provided the attack scenario is meaningful. It sounds like a bit of a letdown, but a detailed analysis of this goes far outside the scope of our little detection engineering series.

All three types of threat detection testing are important for ensuring that a detection system is effective, comprehensive, and “future-proof” to the extent possible. By regularly testing the detection system, organizations can identify and fix weaknesses, and ensure that they are protected from the latest threats.

Testing to Retesting

Testing is great, but things change. If what you log changes, your detections may stop working or start working in a set of new, unusual ways :-). We cover this in the unit testing section, but, with changes, it applies to all testing. This is IT, so everything decays, drifts, rots and gets unmoored from its bindings. Rules that were robust start to produce “false alarms” for “mysterious reasons”; things that triggered once a month start triggering three times a minute... Thus, a reminder: all testing needs to be continuous (well, periodic, really), and if you plan to test, you need to plan to retest.
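To show one way “continuous” can look in practice, here is a toy Python sketch of a re-testing loop. The test command, directory layout, state file, and interval are all assumptions; in most environments a CI scheduler or cron job would fill this role rather than a long-running script.

```python
# Sketch of continuous re-testing: periodically rerun the detection test
# suite and flag regressions (drift) since the last run. Paths, command,
# and interval are illustrative assumptions.
import json
import subprocess
import time
from pathlib import Path

STATE_FILE = Path("detection_test_state.json")   # last run's outcome
TEST_COMMAND = ["python", "-m", "unittest", "discover", "-s", "tests/detections"]
INTERVAL_SECONDS = 24 * 60 * 60                  # e.g., rerun daily


def run_suite() -> bool:
    """Return True if the whole detection test suite still passes."""
    result = subprocess.run(TEST_COMMAND, capture_output=True, text=True)
    return result.returncode == 0


def main() -> None:
    while True:
        passed = run_suite()
        previous = json.loads(STATE_FILE.read_text()) if STATE_FILE.exists() else {}
        if previous.get("passed", True) and not passed:
            # In a real pipeline this would notify the DE team, clearly tagged
            # as a test regression rather than a security alert.
            print("Detection tests regressed since the last run; investigate drift.")
        STATE_FILE.write_text(json.dumps({"passed": passed, "ts": time.time()}))
        time.sleep(INTERVAL_SECONDS)


if __name__ == "__main__":
    main()
```

The specific scheduler matters less than the habit: the same stimuli and log samples replayed on a cadence, with regressions routed to the DE team instead of the alert queue.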
Scope

This probably should have come first, but hey... I am getting to it, ain’t I? Which detections should we test? Some assume that testing is part of detection engineering, but no! Testing is part of detection, period. You do want to test vendor detections, customized/tuned detections, and of course those you wrote before your morning coffee. I suspect there may be exceptions to this (will some vendor offer guaranteed-to-work, curated detections? Probably, yes!).

The DE team should handle “Unit Testing” as part of their lifecycle, and at least plan for it in their roadmap. Next, the “Adversary Emulation” approach allows DEs to craft better detections; it requires strong intel and tight collaboration between the red team (or equivalent), Detection Engineering, and likely the Threat Intel team, if one exists. Should capacity be available, it is strongly recommended to structure processes to support this function, initiated, led, and guardrailed by the TI analyst who produced the knowledge item.

At any rate, any detection testing data is useful for TI purposes: it allows the team to clearly report on the detective posture of the organization in the face of threats, not only those in the wild but also brand-new ones. Detection posture measurement is key to improving maturity in a threat-intel-driven manner, as prioritization becomes more fluid and meaningful.

Now that we have a stronger understanding of plugging TI processes into DE, one last piece of the puzzle is missing: from a strategy standpoint, how do we start steering the SOC into a new threat-driven, detection-content-focused unit? How do we start building project management processes? That is coming next!

Previous blog posts of this series:

Detection Engineering is Painful — and It Shouldn’t Be (Part 1)
Detection Engineering and SOC Scalability Challenges (Part 2)
Build for Detection Engineering, and Alerting Will Improve (Part 3)
Focus Threat Intel Capabilities at Detection Engineering (Part 4)
Frameworks for DE-Friendly CTI (Part 5)
Cooking Intelligent Detections from Threat Intelligence (Part 6)
Blueprint for Threat Intel to Detection Flow (Part 7)

Testing in Detection Engineering (Part 8) was originally published in Anton on Security on Medium, where people are continuing the conversation by highlighting and responding to this story.

The post Testing in Detection Engineering (Part 8) appeared first on Security Boulevard.