My 2025 Detection Philosophy and the Pursuit of Immutable Artifacts

If a log falls in the SIEM, does it generate an alert?
2025 has been an interesting year for me so far. Not too long ago, I joined a startup called CardinalOps, which gave me broad visibility into how organizations across the globe approach detection — some are highly mature, others less so. To keep up with the pace of startup life, I had to evolve my mindset, shedding older paradigms in favor of a more agile, adaptive detection philosophy.
Let’s start with a classic thought experiment: If a tree falls in a forest and no one is around to hear it, does it make a sound? In cybersecurity, we ask a similar question: If a log falls in the SIEM, does it generate an alert? And if it does — who’s listening? Who cares? Does it even matter?
After years of drowning in telemetry, chasing ghost alerts, and fine-tuning SIEM rules, I’ve come to believe detection isn’t about more data — it’s about the right data. (And no, I’m not trying to reinvent the wheel here.) My detection mindset in 2025 has shifted from quantity to quality, from reactive alerts to intentional signal design. This post is a reflection on that journey — what I’ve learned, what I’ve left behind, and how I now build detection with less noise, more context, and a sharper focus on what really matters.
What makes a good detection?
While this is a topic that is spoken about quite often (such as here, or here), I’d like to give my two cents on it.
So — what does make a detection rule “good”?
To me, a strong detection strikes a balance between precision and practicality. It’s not so broad that it floods your SOC with noise, and not so atomic that it misses broader malicious behavior. It’s forward-looking, resilient, and designed with real-world performance in mind, particularly in your SIEM of choice (I believe SIEM performance never gets enough attention in detection discussions).
Here’s what I believe separates a solid detection from the rest:
- Balanced Scope: Neither overly specific nor too general. It captures intent, not just artifacts.
- Resilience by Design: Built to withstand tool variations, obfuscation, and minor changes in attacker tradecraft.
- SIEM-Aware Engineering: Designed with cost, cardinality, and performance in mind — because a well-crafted detection is worthless if it overloads your pipeline.
- Technique-Focused: Anchored in the behavioral essence of a technique (more commonly known as “Immutable Artifacts”), not just chasing volatile IOCs or superficial strings.
A strong detection isn’t just a rule — it’s a reflection of intentional design. After years of tuning, breaking, and rebuilding detection logic, here’s the framework I’ve come to trust in 2025.
Let’s take a common scenario: detecting service creation on Windows.
Traditionally, you’d look for Event ID 4697 in the Security log, or Event ID 7045 in the System log — a solid starting point. But is that the only way to detect this behavior?
Short answer: not even close.
Enter the SpecterOps perspective. Their detection framework emphasizes anchoring detections on immutable behaviors — those that must occur regardless of the tool or method used. In this context, a tool is simply a representation of a technique, but the technique itself remains constant.
Take service creation: no matter the tool (PowerShell, sc.exe, WinRM, or a custom binary), every one of them ends up creating a key under HKLM\SYSTEM\CurrentControlSet\Services. That’s your Immutable Artifact. That’s what lasts.
Sure, this introduces challenges: enabling SACL auditing, managing increased storage, and handling more verbose logs. But resilient detection often comes with a price — and it’s worth paying if you want to detect the behavior, not the wrapper.
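To make that concrete, here is a minimal sketch of my own (not from any referenced framework) of anchoring on the registry artifact rather than on a tool. It assumes Sysmon registry telemetry (Event IDs 12 and 13) parsed into dicts; the field names mirror Sysmon’s schema but are simplified for illustration:

```python
# Sketch: flag service creation via its registry artifact rather than a tool name.
# Assumes Sysmon registry events (Event IDs 12/13) parsed into dicts; the field
# names mirror Sysmon's schema but are simplified for illustration.

SERVICES_KEY = r"HKLM\SYSTEM\CurrentControlSet\Services"

def is_service_creation(event: dict) -> bool:
    """True if the event touches the Services hive, the artifact every tool
    (sc.exe, PowerShell, WinRM, custom binaries) must produce to register a service."""
    return (
        event.get("EventID") in (12, 13)
        and event.get("TargetObject", "").upper().startswith(SERVICES_KEY.upper())
    )

# Two sightings of the same technique: only the registry event is the durable anchor.
events = [
    {"EventID": 12, "Image": r"C:\Windows\System32\sc.exe",
     "TargetObject": r"HKLM\SYSTEM\CurrentControlSet\Services\EvilSvc"},
    {"EventID": 1, "Image": r"C:\Windows\System32\sc.exe",
     "CommandLine": "sc create EvilSvc binPath= C:\\evil.exe"},
]
hits = [e for e in events if is_service_creation(e)]
```

Notice that the process-creation event (ID 1) contributes nothing here; the rule fires on the write to the Services hive, which survives tool renames and wrappers.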
What are Immutable Artifacts?
In detection engineering, Immutable Artifacts are the golden signals — the unchangeable traces an attacker leaves behind, no matter what tool, technique, or obfuscation they use.
Think of them as the fingerprints of behavior, not the clothes the attacker wears.
Here’s the core idea:
No matter how an attacker performs a technique, there are certain artifacts they must generate to succeed. These are consistent, unavoidable (for now), and ideal for durable detections.
For example, if an attacker wants to enable RDP on Windows, then to the best of my knowledge and based on examples from the wild, they MUST set the registry value HKLM\System\CurrentControlSet\Control\Terminal Server\fDenyTSConnections to 0, no matter what tool or technique they are using.
By contrast, indicators like process names, command-line flags, or file hashes are mutable. They can be renamed, encoded, swapped, or evaded. But the underlying action — the behavioral truth — can’t be hidden forever.
When building detections, target what the attacker can’t change.
We should aspire to write detections that find Immutable Artifacts: the ones an attacker will leave behind 100% of the time, and that, ideally, are not subject to obfuscation, source-code modifications, or anything else the attacker controls.
For example, Metasploit implements code to enable RDP, and RDP can likewise be enabled via WMIC. Regardless of which path the attacker takes, the same change is generated: the fDenyTSConnections value is set to 0.
That’s your Immutable Artifact. Whether the attacker uses reg.exe, PowerShell, or sneaky C2 implants, this registry value has to be flipped.
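As an illustrative sketch (my own, not the original snippets), a detection anchored on that value might look like this, assuming Sysmon Event ID 13 (RegistryValueSet) telemetry; the field names and the Details encoding are simplified assumptions:

```python
# Sketch: detect RDP enablement by the immutable registry flip, not by the tool
# performing it. Sysmon Event ID 13 schema simplified; Details encoding assumed.

RDP_VALUE = r"HKLM\System\CurrentControlSet\Control\Terminal Server\fDenyTSConnections"

def enables_rdp(event: dict) -> bool:
    """True when fDenyTSConnections is set to 0, whether reg.exe, PowerShell,
    WMI, or a C2 implant performed the write."""
    return (
        event.get("EventID") == 13
        and event.get("TargetObject", "").lower() == RDP_VALUE.lower()
        and event.get("Details") in ("DWORD (0x00000000)", "0")  # illustrative encodings
    )

# Different tools, same artifact; setting the value to 1 (disable) is ignored.
samples = [
    {"EventID": 13, "Image": "reg.exe", "TargetObject": RDP_VALUE, "Details": "DWORD (0x00000000)"},
    {"EventID": 13, "Image": "wmic.exe", "TargetObject": RDP_VALUE, "Details": "DWORD (0x00000000)"},
    {"EventID": 13, "Image": "reg.exe", "TargetObject": RDP_VALUE, "Details": "DWORD (0x00000001)"},
]
flagged = [e["Image"] for e in samples if enables_rdp(e)]
```

The rule never mentions an image name, which is exactly the point: two different tools produce the same hit, and the benign-direction write does not.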
How to write better detections?
“Defenders think in lists. Attackers think in graphs. As long as this is true, attackers win” (John Lambert).
This quote stuck with me. It reframed how I approach detection entirely. My view is heavily inspired by the posts from SpecterOps, primarily THIS one.
We tend to write detections based on isolated signals — specific process names, file paths, known command-line flags. But attackers aren’t working off a checklist. They’re navigating a graph of possibilities, finding new paths toward the same objective.
To outpace them, we need to shift focus: from detecting how they do something, to what must happen no matter the method. This is where Immutable Artifacts come into play — again and again.
In Jared’s post, he mentions Alfred Korzybski’s 1931 paper, “A Non-Aristotelian System and Its Necessity for Rigour in Mathematics and Physics”, which posits that “the map is not the territory”. If our map does not share a similar structure with reality, then any conclusions we draw from the map will most likely be incorrect. Now what?
Plato, in his “Sophist”, explores the concept of “form” through the analysis of “particulars”. In our case, the “form” represents the tradecraft (technique or behavior).
The “particular” is the tool or sample that implements the technique. Plato proposed that we can understand the “form” by understanding what the “particulars” have in common — “sameness”.
An attacker is doing 2 + 2 to get 4 and that’s a problem so you block it. Then they do 5–1. (@clift_m)
Additionally, I believe security professionals need to understand that while some of us are born with a talent and mindset suited to this area, progressive and persistent learning is still king. Don’t get discouraged because you can’t immediately jump to the methods and processes that the big-league players are using; it takes years to gain the knowledge and wisdom to do so properly.
- At first, I relied heavily on process names.
- Then, I progressed to detecting based on command-line patterns.
- Eventually, I realized that even those were mutable. So I dug deeper — studying the tools themselves, testing them, tracing their impact. That’s where the real magic is: the system-level changes an attacker can’t avoid.
- Each step forward required unlearning, relearning, and a lot of trial and error. For example, did you know that some commands are not logged in Event ID 4688, and that using the pipe (“|”) character in a single CMD command will split the activity into two separate entries? Go further down the rabbit hole.
While developing detections, keep the following in mind:
DETECTION DOs:
- Use tools that trace the actual changes (registry, file operations, network) a program makes, such as Procmon; this will help you understand its Immutable Artifacts.
- In the context of process creation logs, focus on all known possible command parameter variations (PowerShell’s “EncodedCommand” can be called via at least 24 different flag variations!)
- Use adversary simulation platforms and tools (Atomic Red Team is a great place to start at).
- Apply MITRE’s Summiting the Pyramid methodology for scoring detection robustness.
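The EncodedCommand point above deserves a concrete sketch (my own illustration; the exact set of accepted abbreviations should be verified against your PowerShell versions). powershell.exe accepts abbreviated forms of -EncodedCommand and either “-” or “/” as the switch character, so matching the literal flag misses most variants; a prefix-style match covers them:

```python
import re

# Sketch: match accepted abbreviations of PowerShell's -EncodedCommand flag.
# powershell.exe accepts shortened prefixes and either '-' or '/' as the switch
# character; the exact accepted forms should be verified per version.
ENC_FLAG = re.compile(r"""(?ix)            # case-insensitive, verbose
    (?:^|\s)[-/]                           # switch character at a token boundary
    e(?:n(?:c(?:o(?:d(?:e(?:d(?:c(?:o(?:m(?:m(?:a(?:n(?:d)?)?)?)?)?)?)?)?)?)?)?)?)?
    (?=\s|$)                               # the flag ends at whitespace or end
""")

def has_encoded_command(cmdline: str) -> bool:
    return bool(ENC_FLAG.search(cmdline))

examples = [
    "powershell.exe -EncodedCommand SQBFAFgA",
    "powershell.exe -enc SQBFAFgA",
    "powershell.exe /E SQBFAFgA",
    "powershell.exe -ExecutionPolicy Bypass",  # must NOT match
]
matches = [has_encoded_command(c) for c in examples]
```

The lookahead matters: it stops “-ExecutionPolicy” from matching on its leading “-E”, because the token continues with characters that are not part of “encodedcommand”.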
DETECTION DON’Ts:
- Rely on preconceived notions of detection standards without verifying and testing yourself — this is a point that caused me to fail many times before.
- Blindly follow detection databases (SIGMA, Elastic, Splunk). They are a fantastic contribution to the community, but I treat them as lead/idea generators that need to be tuned to your own environment.
Detection Gone Wrong: The One Where PowerShell Lied
Early in my detection journey, I was proud of a rule I wrote to catch malicious PowerShell use. It looked for powershell.exe with a base64-encoded command. Classic, right?
It worked… until it didn’t.
One day, red team activity sailed right past my rule. Why? They’d used pwsh.exe (the new name for PowerShell Core starting with version 6.0), used a different encoding flag, and ran the payload through a dropper that launched PowerShell via System.Management.Automation.
My rule saw nothing.
Turns out, I’d anchored my detection to how the attack looked, not what it did.
When I retraced the attack using Procmon, I realized it still made the same system-level changes: downloading a payload, writing to disk, and establishing outbound C2 via HTTP. These were the Immutable Artifacts — and I’d missed them because I was too focused on the wrapping paper.
Lesson burned into my brain:
If you’re detecting tactics by name, you’re probably behind the attacker by three steps. Detect what must happen, not what might happen.
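Revisiting that rule today, here is a sketch of the behavior-anchored direction I would take (illustrative event schema, not a production rule): correlate the actions the dropper could not avoid, a write to disk followed by an outbound connection from the same process, keyed on process identity rather than image name.

```python
# Sketch: key on behavior (file write + outbound connection by the same process)
# instead of image names like powershell.exe. Event schema is illustrative.

def correlate(events):
    """Return process GUIDs that both wrote a file and made an outbound connection."""
    wrote = {e["proc_guid"] for e in events if e["type"] == "file_write"}
    connected = {
        e["proc_guid"]
        for e in events
        if e["type"] == "net_connect" and e.get("direction") == "outbound"
    }
    return wrote & connected

# Process A drops a payload and calls out; process B only writes a file.
events = [
    {"type": "file_write", "proc_guid": "A", "path": r"C:\Users\x\payload.dll"},
    {"type": "net_connect", "proc_guid": "A", "direction": "outbound", "dport": 80},
    {"type": "file_write", "proc_guid": "B", "path": r"C:\temp\notes.txt"},
]
suspects = correlate(events)
```

A renamed binary, a different encoding flag, or an in-process Automation host would all still land in the suspect set, because the correlation never looks at how PowerShell was invoked.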
Wrapping up
So… if a log falls in the SIEM, does it generate an alert?
Maybe. But that’s not the real question anymore.
The real question is: Does it matter?
Will that alert mean something? Will it be heard in a sea of noise — or buried in the rubble of false positives and misfired logic?
In 2025, my detection philosophy is less about catching every possible log and more about listening for the ones that matter. It’s about anchoring detection in behavior, not branding. About tuning for resilience, not volume. And about accepting that a great detection isn’t one that looks cool — it’s one that works when it matters most.
Detection engineering isn’t a checklist anymore. It’s a design discipline.
And in this forest of signals, I’d rather hear one meaningful alert than a thousand that lead nowhere.
Detection Philosophy Manifesto (TL;DR)
Detect intent, not syntax.
Anchor logic in Immutable Artifacts — not tool names or strings.
Noise kills. Tune hard.
Understand the system before you try to protect it.
Test everything — especially the “standard” rules.
Map to behavior, not branding. A tool is not a threat.
Every detection is a hypothesis. Validate it with telemetry.
Make detections that will still matter when the TTPs evolve.
Further Reading
- Since there are too many links, I have gathered them all on a Notion page that can be accessed HERE. Feel free to send me any further links you think fit.
- My Medium reading list.
- I recommend starting HERE.
If you enjoyed the article, feel free to connect with me!
https://www.linkedin.com/in/daniel-koifman-61072218b/
https://x.com/KoifSec
https://koifsec.medium.com/