I wanted a personal environment where I could validate detection logic against repeatable attack simulations instead of relying only on vendor content or screenshots. The lab is structured to show how telemetry actually behaves when adversary techniques are executed against it.
The project combines Atomic Red Team tests, Sigma rules, and a self-hosted ELK pipeline. The emphasis is on understanding coverage gaps, false positives, and the data quality issues that appear once detections meet real event streams.
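To make the Sigma side of this concrete, here is a minimal sketch of the kind of rule the pipeline consumes. The title, placeholder id, and field values are illustrative examples of the format, not rules taken from the lab:

```yaml
title: Suspicious LSASS Access via Procdump (illustrative)
id: 00000000-0000-0000-0000-000000000000   # placeholder UUID
status: experimental
description: Hypothetical example of a credential-dumping detection in Sigma format.
logsource:
  category: process_creation
  product: windows
detection:
  selection:
    Image|endswith: '\procdump.exe'
    CommandLine|contains: 'lsass'
  condition: selection
falsepositives:
  - Legitimate administrative memory dumps
level: high
```

Rules like this are kept source-controlled, which is what makes the before-and-after tuning comparisons later in the workflow possible.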
Why Build It
A lot of detection content looks strong on paper but breaks down when telemetry is incomplete or endpoint behavior differs from the expected sample. I wanted an environment where I could prove whether a rule was useful, noisy, or dependent on assumptions that would not survive production.
- Test rules against repeatable ATT&CK techniques.
- Document coverage decisions instead of guessing.
- Create a portfolio artifact that shows practical detection work.
Current Build
The lab uses containerized components where possible so the environment is easy to reset and extend. Attack simulations feed logs into ELK, while Sigma-based rules are tuned to focus on behavior that an analyst could reasonably investigate.
- Credential dumping and suspicious command execution scenarios.
- Rule tuning to reduce duplicate or low-value alerts.
- Write-ups that explain why a detection is worth keeping.
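The deduplication part of the tuning step above can be sketched in a few lines of Python. This is a hypothetical, minimal pass over already-parsed alerts, not the lab's actual pipeline code: alerts sharing a rule, host, and normalized command line within a time window collapse into a single kept alert.

```python
from datetime import datetime, timedelta

def dedupe_alerts(alerts, window=timedelta(minutes=10)):
    """Collapse repeated alerts. Each alert is a dict with
    'rule', 'host', 'cmd', and 'ts' keys (hypothetical shape)."""
    last_seen = {}  # (rule, host, normalized cmd) -> timestamp of last kept alert
    kept = []
    for alert in sorted(alerts, key=lambda a: a["ts"]):
        key = (alert["rule"], alert["host"], alert["cmd"].strip().lower())
        prev = last_seen.get(key)
        if prev is None or alert["ts"] - prev >= window:
            kept.append(alert)
            last_seen[key] = alert["ts"]
    return kept

alerts = [
    {"rule": "lsass_dump", "host": "ws01", "cmd": "procdump -ma lsass",
     "ts": datetime(2024, 1, 1, 12, 0)},
    {"rule": "lsass_dump", "host": "ws01", "cmd": "Procdump -ma LSASS ",
     "ts": datetime(2024, 1, 1, 12, 3)},   # duplicate within window, dropped
    {"rule": "lsass_dump", "host": "ws01", "cmd": "procdump -ma lass".replace("lass", "lsass"),
     "ts": datetime(2024, 1, 1, 12, 30)},  # outside window, kept
]
print(len(dedupe_alerts(alerts)))  # → 2
```

Keeping only the first alert per window preserves the investigative signal while cutting the duplicate noise that makes a rule feel low-value.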
Next Steps
The next phase is broader scenario coverage and more structured case-study documentation. I also want to publish cleaner before-and-after tuning examples so the portfolio shows not just the final rule, but the reasoning behind it.
- Add more lateral movement and persistence techniques.
- Expand supporting dashboards and investigation views.
- Publish concise validation notes for each detection scenario.
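One lightweight way to keep those per-scenario validation notes consistent is to render them from a small structured record. The field names below are my own hypothetical schema, not an established standard:

```python
def render_validation_note(scenario):
    """Render a short, uniform validation note (markdown-style)
    from a per-scenario dict. Field names are a hypothetical schema."""
    lines = [
        f"## {scenario['technique']}: {scenario['name']}",
        f"Rule: {scenario['rule']}",
        f"Result: {scenario['result']}",
        f"False positives observed: {scenario['false_positives']}",
        f"Decision: {scenario['decision']}",
    ]
    return "\n".join(lines)

note = render_validation_note({
    "technique": "T1003",
    "name": "LSASS dump via procdump",
    "rule": "lsass_dump.yml",
    "result": "detected (2/2 runs)",
    "false_positives": "none in lab telemetry",
    "decision": "keep; promote to stable",
})
print(note)
```

Generating the note from data rather than free-form prose makes it easy to diff notes across tuning iterations, which is exactly what the before-and-after examples need.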