This blog post outlines an AI-assisted approach to improving service resilience against zero-day attacks. By deriving a baseline of expected application behavior from code and dependencies—and translating it into draft, deployable detections for human review—we can complement signature-based rules with real-time anomaly monitoring on service hosts.
Service-hosting machines are typically highly managed and more controlled than end-user devices. Deployments and updates are closely monitored, changes are tested, releases are staged, and change-management reviews help limit unexpected modifications. Even so, services can still be vulnerable to zero-day attacks triggered by software bugs, race conditions, weaknesses in dependencies, or misconfigurations. Detecting and responding to exploitation of unknown vulnerabilities is inherently difficult.
This approach helps detect suspicious activity that may indicate exploitation of unknown vulnerabilities by focusing on expected behavior. Rather than continually defining “bad” activity and cataloging new malicious patterns, we use service and machine telemetry to establish what normal operation looks like for service nodes and the applications running on them; when activity deviates from that baseline, we surface it for triage and investigation, even if we don’t yet know the root cause. Because anomaly signals can be noisy, these detections are best treated as leads that require human review and may include false positives.
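As a concrete illustration, such a baseline can be represented as simple allowlists of what the service is known to do, with a check that reports how an observed event deviates from them. The sketch below is a minimal, assumption-laden example: the profile fields, paths, and ports are hypothetical placeholders, and a real profile derived from code and dependencies would be far richer.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class BehaviorProfile:
    """Baseline of expected activity for one service (illustrative fields)."""
    allowed_processes: frozenset
    allowed_outbound_ports: frozenset
    allowed_write_paths: tuple  # path prefixes the service may write under

def deviations(event: dict, profile: BehaviorProfile) -> list:
    """Return reasons a telemetry event deviates from the baseline.

    An empty list means the event looks like normal operation; any
    reasons returned are leads for human triage, not verdicts.
    """
    reasons = []
    if event.get("process") not in profile.allowed_processes:
        reasons.append(f"unexpected process: {event.get('process')}")
    port = event.get("dst_port")
    if port is not None and port not in profile.allowed_outbound_ports:
        reasons.append(f"unexpected outbound port: {port}")
    path = event.get("write_path")
    if path is not None and not any(path.startswith(p) for p in profile.allowed_write_paths):
        reasons.append(f"unexpected write location: {path}")
    return reasons

# Hypothetical baseline for a payments service.
profile = BehaviorProfile(
    allowed_processes=frozenset({"/opt/payments/bin/server"}),
    allowed_outbound_ports=frozenset({443, 5432}),
    allowed_write_paths=("/var/log/payments/", "/tmp/payments/"),
)

normal = {"process": "/opt/payments/bin/server", "dst_port": 443}
odd = {"process": "/usr/bin/curl", "write_path": "/etc/cron.d/job"}
```

Here the `odd` event yields two deviation reasons (an unexpected binary and an unexpected write location), while the `normal` event yields none; the point is that anything outside the profile becomes a reviewable lead rather than an automatic verdict.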
To enable this, an AI-assisted agent analyzes the application’s code, dependencies, and documentation to produce an “expected behavior” profile for a given service. From that profile, it generates draft “good behavior” detections that can be implemented in detection platforms—for example, as SIGMA rules. The agent does not make deployment decisions on its own: teams review the generated detections, decide what to deploy, and iteratively refine them based on operational feedback.
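To make that output concrete, a draft “good behavior” detection often inverts an allowlist: alert when the service binary does something outside its known set. The rule below is an illustrative sketch in SIGMA’s YAML format; the service path and expected child processes are hypothetical placeholders that the reviewing team would replace and tune before any deployment.

```yaml
title: Unexpected Child Process of Payments Service (Draft)
status: experimental
description: Flags process creation by the service binary outside its expected set; a lead for triage, not a verdict.
logsource:
    category: process_creation
    product: linux
detection:
    from_service:
        ParentImage: '/opt/payments/bin/server'   # hypothetical service binary
    expected_children:
        Image:
            - '/opt/payments/bin/worker'          # hypothetical known helpers
            - '/usr/bin/git'
    condition: from_service and not expected_children
level: medium
```

Because the agent only drafts rules like this, reviewers can adjust the allowlist, severity, and scope as the service evolves across releases.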
Next, we deploy the new SIGMA rules alongside the application, together with an on-the-box, real-time monitoring service that watches for anomalous activity. Even when events don’t match a SIGMA rule, the service can surface activity that may be worth a closer look, such as unexpected file system access, network connections, or process creation, so analysts can validate context, tune detections, and determine whether follow-up is needed.
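A minimal sketch of that on-the-box loop, assuming host telemetry arrives as simple event dicts: deployed rules fire first, and events in sensitive categories that match no rule are still queued as lower-priority leads. The rule predicate and category names here are illustrative stand-ins, not a real detection engine.

```python
from collections import deque

# Categories the monitor always considers worth a second look,
# mirroring the file, network, and process telemetry above.
SENSITIVE = {"file_access", "network_connect", "process_create"}

def monitor(events, rules, lead_queue_size=100):
    """Classify telemetry events in one pass.

    `rules` is a list of (name, predicate) pairs standing in for
    deployed detections; matches are surfaced immediately, while
    sensitive events matching no rule become lower-priority leads.
    """
    alerts = []
    leads = deque(maxlen=lead_queue_size)  # bounded queue for analyst triage
    for event in events:
        matched = [name for name, pred in rules if pred(event)]
        if matched:
            alerts.append((event, matched))
        elif event.get("category") in SENSITIVE:
            leads.append(event)
    return alerts, list(leads)

# Hypothetical rule: the service binary should never spawn a shell.
rules = [("shell_spawn", lambda e: e.get("category") == "process_create"
          and e.get("child", "").endswith("/sh"))]

events = [
    {"category": "process_create", "child": "/bin/sh"},
    {"category": "network_connect", "dst_port": 8443},
    {"category": "heartbeat"},
]
alerts, leads = monitor(events, rules)
```

In this run the shell spawn raises an alert, the unmatched network connection lands in the leads queue for context review, and the heartbeat is ignored; a production monitor would add rate limiting and enrichment, but the triage split is the same.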
Conclusion
Zero-days are, by definition, hard to defend against with signature-first approaches alone. By using an AI-assisted agent to derive an application’s expected behavior from its code, dependencies, and documentation, we can generate draft “good behavior” detections for human review, and pair approved rules with on-the-box, real-time monitoring to help surface potential anomalies outside those rules for analyst review.
The outcome is a stronger, service-specific monitoring baseline that can reduce blind spots and support triage and investigation when something deviates from normal. More broadly, AI-assisted systems can support anomaly-detection workflows by accelerating the creation and updating of behavior baselines, tailoring draft detections to each service and release, and helping translate application context into actionable monitoring—while keeping deployment and response decisions with human operators.