Kostas
8 min read·2025-11-26

A Deep Dive into the EDR Telemetry Project's Direct Testing Methodology

At the EDR Telemetry project, our mission is to bring clarity and transparency to the world of Endpoint Detection and Response (EDR) solutions. We believe that security professionals deserve to know exactly what telemetry they are getting from their EDR products, without having to rely on vendor marketing claims. That's why we've developed a rigorous, hands-on testing methodology that is built on a foundation of evidence and reproducibility.

In this blog post, we're going to pull back the curtain and give you a deep dive into our direct testing methodology. We'll walk you through our five core principles, provide concrete examples of how we test, and show you how we ensure that our results are objective and reliable.

The "Why" Behind Our Methodology

Before we get into the nitty-gritty of our testing process, it's important to understand the philosophy that drives it. Our methodology is designed to answer a simple question: What telemetry does an EDR product provide, out-of-the-box, when a specific, controlled action is performed on a host?

To answer this question, we've built our methodology around three key pillars:

  • Hands-On Execution: We don't rely solely on documentation or vendor claims. We run real, controlled tests on actual systems.
  • Raw Telemetry Collection: We capture the raw, unfiltered telemetry generated by the EDR product. We're interested in the ground truth, not just high-level alerts.
  • Evidence-Based Scoring: Every score we assign is backed by concrete evidence, including the specific test that was run, the raw events that were (or were not) produced, and screenshots or log extracts of the outcome.

Now, let's take a closer look at the five core principles that make up our methodology.

The Five Core Principles of Our Methodology

Our direct testing methodology is built on five core principles that ensure our evaluations are fair, consistent, and transparent.

1. Controlled Activity Execution

We believe the only reliable way to evaluate an EDR product's telemetry is to execute real, controlled actions on a host and observe what the product actually records. Alongside this hands-on testing, we also review the vendor's documentation to understand how they claim their telemetry should function and to verify alignment between expected and observed behavior.

To support this process, we've developed our own open-source telemetry-generator frameworks for both Windows and Linux: the Windows Telemetry Generator and the Linux Telemetry Generator.

These frameworks are designed to intentionally trigger specific types of telemetry logs, covering a wide range of activities that are relevant to security professionals, such as process activity, scheduled tasks, user account changes, and more.

For example, to test an EDR product's ability to capture events related to the creation of a scheduled task on Windows, we use our Windows Telemetry Generator to execute a specific test that creates a new scheduled task. This allows us to see exactly what telemetry the EDR product generates in response to this action.
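To make the idea of a controlled action concrete, here is a rough Python sketch of what such a test can boil down to. This is not the project's actual Windows Telemetry Generator code; the task name is hypothetical, and the sketch simply shells out to the built-in schtasks.exe utility to create a benign scheduled task and then removes it so the host is left clean.

```python
import subprocess

TASK_NAME = "EDRTelemetryTest_ScheduledTask"  # hypothetical throwaway task name

def create_scheduled_task():
    """Create a benign scheduled task so the EDR has a discrete action to record."""
    subprocess.run(
        [
            "schtasks.exe", "/Create",
            "/TN", TASK_NAME,                            # task name
            "/TR", r"C:\Windows\System32\notepad.exe",   # harmless action
            "/SC", "ONCE", "/ST", "23:59",               # run once, far in the future
            "/F",                                        # overwrite if it already exists
        ],
        check=True,
    )

def cleanup_scheduled_task():
    """Delete the task afterwards so the host returns to its original state."""
    subprocess.run(["schtasks.exe", "/Delete", "/TN", TASK_NAME, "/F"], check=True)

if __name__ == "__main__":
    create_scheduled_task()
    # ...allow the EDR agent time to record and ship telemetry, then clean up...
    cleanup_scheduled_task()
```

Because the action is scripted and scoped to a known task name, we can repeat it across products and compare the telemetry each one produces for the exact same activity.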

2. Event Capture and Raw Evidence Collection

During every test, we capture the raw, unfiltered telemetry generated by the EDR product. This is a critical part of our methodology, as it allows us to see exactly what an analyst would see during a real-world investigation. We don't rely on high-level alerts or enriched data; we go straight to the source.

This raw evidence can take many forms, including:

  • Event logs or event tables
  • Raw system activity the platform exposes
  • File, process, network, and kernel/eBPF entries
  • Any metadata tied to the event (timestamps, PIDs, user IDs, etc.)

By focusing on raw telemetry, we can provide a much more accurate and granular assessment of an EDR product's capabilities, ensuring that the events are both searchable and explicit about the action that actually took place.
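For illustration, here is the kind of raw event record we look for after a scheduled-task test. The structure and field names below are purely hypothetical and vendor-neutral, not any specific product's schema; the point is that the event names the action explicitly and carries the metadata an analyst needs to pivot.

```python
# Illustrative only: field names are invented, not any vendor's actual schema.
raw_event = {
    "event_type": "scheduled_task_created",   # explicit, searchable action
    "timestamp": "2025-11-26T14:03:11.402Z",
    "hostname": "win11-test-01",
    "process": {
        "pid": 4812,
        "image": r"C:\Windows\System32\schtasks.exe",
        "command_line": 'schtasks.exe /Create /TN "EDRTelemetryTest_ScheduledTask" ...',
        "parent_pid": 3920,
    },
    "task_name": "EDRTelemetryTest_ScheduledTask",
    "user": "LAB\\analyst",
}
```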

3. Scoring Based on Explicit Events Only: The Importance of Direct Telemetry

When it comes to scoring, we have a very simple rule: we only score based on direct, explicit events. This means that for an EDR product to receive a "Yes" score for a particular telemetry category, it must generate a clear, unambiguous event that directly corresponds to the action that was performed.

But why is this distinction so important? It all comes down to the real-world needs of security analysts. When an investigator is tasked with responding to an incident, they need to be able to quickly and easily understand what happened. They need searchable, explicit events that clearly show what actions took place. Relying on indirect or circumstantial events makes this process much more difficult and time-consuming.

Furthermore, when vendors provide direct, explicit events, they empower analysts to build more effective detection rules and threat hunting queries without needing specialized knowledge of the underlying operating system or the EDR product's internal workings. In many investigations, the initial alert is just the starting point. Analysts need to be able to pivot and search for related activity, and having explicit events makes this process much more efficient and reliable.

Here's a breakdown of what we consider a direct event versus an indirect event:

  • A clear event for a user being added counts; (Linux) file attribute changes instead of a user modification event do not.
  • A clear event for a scheduled task being created counts; (Linux) cron process execution instead of a scheduled-task creation event does not.
  • A direct handle-opening or remote thread-creation event counts; a generic "process created" event does not.
  • A direct service creation event counts; (Windows) a registry modification event under HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services does not.

We never use indirect or circumstantial events as evidence. If an EDR does not expose the direct event type, the corresponding category does not receive a "Yes" score. This strict criterion ensures that our scoring reflects a product's ability to provide clear, actionable data for threat hunting and incident response.
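In practice, checking for a direct event comes down to searching the captured telemetry for an explicit event type rather than inferring the action from side effects. The following is a simplified sketch of that check, assuming the captured events have already been exported as a list of dictionaries; the event-type names and fields are illustrative, not any product's real schema.

```python
# Hypothetical sketch of the "explicit events only" rule.
# Field and value names are illustrative, not a real vendor schema.

def has_direct_event(events, expected_type):
    """Return True only if an explicit event of the expected type was captured.

    Indirect evidence (a generic process-creation event for cron, a registry
    write under the Services key, etc.) deliberately does not count.
    """
    return any(event.get("event_type") == expected_type for event in events)

captured_events = [
    {"event_type": "process_created", "image": "/usr/sbin/cron"},   # indirect: ignored
    {"event_type": "scheduled_task_created", "task_name": "demo"},  # direct: counts
]

score = "Yes" if has_direct_event(captured_events, "scheduled_task_created") else "No"
print(score)  # -> Yes
```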

4. Evidence-Backed Comparison

Transparency is at the heart of the EDR Telemetry project. That's why every score we produce is backed by a comprehensive set of evidence, including:

  • The specific test that triggered the activity
  • The raw events that were (or were not) produced
  • Screenshots or log extracts showing the outcome
  • A clear mapping between expected vs. observed telemetry

This evidence-backed approach ensures that our results are not only accurate but also independently verifiable. We want vendors and customers to be able to see exactly how we reached our conclusions.
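As a rough illustration of what that bundle can look like once assembled for a single category, a record along the following lines captures the test, the expected versus observed telemetry, and the supporting artifacts. The structure and field names are hypothetical, not the project's actual storage format.

```python
# Hypothetical evidence record for one scored telemetry category.
evidence = {
    "category": "Scheduled Task Creation (Windows)",
    "test": "Windows Telemetry Generator: scheduled-task test",
    "expected_telemetry": "Explicit scheduled-task-created event",
    "observed_telemetry": ["scheduled_task_created"],        # raw event types captured
    "artifacts": ["console_screenshot.png", "raw_events.json"],
    "score": "Yes",
    "notes": "Event includes task name, creating process, and user context.",
}
```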

5. Out-of-the-Box Testing Only

Finally, all of our tests are conducted on EDR products in their default, out-of-the-box configuration. We do not enable optional features, add custom rules, or tune the product in any way. This is because we want to provide a realistic baseline of what a customer can expect from a product without any additional engineering effort.

By testing in the default state, we can provide a clear and accurate picture of an EDR product's baseline telemetry visibility.

Putting It All Together: A Walkthrough Example

To see how our methodology works in practice, let's walk through a quick example: testing an EDR product's ability to capture telemetry for the creation of a new user account on Linux.

  1. Controlled Activity Execution: We use our Linux Telemetry Generator to run the UserAccountEvents test, which creates a new user account using the libuser library (a minimal sketch of this kind of controlled action follows this list).
  2. Event Capture and Raw Evidence Collection: We monitor the EDR product's console and log files to see what telemetry is generated in response to the test.
  3. Scoring Based on Explicit Events Only: We analyze the captured telemetry to see if there is a direct, explicit event that indicates the creation of a new user account. If we see a clear "user created" event, the product receives a "Yes" score. This is because an analyst can immediately understand what happened and can quickly search for other similar events across the enterprise.
  4. Evidence-Backed Comparison: We document the entire process, including the test that was run, the raw telemetry that was generated, and a clear explanation of why we assigned the score that we did.
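To show how a controlled action like this can be driven end to end, here is a minimal Python sketch following the same pattern. It is not the Linux Telemetry Generator itself, and for simplicity it uses the standard useradd/userdel utilities rather than libuser; the account name is hypothetical. It creates a throwaway account, pauses so the agent can record telemetry, then removes the account.

```python
import subprocess
import time

TEST_USER = "edrtelemetry_test"  # hypothetical throwaway account name

def create_test_user():
    """Create a local user so the EDR has a user-creation action to record."""
    subprocess.run(["useradd", "--no-create-home", TEST_USER], check=True)

def remove_test_user():
    """Delete the account so the host returns to its original state."""
    subprocess.run(["userdel", TEST_USER], check=True)

if __name__ == "__main__":
    create_test_user()
    time.sleep(30)   # give the agent time to generate and ship the event
    remove_test_user()
```

After the action completes, the scoring question is the same as in step 3 above: did the product emit an explicit "user created" event, or only indirect side effects such as file attribute changes?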

Conclusion

Our direct testing methodology is designed to be rigorous, transparent, and objective. By focusing on hands-on execution, raw telemetry, and evidence-based scoring, we can provide security professionals with the information they need to make informed decisions about their EDR solutions.

We believe that transparency is essential for a healthy and competitive EDR market. We encourage you to explore our EDR Telemetry Comparison Table and to get involved in the project by contributing your own findings. Together, we can help raise the bar for EDR telemetry and empower the community to build more effective detection and response programs.
