Behind the Curtain: How the EDR Telemetry Project Approaches Vendor Relations, Evaluations, and Transparency

In the crowded and often confusing market of Endpoint Detection and Response (EDR) solutions, security practitioners are constantly challenged to see beyond marketing claims and understand what’s truly happening under the hood. The EDR Telemetry Project was founded on a simple but powerful idea: to bring clarity to this space by focusing on the telemetry data that EDR products generate. This isn’t about which product has the flashiest dashboard or the most aggressive marketing campaign; it’s about the data.
Today, we are taking a significant step forward in our commitment to transparency. We are introducing a new set of transparency indicators that provide deeper insight into our evaluation process and our relationships with vendors. This article explains why we are launching this initiative, how it works, and what it means for you. We believe that by openly sharing details about vendor access, engagement models, and our stance on NDAs, we can empower the community to make even more informed decisions.
Who This Is For
This article is for the security professionals on the front lines:
- Security practitioners evaluating EDR solutions for their organizations.
- Consultants and MSSPs conducting EDR assessments for clients.
- Security leaders looking to compare vendors based on evidence, not just promises.
- Vendors who want to understand our process and how we work.
- Community members who contribute to our data and validation efforts.
Our audience is technical and aware of the industry landscape. We aim to be professional, neutral, and direct, but also approachable. We’re all in this together, trying to make the industry better.
What the EDR Telemetry Project Is (and Isn’t)
The EDR Telemetry Project is an independent, vendor-neutral initiative focused on evaluating the availability and quality of EDR telemetry. Our goal is to understand what data EDRs actually generate, helping practitioners make informed decisions based on evidence.
It's important to clarify what we are not:
- A detection efficacy test.
- A prevention or blocking comparison.
- A marketing endorsement platform.
Our core focus is, and will always be, on telemetry visibility and validation.
What We Ask of Vendors
To facilitate a thorough and fair evaluation, our requests of vendors are straightforward and designed to be minimally burdensome:
- Time-limited evaluation access or a trial license to the product.
- The ability to generate telemetry in a controlled test environment to verify data output.
That’s it. We do not ask for payment, marketing agreements, or special treatment. Our goal is simply to get hands-on with the product in a way that allows for credible, independent analysis.
How We Engage with Vendors
We believe in open engagement and providing clear context on how we validate information. Our evaluation process follows several distinct validation paths, each with a different level of access and verification. These are validation paths, not product ratings.
- Independent Verification (Direct Access): This is our gold standard. It is achieved when the project is granted direct, unfettered access to a product for independent testing. This access can be provided by the vendor or by a verified, independent member of the community.
- Evidence-Based (No Direct Access): In this model, vendors provide evidence such as screenshots, data exports, or documentation, but do not grant independent access to the product. While valuable, this is not a substitute for hands-on testing, and we label these results accordingly.
- Conditional Verification (Access via NDA/Terms): Sometimes, a vendor provides access or evidence but requires a Non-Disclosure Agreement (NDA) or other terms. We may agree if the terms allow for independent validation and neutral publication of our findings. This provides more confidence than evidence alone, but the restrictions are noted.
The New Transparency Indicators
To make these validation paths immediately clear, we are introducing a set of transparency indicators directly on our platform. You will see a small icon or banner displayed prominently next to each vendor’s name, both in the vendor’s detailed view and on the main comparison table.
These indicators provide an at-a-glance summary of how the information was verified. Hovering over an indicator will reveal a tooltip with a concise explanation of the validation path, such as:
- Direct Access: “Validation was performed with direct, independent access to the product.”
- Community-Verified: “Validation was performed by a verified, independent community member with direct product access.”
- Evidence-Only: “Validation was based on evidence provided by the vendor, without direct access.”
- Conditional Access: “Validation was performed under an NDA or other terms. Click to learn more.”
It is important to note that these indicators are factual statements about our process and cannot be altered by vendors. In some cases, a vendor’s validation path may be a combination of these statuses. For instance, if a vendor provided evidence but required a restrictive NDA for direct access that we declined, the indicator would reflect that context.
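To make the relationship between validation paths and indicators concrete, here is a minimal, illustrative sketch in Python. It is not the project’s actual implementation; the names (ValidationPath, VendorValidation, TOOLTIPS) and the “ExampleEDR” entry are assumptions made purely for illustration of how a combined status could be summarized.

```python
from dataclasses import dataclass, field
from enum import Enum, auto


class ValidationPath(Enum):
    """The validation paths described above, modeled as an enum (illustrative only)."""
    DIRECT_ACCESS = auto()       # direct, independent access provided by the vendor
    COMMUNITY_VERIFIED = auto()  # direct access via a verified, independent community member
    EVIDENCE_ONLY = auto()       # vendor-supplied evidence, no direct access
    CONDITIONAL_ACCESS = auto()  # access or evidence under an NDA or other terms


# Tooltip text shown when hovering over an indicator (taken from the list above).
TOOLTIPS = {
    ValidationPath.DIRECT_ACCESS: "Validation was performed with direct, independent access to the product.",
    ValidationPath.COMMUNITY_VERIFIED: "Validation was performed by a verified, independent community member with direct product access.",
    ValidationPath.EVIDENCE_ONLY: "Validation was based on evidence provided by the vendor, without direct access.",
    ValidationPath.CONDITIONAL_ACCESS: "Validation was performed under an NDA or other terms. Click to learn more.",
}


@dataclass
class VendorValidation:
    """A vendor entry may combine more than one validation path."""
    vendor: str
    paths: list[ValidationPath]
    note: str = ""  # free-text context, e.g. a declined NDA

    def tooltip(self) -> str:
        """Combine the tooltip text for every path that applies, plus any context note."""
        parts = [TOOLTIPS[p] for p in self.paths]
        if self.note:
            parts.append(self.note)
        return " ".join(parts)


# Hypothetical example: evidence was provided, but direct access was offered only
# under a restrictive NDA that was declined, so the indicator reflects both facts.
entry = VendorValidation(
    vendor="ExampleEDR",
    paths=[ValidationPath.EVIDENCE_ONLY],
    note="Direct access was offered only under a restrictive NDA, which the project declined.",
)
print(entry.tooltip())
```

In practice, the indicators live on the comparison site itself; the sketch only shows how a combined status, such as evidence plus a declined NDA, could surface as a single, factual tooltip.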
Our Principled Stance on NDAs
We often get asked why we decline some NDAs. The reason is simple: our independence is not for sale. We prioritize the project's integrity over gaining access to a product. To clarify our position, here are our NDA acceptance criteria:
- What we generally accept: Terms that protect a vendor’s confidential information, such as unpublished product internals, roadmaps, or non-public pricing.
- What we will not accept: Terms that restrict our ability to publish findings, force pre-approval of content, limit comparisons to other vendors, or include gag clauses.
Declining an NDA is a principled decision to uphold our commitment to neutrality and transparency. It is not a judgment on the vendor or their product.
Transparency Over Punishment
Our approach to transparency is informational, not punitive. We do not reduce scores, penalize rankings, or adjust our technical evaluations based on a vendor’s level of engagement. Instead, we:
- Clearly flag any access limitations.
- Separate the validation context from the technical scoring.
- Let you, the practitioner, decide how much weight to assign to these factors.
Community-Driven Validation
A significant portion of our validation data comes from our community of independent practitioners. These contributors often use the very products we evaluate in real-world environments and can provide invaluable insights. They may publish their evidence publicly or share it privately with the project.
To reduce single-source bias, especially when a contributor may not be fully independent, a second party (often the project owner) will validate the results. This helps us ensure the integrity and accuracy of our data.
Public Evidence and Peer Review
Whenever possible, we make our validation artifacts public on GitHub. This includes:
- Raw logs
- Screenshots
- Methodology notes
- Reproducible outputs
This allows for peer review, independent verification, and community scrutiny. While private submissions are still validated, we believe that public evidence is the gold standard for transparency.
Balancing Ideals with Reality
In a perfect world, every evaluation would be conducted with full, unrestricted access, and all evidence would be publicly available. We know, however, that reality is more complex. Not all vendors are willing or able to provide direct access, and not all community contributors can publish their findings publicly due to their own operational constraints.
Our commitment is to navigate these complexities with honesty. Rather than making assumptions or leaving gaps, we focus on documenting the limitations of our analysis and making our validation paths as clear as possible. This ensures you always have the context needed to understand the scope of our findings.
Why This Matters to You
Ultimately, this entire framework is designed to serve you, the practitioner. In an industry saturated with bold marketing claims, our goal is to provide a source of truth grounded in verifiable evidence. By exposing the boundaries and limitations of each evaluation, we empower you to make more informed trade-offs that align with your organization’s specific needs and risk appetite.
This level of transparency is about building long-term trust. It moves the conversation beyond simple scores or rankings and toward a more nuanced understanding of product capabilities. Our goal isn’t to declare a single “best” product, but to provide honest, contextualized data about what we know, how we know it, and what limitations exist. That is the foundation of a trustworthy, community-driven project.
The Path Forward
Transparency is an ongoing commitment, not a one-time achievement. We welcome community involvement, and our door is always open for vendor engagement. We will not compromise our independence, and we will continue to prioritize evidence over marketing, transparency over convenience, and community over authority.
We hope this article leaves you with a clear understanding of how and why you can trust the data from the EDR Telemetry Project.
