The AI Neutrality Score
Every tool in the directory is scored using the ANS — a four-dimension framework designed to measure how trustworthy, transparent, and rigorous a compliance tool actually is.
How the score works
The ANS is a 0–100 composite score. Each of the four dimensions is scored independently and weighted by importance. A tool scoring 85 or above is considered best-in-class. Scores between 60 and 84 indicate a solid tool with meaningful gaps. Scores below 60 indicate significant concerns that practitioners should investigate before purchasing.
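The weighting and banding described above can be sketched as follows. The equal weights are an illustrative assumption — the actual dimension weights are not published in this section:

```python
# Sketch of the ANS composite. The equal weights below are an
# assumption for illustration, not the directory's real weights.
WEIGHTS = {
    "transparency": 0.25,
    "neutrality": 0.25,
    "methodology": 0.25,
    "evidence_quality": 0.25,
}

def ans_composite(dimension_scores: dict[str, float]) -> float:
    """Combine four 0-100 dimension scores into a weighted 0-100 composite."""
    return round(sum(WEIGHTS[d] * dimension_scores[d] for d in WEIGHTS), 1)

def band(score: float) -> str:
    """Map a composite score to its interpretation band."""
    if score >= 85:
        return "best-in-class"
    if score >= 60:
        return "solid, with meaningful gaps"
    return "significant concerns"

score = ans_composite({
    "transparency": 90,
    "neutrality": 80,
    "methodology": 85,
    "evidence_quality": 75,
})
print(score, band(score))  # 82.5 solid, with meaningful gaps
```

A weighted average keeps each dimension on the same 0–100 scale as the composite, so a single weak dimension visibly drags the overall score down rather than disappearing into a total.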
Transparency
Does the vendor clearly disclose how their product works, how evidence is collected, and how audits are conducted?
High-scoring tools publish their methodology publicly, maintain clear audit trails, provide exportable evidence packages, and avoid black-box automation that auditors cannot independently verify.
Signals we look for
- Public documentation of evidence collection logic
- Audit-ready export formats (PDF, CSV, ZIP)
- Changelog and update history published
- API access for third-party evidence retrieval
- Clear disclosure of what is automated vs. manual
Neutrality
Does the vendor have conflicts of interest that could bias their assessments?
We evaluate vendor relationships with auditors and reseller arrangements, and we check whether compliance recommendations are influenced by commercial incentives rather than objective control coverage.
Signals we look for
- No exclusive auditor partnerships
- Framework coverage not tied to revenue deals
- Independent third-party validation of scoring
- Conflict-of-interest policy published
- No reseller arrangements with auditing bodies
Methodology
How rigorous is the technical approach to compliance automation?
We evaluate coverage depth across frameworks, quality of control mapping, evidence validation logic, and how the platform handles ambiguous or overlapping controls across multiple frameworks.
Signals we look for
- Control mapping with explicit framework citations
- Cross-framework gap analysis capability
- Evidence validation beyond self-attestation
- Regular framework update cadence documented
- Handles overlapping controls across frameworks
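The mapping signals above can be made concrete with a small data-structure sketch. The record shape, field names, and clause citations here are illustrative assumptions, not any vendor's actual schema:

```python
# Hypothetical control-mapping record with explicit framework citations.
# All field names and clause references are illustrative.
from dataclasses import dataclass, field

@dataclass
class ControlMapping:
    control_id: str
    description: str
    # framework name -> list of cited clause references
    citations: dict[str, list[str]] = field(default_factory=dict)

    def frameworks(self) -> set[str]:
        return set(self.citations)

def overlapping(controls: list[ControlMapping]) -> list[ControlMapping]:
    """Controls that satisfy requirements in more than one framework."""
    return [c for c in controls if len(c.frameworks()) > 1]

mfa = ControlMapping(
    "AC-2", "Enforce MFA for all privileged accounts",
    {"SOC 2": ["CC6.1"], "ISO 27001": ["A.5.17"], "NIST 800-53": ["IA-2(1)"]},
)
logging_ctl = ControlMapping("AU-1", "Centralized audit logging", {"SOC 2": ["CC7.2"]})
print([c.control_id for c in overlapping([mfa, logging_ctl])])  # ['AC-2']
```

Storing citations per framework is what enables both signals at once: explicit citations for auditors, and a trivial cross-framework gap/overlap query for practitioners running against multiple frameworks.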
Evidence Quality
How strong and verifiable is the evidence the tool produces for auditors?
Auditors need evidence that is timestamped, tamper-evident, and traceable to specific controls. We evaluate whether the platform produces audit-grade artifacts or merely screenshots and self-reported data.
Signals we look for
- Cryptographic or tamper-evident audit logs
- Direct integration with source systems (AWS, GitHub, etc.)
- Read-only auditor portal
- Evidence linked to specific control requirements
- Automated evidence refresh cadence
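A minimal sketch of what "tamper-evident audit logs" means in practice: each entry commits to the previous entry's hash, so editing any record breaks the chain. This is an illustrative hash-chain pattern, not a specific product's implementation:

```python
# Minimal hash-chained audit log: timestamped, tamper-evident, and
# traceable to a specific control. Illustrative pattern only.
import hashlib
import json
import time

def append_entry(log: list[dict], control_id: str, evidence: str) -> None:
    prev_hash = log[-1]["hash"] if log else "0" * 64
    entry = {
        "control_id": control_id,   # evidence linked to a specific control
        "evidence": evidence,
        "timestamp": time.time(),
        "prev_hash": prev_hash,
    }
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["hash"] = hashlib.sha256(payload).hexdigest()
    log.append(entry)

def verify(log: list[dict]) -> bool:
    """Recompute every hash; any edited or reordered entry breaks the chain."""
    prev = "0" * 64
    for entry in log:
        body = {k: v for k, v in entry.items() if k != "hash"}
        if body["prev_hash"] != prev:
            return False
        payload = json.dumps(body, sort_keys=True).encode()
        if hashlib.sha256(payload).hexdigest() != entry["hash"]:
            return False
        prev = entry["hash"]
    return True

log: list[dict] = []
append_entry(log, "CC6.1", "MFA enforced for all admin users")
append_entry(log, "CC7.2", "CloudTrail logging enabled")
assert verify(log)
log[0]["evidence"] = "tampered"
assert not verify(log)
```

This is the distinction the Evidence Quality dimension draws: a screenshot can be silently replaced, while a chained log forces any after-the-fact edit to be detectable by the auditor.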
Update cadence
Scores are reviewed quarterly. When a vendor ships a material update — new integrations, pricing changes, or methodology shifts — we re-evaluate within 30 days. Score changes are logged in the tool’s changelog so practitioners can track how a vendor has evolved.
Vendors may submit evidence for re-evaluation at any time by emailing hello@compliancedirectory.io. Evidence submissions are evaluated within 14 business days.
Independence policy
No vendor pays to influence their ANS score, their position in the directory, or any editorial content about their tool. Commercial relationships (Pro listings, Enterprise contracts) are strictly limited to listing visibility features — they have zero effect on scoring.
Reviewers are required to disclose any prior relationship with a vendor before conducting a review. Tools scored by a reviewer with a conflict of interest are independently verified by a second reviewer.