Problem
The current JsonSchemaCheckIT test output is not defensible upon detailed examination. While it shows individual test failures and skips, it lacks comprehensive metrics that would allow us to make credible claims about compatibility percentages.
Current Issues
- No aggregate statistics: We can see individual failures but have no running totals
- Unclear skip categorization: All skips are treated the same without distinguishing between:
- Unsupported schema groups (compile-time failures)
- Test execution exceptions
- Validation mismatches in lenient mode
- No structured output: Results can't be easily consumed by tools or CI systems
- Manual percentage calculation: The ~70% compatibility claim is an estimate, not a measured value
Proposed Solution
Implement comprehensive metrics collection in JsonSchemaCheckIT with:
Metrics to Track
- Groups discovered: Total test groups found in test suite
- Tests discovered: Total individual tests found
- Validations run: Actual validation attempts made
- Passed/Failed: Clear success/failure counts
- Skipped categories:
- unsupportedSchemaGroup: Whole groups skipped at compile time
- testException: Individual tests that threw exceptions
- lenientMismatch: Expected ≠ actual in lenient mode
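A minimal sketch of how these counters could be grouped, using LongAdder as proposed in the implementation approach below (the class and field names here are illustrative, not existing code):

```java
import java.util.concurrent.atomic.LongAdder;

/** Illustrative counter holder; LongAdder keeps increments thread-safe under parallel execution. */
final class SuiteMetrics {
    final LongAdder groupsDiscovered = new LongAdder();
    final LongAdder testsDiscovered  = new LongAdder();
    final LongAdder validationsRun   = new LongAdder();
    final LongAdder passed           = new LongAdder();
    final LongAdder failed           = new LongAdder();
    // Skip categories tracked separately so the summary can break them down.
    final LongAdder unsupportedSchemaGroup = new LongAdder();
    final LongAdder testException          = new LongAdder();
    final LongAdder lenientMismatch        = new LongAdder();
}
```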
Output Format
Console summary line:
JSON-SCHEMA SUITE (LENIENT): groups=142 testsScanned=1234 run=987 passed=701 failed=0 skipped={unsupported=81, exception=35, lenientMismatch=225}
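Assembling that line from the counters in the SuiteMetrics sketch above could look roughly like this (the format string mirrors the example; the helper method itself is hypothetical):

```java
// Hypothetical helper; could live alongside the SuiteMetrics sketch above.
static String summaryLine(SuiteMetrics m, boolean lenient) {
    return String.format(
        "JSON-SCHEMA SUITE (%s): groups=%d testsScanned=%d run=%d passed=%d failed=%d "
            + "skipped={unsupported=%d, exception=%d, lenientMismatch=%d}",
        lenient ? "LENIENT" : "STRICT",
        m.groupsDiscovered.sum(), m.testsDiscovered.sum(), m.validationsRun.sum(),
        m.passed.sum(), m.failed.sum(),
        m.unsupportedSchemaGroup.sum(), m.testException.sum(), m.lenientMismatch.sum());
}
```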
Optional Artifacts
When -Djson.schema.metrics=json or -Djson.schema.metrics=csv:
- Write target/json-schema-compat.{json|csv} with structured data
- Include per-file breakdowns for detailed analysis
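A possible export hook keyed off the proposed system property; it hand-rolls the JSON/CSV output to honor the zero-dependencies criterion (the method name and the exact file contents are illustrative, and only a partial field breakdown is shown):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Hypothetical helper; called once after the suite finishes.
static void maybeExport(SuiteMetrics m) throws IOException {
    String mode = System.getProperty("json.schema.metrics", "");
    if (!mode.equals("json") && !mode.equals("csv")) {
        return; // no artifact requested; console summary only
    }
    Path target = Path.of("target", "json-schema-compat." + mode);
    String body;
    if (mode.equals("csv")) {
        body = "metric,value\n"
            + "groups," + m.groupsDiscovered.sum() + "\n"
            + "passed," + m.passed.sum() + "\n"
            + "failed," + m.failed.sum() + "\n";
    } else {
        // Hand-rolled JSON keeps the test free of extra libraries.
        body = "{\"groups\":" + m.groupsDiscovered.sum()
            + ",\"passed\":" + m.passed.sum()
            + ",\"failed\":" + m.failed.sum() + "}";
    }
    Files.writeString(target, body);
}
```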
Acceptance Criteria
- Preserve existing behavior: Strict vs lenient semantics unchanged
- Thread-safe: Use concurrent data structures for parallel execution
- Zero dependencies: No additional libraries required
- Backwards compatible: Existing test runs work exactly as before
- Actionable metrics: Enable data-driven compatibility improvements
Implementation Approach
The solution involves:
- Adding a SuiteMetrics class with LongAdder counters
- Hooking metrics collection at existing decision points
- Adding an @AfterAll method for final reporting (see the sketch below)
- Optional JSON/CSV export functionality
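Wired together, the reporting hook could look roughly like the following (JUnit 5 is assumed; the helper methods are the hypothetical sketches from earlier in this issue, and only the reporting side is shown):

```java
import org.junit.jupiter.api.AfterAll;

class JsonSchemaCheckIT {
    private static final SuiteMetrics METRICS = new SuiteMetrics();

    // ... existing discovery/validation code increments METRICS at its decision points ...

    @AfterAll
    static void reportSuiteMetrics() throws Exception {
        // summaryLine(...) and maybeExport(...) are the hypothetical helpers sketched above.
        System.out.println(summaryLine(METRICS, /* lenient= */ true));
        maybeExport(METRICS); // writes target/json-schema-compat.{json|csv} when requested
    }
}
```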
This will provide defensible, repeatable metrics for compatibility claims and help prioritize implementation efforts based on actual test coverage data.