diff --git a/.auxiliary/configuration/conventions.md b/.auxiliary/configuration/AGENTS.md
similarity index 50%
rename from .auxiliary/configuration/conventions.md
rename to .auxiliary/configuration/AGENTS.md
index fafa5bc..ca991db 100644
--- a/.auxiliary/configuration/conventions.md
+++ b/.auxiliary/configuration/AGENTS.md
@@ -1,7 +1,5 @@
# Context
-
-
- Project overview and quick start: README.rst
- Product requirements and goals: documentation/prd.rst
- System architecture and design: @documentation/architecture/
@@ -9,13 +7,45 @@
- Current session notes and TODOs: @.auxiliary/notes/
- Use the 'context7' MCP server to retrieve up-to-date documentation for any SDKs or APIs.
+- Use the 'librovore' MCP server to search structured documentation sites with object inventories (Sphinx-based, or MkDocs with mkdocstrings). This bridges the gap between curated documentation (context7) and raw scraping (firecrawl).
- Check README files in directories you're working with for insights about architecture, constraints, and TODO items.
- Update files under `.auxiliary/notes` during conversation, removing completed tasks and adding emergent items.
+
+# OpenSpec Instructions
+
+These instructions are for AI assistants working in this project.
+
+Always open `@/openspec/AGENTS.md` when the request:
+- Mentions planning or proposals (words like proposal, spec, change, plan)
+- Introduces new capabilities, breaking changes, architecture shifts, or big performance/security work
+- Sounds ambiguous and you need the authoritative spec before coding
+
+Use `@/openspec/AGENTS.md` to learn:
+- How to create and apply change proposals
+- Spec format and conventions
+- Project structure and guidelines
+
+Keep this managed block so 'openspec update' can refresh the instructions.
+
+
+
+# Development Standards
+
+Before implementing code changes, consult these files in `.auxiliary/instructions/`:
+- `practices.rst` - General development principles (robustness, immutability, exception chaining)
+- `practices-python.rst` - Python-specific patterns (module organization, type annotations, wide parameter/narrow return)
+- `nomenclature.rst` - Naming conventions for variables, functions, classes, exceptions
+- `style.rst` - Code formatting standards (spacing, line length, documentation mood)
+- `validation.rst` - Quality assurance requirements (linters, type checkers, tests)
+
# Operation
- Use `rg --line-number --column` to get precise coordinates for MCP tools that require line/column positions.
- Choose appropriate editing tools based on the task complexity and your familiarity with the tools.
+- Use the 'pyright' MCP server where appropriate:
+ - `rename_symbol` for refactors
+ - `references` for precise symbol analysis
- Batch related changes together when possible to maintain consistency.
- Use relative paths rather than absolute paths when possible.
- Do not write to paths outside the current project unless explicitly requested.
@@ -24,8 +54,7 @@
# Commits
- Use `git status` to ensure all relevant changes are in the changeset.
-- Use the `python-conformer` agent to review changes that include Python code before committing.
-- Do **not** commit without explicit user approval. Unless the user has requested the commit, ask for a review of your edits first.
+- Do **not** commit without explicit user approval. Unless the user has requested the commit, **ask first** for a review of your work.
- Use present tense, imperative mood verbs (e.g., "Fix" not "Fixed").
- Write sentences with proper punctuation.
- Include a `Co-Authored-By:` field as the final line; it should include the model name and a no-reply address.
diff --git a/.auxiliary/configuration/claude/agents/python-conformer.md b/.auxiliary/configuration/claude/agents/python-conformer.md
deleted file mode 100644
index 4733aa0..0000000
--- a/.auxiliary/configuration/claude/agents/python-conformer.md
+++ /dev/null
@@ -1,312 +0,0 @@
----
-name: python-conformer
-description: Use this agent ONLY when changes include Python code (.py and .pyi files) and you need to review them for compliance with project practices, style guidelines, and nomenclature standards, then systematically fix violations. Do NOT use this agent for non-Python changes such as documentation, configuration files, or other file types. Examples: Context: The user has just written a new Python function and wants to ensure it follows project standards. user: 'I just wrote this function for processing user data. Can you review it?' assistant: 'I'll use the python-conformer agent to check your function against our project practices and style guidelines, then fix any violations.' Since the user wants code reviewed for compliance, use the python-conformer agent to analyze the code against project standards. Context: The user has completed a module refactor and wants to verify compliance before committing. user: 'I've finished refactoring the authentication module. Please check if it meets our coding standards.' assistant: 'Let me use the python-conformer agent to thoroughly review your refactored module for compliance with our practices guidelines.' The user needs compliance verification for recently refactored code, so use the python-conformer agent. Context: The user wants to review staged Python changes before committing. user: 'I've modified several Python modules. Please review my staged changes for compliance before I commit.' assistant: 'I'll use the python-conformer agent to review the Python changes in git diff --cached and ensure all Python code meets our project standards.' Pre-commit review of staged Python changes is a perfect use case for the python-conformer agent.
-model: sonnet
-color: red
----
-
-You are an expert software engineer specializing in Python code quality assurance and
-compliance conformance. Your primary responsibility is to systematically review Python code
-against established project practices, style guidelines, and nomenclature
-standards, then apply comprehensive remediation to bring code into full compliance.
-
-**IMPORTANT**: Only review and modify Python (.py and .pyi) files. If the
-changes do not include Python code, politely decline and explain that you are
-scoped specifically to Python code compliance review.
-
-## Prerequisites
-
-- **Read project documentation guides FIRST**:
- - @.auxiliary/instructions/practices.rst
- - @.auxiliary/instructions/style.rst
- - @.auxiliary/instructions/nomenclature.rst
-- Have read `CLAUDE.md` for project-specific guidance
-
-## EXECUTION STRUCTURE
-
-**PHASE 1: COMPREHENSIVE REVIEW**
-Perform complete analysis and generate detailed compliance report before making any changes.
-
-**PHASE 2: SYSTEMATIC REMEDIATION**
-Apply all identified fixes in systematic order, validating with linters after completion.
-
-## COMPLIANCE STANDARDS
-
-### Design Standards
-
-#### 1. Module Organization
-
-**Content Order:**
-1. Imports (following practices guide patterns)
-2. Common type aliases (`TypeAlias` declarations)
-3. Private variables/functions for defaults (grouped semantically)
-4. Public classes and functions (alphabetical)
-5. All other private functions (alphabetical)
-
-**Scope and Size:**
-- Maximum 600 lines
-- Action: Analyze oversized modules with separation of concerns in mind.
-  Suggest splitting into focused modules with narrower responsibilities or
-  functionality.
-
-#### 2. Imports
-
-- At the module level, other modules and their attributes MUST be imported as
- private aliases, except in `__init__`, `__`, or specially-designated
- re-export modules.
-- Within function bodies, other modules and their attributes MAY be imported as
- public variables.
-- Subpackages SHOULD define a special `__` re-export module, which has `from
- ..__ import *` plus any other imports which are common to the subpackage.
-- Common modules, such as `os` or `re`, SHOULD be imported as public within the
- special package-wide `__.imports` re-export module rather than as private
- aliases within an implementation module.
-- The `__all__` attribute SHOULD NOT be provided. This is unnecessary if the
-  module namespace only contains public classes and functions which are part of
-  its interface; this avoids additional interface maintenance.
-
-#### 3. Dependency Injection
-
-- Ask: is this function testable without monkeypatching?
-- Functions SHOULD provide injectable parameters with sensible defaults instead
- of hard-coded dependencies within function implementation.
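For example, a minimal sketch of the injectable-dependency pattern (hypothetical function and parameter names, using only the standard-library `time` module, not any project API):

```python
import time

def snapshot_label( prefix: str, clock = time.time ) -> str:
    ''' Produces a timestamped label; the clock is injectable. '''
    moment = int( clock( ) )
    return f"{prefix}-{moment}"

# Tests inject a deterministic clock instead of monkeypatching:
assert snapshot_label( 'run', clock = lambda: 0 ) == 'run-0'
```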
-
-#### 4. Robustness Principle (Postel's Law)
-"Be conservative in what you send; be liberal in what you accept."
-
-- Public functions SHOULD define wide, abstract argument types.
-- All functions SHOULD define narrow, concrete return types.
-- Private functions MAY define narrow, concrete argument types.
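A small illustrative sketch of this principle (hypothetical function, standard-library types only):

```python
from collections.abc import Iterable

def normalize_names( names: Iterable[ str ] ) -> tuple[ str, ... ]:
    ''' Accepts any iterable of strings (wide); returns a tuple (narrow, immutable). '''
    return tuple( sorted( set( names ) ) )

# Callers may pass a list, set, or generator; the return type stays narrow.
assert normalize_names( [ 'beta', 'alpha', 'beta' ] ) == ( 'alpha', 'beta' )
```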
-
-#### 5. Immutability
-
-- Classes SHOULD inherit from immutable classes (`__.immut.Object`,
- `__.immut.Protocol`, `__.immut.DataclassObject`, etc...).
-- Functions SHOULD return values of immutable types (`None`, `int`, `tuple`,
- `frozenset`, `__.immut.Dictionary`, etc...) and not mutable types (`list`,
- `dict`, `set`, etc...).
-
-#### 6. Proper Exception Management
-
-- One `try .. except` suite per statement which can raise exceptions. I.e.,
- avoid covering multiple statements with a `try` block whenever possible.
-- Tryceratops complaints MUST NOT be suppressed with `noqa` pragmas.
-- Bare exceptions SHOULD NOT be raised.
- - Exemption: `NotImplementedError` MAY be raised as a bare exception.
- - Relevant exception classes SHOULD be used from the relevant `exceptions`
- module within the package.
- - New exception classes MAY be created as needed within the relevant
- `exceptions` module; these MUST follow the nomenclature guide and be
- inserted in correct alphabetical order.
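An illustrative sketch of narrow `try` blocks with exception chaining (hypothetical function; the built-in `ValueError` stands in for a class from a package `exceptions` module):

```python
def parse_port( text: str ) -> int:
    ''' Parses a TCP port number, chaining the original cause on failure. '''
    try: port = int( text )  # one try suite per raising statement
    except ValueError as exc:
        raise ValueError( f"Invalid port: {text!r}" ) from exc
    if not 0 < port < 65536:
        raise ValueError( f"Port out of range: {port}" )
    return port
```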
-
-### Quality Assurance
-
-#### 1. Linter Suppressions
-
-- Linter suppressions MUST be reviewed critically.
-- Linter complaints SHOULD NOT be suppressed via `noqa` or `type` pragmas
- without compelling justification.
-- Suppressions that mask design problems MUST be investigated and resolved
- rather than ignored.
-
-**Acceptable Suppressions:**
-- `noqa: PLR0913` MAY be used for a CLI or service API with many parameters,
- but data transfer objects SHOULD be considered in most other cases.
-- `noqa: S*` MAY be used for properly constrained and vetted subprocess
- executions or Internet content retrievals.
-
-**Unacceptable Suppressions (require investigation):**
-- `type: ignore` MUST NOT be used, except in extremely rare circumstances. Such
- suppressions usually indicate missing third-party dependencies or type stubs,
- inappropriate type variables, or a bad inheritance pattern.
-- `__.typx.cast` SHOULD NOT be used, except in extremely rare circumstances.
-  Such casts suppress normal type checking and usually indicate the same
-  problems as `type: ignore`.
-- Most other `noqa` suppressions.
-
-### Style Standards
-
-#### 1. Spacing and Delimiters
-
-- Space padding MUST be present inside delimiters.
- - Format: `( arg )`, `[ item ]`, `{ key: value }`
- - Format: `( )`, `[ ]`, `{ }`, not `()`, `[]`, `{}`
-- Space padding MUST be present around keyword argument `=`.
- - Format: `foo = 42`
-
-#### 2. Strings
-
-- Docstrings MUST use triple single quotes with narrative mood.
- - Format: `''' Processes data... '''` not `"""Process data..."""`
-- F-strings and `.format` strings MUST be enclosed in double quotes.
- - Format: `f"text {variable}"`, not `f'text {variable}'`
- - Format: `"text {count}".format( count = len( items ) )`
-- F-strings and format strings MUST NOT embed function calls.
-- Exception messages and log messages SHOULD be enclosed in double quotes
- rather than single quotes.
-- Plain data strings SHOULD be enclosed in single quotes, unless they contain
- single quotes.
-
-#### 3. Vertical Compactness
-
-- Blank lines MUST NOT appear within function bodies.
-- Vertical compactness MUST be maintained within function implementations.
-- Single-line statements MAY follow certain block keywords on the same line
- when appropriate.
- - Format: `if condition: return value`
- - Format: `elif condition: continue`
- - Format: `else: statement`
- - Format: `try: statement`
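An illustrative sketch of these compactness rules (hypothetical function, not from the project):

```python
def classify( value ):
    ''' Classifies a value without blank lines inside the body. '''
    if value is None: return 'absent'
    if isinstance( value, str ): return 'text'
    try: count = len( value )
    except TypeError: return 'scalar'
    return 'collection' if count else 'empty'
```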
-
-#### 4. Multi-line Constructs
-
-- Function invocations, including class instantiations, SHOULD place the
- closing `)` on the same line as the last argument to the function.
-- The last argument of an invocation MUST NOT be followed by a trailing comma.
-- Comprehensions and generator expressions SHOULD place the closing delimiter
- on the same line as the last statement in the comprehension or generator
- expression.
-- Parenthetical groupings SHOULD place the closing delimiter on the same line
- as the last statement in the grouping.
-- All other multi-line constructs (function signatures, annotations, lists,
- dictionaries, etc...) MUST place the closing delimiter on a separate line
- following the last item and MUST dedent the closing delimiter to match the
- opening line indentation.
-- If a closing delimiter is not on the same line as the last item in a
- multi-line construct, then the last item MUST be followed by a trailing
- comma.
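An illustrative sketch of these delimiter rules (hypothetical function):

```python
def assemble_manifest(
    entries,
    limit = 2,  # last signature item keeps a trailing comma
):  # signature closer dedents to the opening line indentation
    ''' Builds a manifest of entries no longer than the limit. '''
    selection = tuple(
        entry for entry in entries if len( entry ) <= limit )
    # Invocations keep the closing `)` on the last argument line:
    return dict(
        count = len( selection ),
        entries = selection )
```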
-
-#### 5. Nomenclature
-
-- Argument, attribute, and variable names SHOULD NOT be compound words,
- separated by underscores, except in cases where this is necessary to
- disambiguate.
-- Argument and variable names SHOULD NOT duplicate parts of the function name.
-- Attribute names SHOULD NOT duplicate parts of the class name.
-- Class names SHOULD adhere to the nomenclature guide.
-- Function names SHOULD adhere to the nomenclature guide.
-
-#### 6. Comments
-
-- Comments that describe obvious behavior SHOULD NOT be included.
-- TODO comments SHOULD be added for uncovered edge cases and future work.
-- Comments MUST add meaningful context, not restate what the code does.
-
-### Comprehensive Example: Real-World Function with Multiple Violations
-
-Here is a function that demonstrates many compliance violations:
-
-```python
-def _group_documents_by_field(
- documents: list[ dict[ str, __.typx.Any ] ],
- field_name: __.typx.Optional[ str ]
-) -> dict[ str, list[ dict[ str, __.typx.Any ] ] ]:
- ''' Groups documents by specified field for inventory format compatibility.
- '''
- if field_name is None:
- return { }
-
- groups: dict[ str, list[ dict[ str, __.typx.Any ] ] ] = { }
- for doc in documents:
- # Get grouping value, with fallback for missing field
- group_value = doc.get( field_name, f'(missing {field_name})' )
- if isinstance( group_value, ( list, dict ) ):
- # Handle complex field types by converting to string
- group_value = str( group_value ) # type: ignore[arg-type]
- elif group_value is None or group_value == '':
- group_value = f'(missing {field_name})'
- else:
- group_value = str( group_value )
-
- if group_value not in groups:
- groups[ group_value ] = [ ]
-
- # Convert document format back to inventory object format
- inventory_obj = {
- 'name': doc[ 'name' ],
- 'role': doc[ 'role' ],
- 'domain': doc.get( 'domain', '' ),
- 'uri': doc[ 'uri' ],
- 'dispname': doc[ 'dispname' ]
- }
- if 'fuzzy_score' in doc:
- inventory_obj[ 'fuzzy_score' ] = doc[ 'fuzzy_score' ]
- groups[ group_value ].append( inventory_obj )
- return groups
-```
-
-**Violations identified:**
-1. **Narrow parameter types**: `list[dict[...]]` instead of wide `__.cabc.Sequence[__.cabc.Mapping[...]]`
-2. **Type suppression abuse**: `# type: ignore[arg-type]` masks real design issue
-3. **Mutable container return**: Returns `dict` instead of `__.immut.Dictionary`
-4. **Function body blank lines**: Empty lines breaking vertical compactness
-5. **Vertical compactness**: `return { }` could be same line as `if`
-6. **Unnecessary comments**: "Handle complex field types by converting to string" states obvious
-7. **F-string quotes**: Using single quotes in f-strings instead of double
-8. **Nomenclature duplication**: `group_value` repeats "group" from function name
-9. **Underscore nomenclature**: `field_name` could be `field`, `group_value` could be `value`
-10. **Mutable container creation**: Using `{ }` and `[ ]` instead of immutable alternatives
-11. **Trailing comma**: Missing trailing comma in dictionary, affecting delimiter placement
-12. **Single-line else**: `group_value = str(group_value)` could be same line as `else`
-13. **Design pattern**: Could use `collections.defaultdict` instead of manual initialization
-
-**AFTER - Corrected version:**
-```python
-def _group_documents_by_field(
- documents: __.cabc.Sequence[ __.cabc.Mapping[ str, __.typx.Any ] ],
- field: __.typx.Absential[ str ] = __.absent,
-) -> __.immut.Dictionary[
- str, tuple[ __.cabc.Mapping[ str, __.typx.Any ], ... ]
-]:
- ''' Groups documents by specified field. '''
- if __.is_absent( field ): return __.immut.Dictionary( )
- groups = __.collections.defaultdict( list )
- for doc in documents:
- value = doc.get( field, f"(missing {field})" )
- if isinstance( value, ( list, dict ) ): value = str( value )
- elif value is None or value == '': value = f"(missing {field})"
- else: value = str( value )
- obj = __.immut.Dictionary(
- name = doc[ 'name' ],
- role = doc[ 'role' ],
- domain = doc.get( 'domain', '' ),
- uri = doc[ 'uri' ],
- dispname = doc[ 'dispname' ],
- **( { 'fuzzy_score': doc[ 'fuzzy_score' ] }
- if 'fuzzy_score' in doc else { } ) )
- groups[ value ].append( obj )
- return __.immut.Dictionary(
- ( key, tuple( items ) ) for key, items in groups.items( ) )
-```
-
-## REVIEW REPORT FORMAT
-
-**PHASE 1 OUTPUT:**
-1. **Compliance Summary**: Overall assessment with file-by-file breakdown
-2. **Standards Violations**: Categorized list with specific line references and explanations
-3. **Complexity Analysis**: Function and module size assessments
-4. **Remediation Plan**: Systematic order of fixes to be applied
-5. **Risk Assessment**: Any changes that require careful validation
-
-**PHASE 2 OUTPUT:**
-1. **Applied Fixes**: Summary of all changes made, categorized by standard
-2. **Validation Results**: Linter output before and after changes
-3. **Files Modified**: Complete list with brief description of changes
-4. **Manual Review Required**: Any issues requiring human judgment
-
-## TOOL PREFERENCES
-
-- **Precise coordinates**: Use `rg --line-number --column` for exact line/column positions
-- **File editing**: Prefer `text-editor` MCP tools for line-based edits to avoid conflicts
-- **File synchronization**: Always reread files with `text-editor` tools after modifications by other tools (like `pyright` or `ruff`)
-- **Batch operations**: Group related changes together to minimize file modification conflicts between different MCP tools
-
-## EXECUTION REQUIREMENTS
-
-- **PHASE 1 REQUIRED**: Complete review and report before any remediation
-- **PHASE 2 REQUIRED**: Apply fixes systematically, validate with `hatch --env develop run linters`
-- **Validation command**: `hatch --env develop run linters` must produce clean output before completion
-- **Focus on compliance**: Maintain exact functionality while improving standards adherence
-- **Reference specific lines**: Always include line numbers and concrete examples
-- **Document reasoning**: Explain why each standard matters and how fixes align with project practices
-- **Guide access**: If any prerequisite guide cannot be accessed, stop and inform the user
diff --git a/.auxiliary/configuration/claude/commands/cs-annotate-release.md b/.auxiliary/configuration/claude/commands/cs-annotate-release.md
deleted file mode 100644
index 2c5f3af..0000000
--- a/.auxiliary/configuration/claude/commands/cs-annotate-release.md
+++ /dev/null
@@ -1,93 +0,0 @@
----
-allowed-tools: Bash(git log:*), Bash(git show:*), Bash(ls:*), Bash(grep:*), Grep, Read, Write, LS
-description: Create Towncrier news fragments for user-facing changes since last release cleanup
----
-
-# Write Release Notes
-
-**NOTE: This is an experimental workflow! If anything seems unclear or missing,
-please stop for consultation with the user.**
-
-You are tasked with creating Towncrier news fragments for user-facing changes
-since the last release cleanup. This command analyzes recent commits and
-generates appropriate changelog entries.
-
-Special instructions: `$ARGUMENTS`
-(If above line is empty, then no special instructions were given by the user.)
-
-## Context
-
-The project uses Towncrier to manage changelogs. News fragments are stored in
-`.auxiliary/data/towncrier/` and follow specific naming and formatting
-conventions detailed in the [releases
-guide](https://raw.githubusercontent.com/emcd/python-project-common/refs/tags/docs-1/documentation/common/releases.rst).
-
-## Process
-
-### Phase 1: Discovery and Analysis
-
-1. **Find Starting Point**: Use `git log --oneline --grep="Clean up news fragments"` to find the last cleanup commit
-2. **Get Recent Commits**: Retrieve all commits since the cleanup using `git log --no-merges` with full commit messages
-3. **Check Existing Fragments**: List existing fragments in `.auxiliary/data/towncrier/` to avoid duplication
-
-### Phase 2: Filtering and Classification
-
-4. **Filter User-Facing Changes**: Focus on changes that affect how users interact with the tool:
- - CLI command changes (new options, arguments, output formats)
- - API changes (public functions, classes, return values)
- - Behavior changes (different responses, error messages, processing)
- - Configuration changes (new settings, file formats)
- - Deprecations and removals
- - Platform support changes (Python versions, OS support)
-
- **Exclude** internal changes:
- - GitHub workflows
- - Dependency changes without API impact
- - Internal module restructuring that preserves public API
- - Git ignore files
- - Modules in internals subpackages (`__`)
- - Version bumps and maintenance updates
- - Internal refactoring without user-visible changes
-
- **Key Test**: Ask "Does this change how a user invokes the tool, what options they have, or what behavior they observe?"
-
-5. **Classify Changes**: Determine appropriate type for each change:
- - `enhance`: features and improvements
- - `notify`: deprecations and notices
- - `remove`: removals of features or support
- - `repair`: bug fixes
-
- Note: Some commits may contain multiple types of changes.
-
-### Phase 3: Synthesis and Creation
-
-6. **Group Related Commits**: Synthesize multiple commits into coherent user-facing descriptions when they represent logical units of change
-
-7. **Think Through Fragments**: Before writing, consider:
- - Are the descriptions clear and meaningful to users?
- - Do they follow the format guidelines?
- - Are they properly classified?
- - Do they focus on what and why, not how?
-
-8. **Create Fragments**: Write appropriately named fragment files using:
-   - `<issue>.<type>.rst` for changes with GitHub issues
-   - `+<name>.<type>.rst` for changes without issues
-
- Fragment content should:
- - Start with capital letter, end with period
- - Use present tense imperative verbs
- - Be understandable by users, not just developers
- - Include topic prefixes when appropriate (e.g., "CLI: ", "API: ")
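For instance, a hypothetical fragment (illustrative name and content, not from this changeset) saved as `+verbose-flag.enhance.rst` might read:

```
CLI: Add ``--verbose`` option for detailed progress reporting.
```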
-
-### Phase 4: Final Review and Commit
-
-9. **Summary**: Provide a brief summary of fragments created and any notable patterns or changes identified
-
-10. **Commit Changes**: Add fragments to git and commit them:
- - `git add .auxiliary/data/towncrier`
- - `git commit -m "Add news fragments for upcoming release"`
-
-## Additional Instructions
-
-- Read full commit messages for context; only examine diff summaries if commit messages are unclear
-- Focus on meaningful user-facing changes rather than comprehensive coverage of all commits
diff --git a/.auxiliary/configuration/claude/commands/cs-architect.md b/.auxiliary/configuration/claude/commands/cs-architect.md
deleted file mode 100644
index 1e3fa4e..0000000
--- a/.auxiliary/configuration/claude/commands/cs-architect.md
+++ /dev/null
@@ -1,102 +0,0 @@
----
-allowed-tools: [Read, Write, Edit, MultiEdit, LS, Glob, Grep, Bash(find:*), Bash(ls:*), Bash(tree:*)]
-description: Architectural analysis, system design decisions, and ADR creation
----
-
-# System Architecture Analysis
-
-Analyze architectural decisions, system design patterns, component
-relationships, and technical trade-offs to provide guidance on high-level
-system structure and cross-component interactions.
-
-Request from user: $ARGUMENTS
-
-## Context
-
-- Product requirements: @documentation/prd.rst
-- Architecture overview: @documentation/architecture/summary.rst
-- Filesystem patterns: @documentation/architecture/filesystem.rst
-- Architecture guidelines: @.auxiliary/instructions/architecture.rst
-- Nomenclature standards: @.auxiliary/instructions/nomenclature.rst
-- Germanic naming variants: @.auxiliary/instructions/nomenclature-germanic.rst
-- Current project state: !`ls documentation/architecture/`
-
-## Prerequisites
-
-Before providing architectural analysis, ensure:
-- Understanding of current system architecture and constraints
-- Familiarity with architectural decision record (ADR) format
-- Knowledge of standard filesystem organization patterns
-- @.auxiliary/instructions/architecture.rst guidelines are followed
-
-## Process Summary
-
-Key functional areas:
-1. **Analysis**: Examine architectural context and design forces
-2. **System Structure**: Define component relationships and system boundaries
-3. **Decision Framework**: Apply architectural principles and trade-off analysis
-4. **Documentation**: Create ADRs or update architectural documentation
-5. **Validation**: Ensure decisions align with project constraints and goals
-
-## Safety Requirements
-
-Stop and consult the user if:
-- Implementation details are requested instead of architectural guidance
-- Specific code changes are needed
-- Requirements analysis is needed
-- Filesystem organization or module structure details are requested
-- Architectural decisions have significant impact on existing system components
-- Decision conflicts with existing architectural patterns or constraints
-- Decision requires changes to fundamental system assumptions
-
-## Execution
-
-Execute the following steps:
-
-### 1. Architectural Context Analysis
-Review current architecture and identify relevant patterns:
-- Examine existing architectural documentation
-- Understand system boundaries and component relationships
-- Identify architectural forces and constraints
-- Assess alignment with project goals and requirements
-
-### 2. Design Forces Assessment
-Analyze the forces driving the architectural decision:
-- Technical constraints (performance, scalability, compatibility)
-- Quality attributes (maintainability, testability, security)
-- Integration requirements with existing components
-- Future flexibility and evolution needs
-
-### 3. Alternative Evaluation
-Consider multiple architectural approaches:
-- Document all seriously considered alternatives
-- Analyze trade-offs for each option (benefits, costs, risks)
-- Consider "do nothing" as a baseline alternative
-- Evaluate alignment with established architectural patterns
-- Assess implementation complexity and maintenance burden
-
-### 4. Decision Recommendation
-Provide clear architectural guidance:
-- State recommended approach with clear rationale
-- Explain how decision addresses the identified forces
-- Document expected positive and negative consequences
-- Include specific architectural patterns or principles applied
-- Provide text-based diagrams or examples when helpful
-
-### 5. Documentation Creation
-When appropriate, create or update architectural documentation:
-- Generate ADRs following the standard format
-- Update `documentation/architecture/decisions/index.rst` to include new ADRs
-- Update architecture summary for significant system changes
-- Ensure consistency with filesystem organization patterns
-- Reference related architectural decisions and dependencies
-
-### 6. Implementation Guidance
-Provide high-level implementation direction without specific code:
-- Suggest component organization and interfaces
-- Recommend integration patterns with existing system
-- Identify key architectural boundaries and abstractions
-- Highlight critical implementation considerations
-
-### 7. Summarize Updates
-Provide concise summary of updates to the user.
diff --git a/.auxiliary/configuration/claude/commands/cs-code-python.md b/.auxiliary/configuration/claude/commands/cs-code-python.md
deleted file mode 100644
index 9026023..0000000
--- a/.auxiliary/configuration/claude/commands/cs-code-python.md
+++ /dev/null
@@ -1,142 +0,0 @@
----
-allowed-tools: [Read, Write, Edit, MultiEdit, LS, Glob, Grep, Bash, TodoWrite, mcp__text-editor__get_text_file_contents, mcp__text-editor__edit_text_file_contents, mcp__ruff__diagnostics, mcp__ruff__edit_file, mcp__ruff__hover, mcp__ruff__references, mcp__ruff__rename_symbol, mcp__ruff__definition, mcp__pyright__diagnostics, mcp__pyright__edit_file, mcp__pyright__hover, mcp__pyright__references, mcp__pyright__rename_symbol, mcp__pyright__definition, mcp__context7__resolve-library-id, mcp__context7__get-library-docs]
-description: Python implementation following established patterns and practices
----
-
-# Python Implementation
-
-Implement Python code following established patterns including functions,
-classes, modules, tests, and refactoring while adhering to project practices
-and style guidelines.
-
-Request from user: $ARGUMENTS
-
-## Context
-
-- Architecture overview: @documentation/architecture/summary.rst
-- Filesystem patterns: @documentation/architecture/filesystem.rst
-- Python practices: @.auxiliary/instructions/practices.rst
-- Code style: @.auxiliary/instructions/style.rst
-- Nomenclature: @.auxiliary/instructions/nomenclature.rst
-- Germanic variants: @.auxiliary/instructions/nomenclature-germanic.rst
-- Design documents: !`ls documentation/architecture/designs/`
-- Current package structure: !`ls sources/`
-
-## Prerequisites
-
-Before implementing Python code, ensure:
-- Understanding of implementation requirements and expected behavior
-- Familiarity with project practices, style, and nomenclature guidelines
-- Knowledge of existing codebase structure and patterns
-- Clear design specifications or existing design documents if referenced
-
-## Process Summary
-
-Key functional areas:
-1. **Requirements Analysis**: Understand implementation requirements and context
-2. **Design Conformance**: Ensure alignment with established patterns and practices
-3. **Implementation**: Write Python code following style guidelines and best practices
-4. **Quality Assurance**: Run linters, type checkers, and tests to validate code
-5. **Documentation**: Provide implementation summary and any necessary documentation
-
-## Safety Requirements
-
-Stop and consult the user if:
-- Design specifications are needed instead of implementation
-- Architectural decisions are required before implementation
-- Requirements are unclear or insufficient for implementation
-- Implementation conflicts with established architectural patterns
-- Code changes would break existing API contracts or interfaces
-- Quality checks reveal significant issues that require design decisions
-- Type checker errors are encountered that cannot be resolved through standard remediation
-- Multiple implementation approaches have significant trade-offs requiring user input
-
-## Execution
-
-Execute the following steps:
-
-### 1. Requirements Analysis
-Analyze implementation requirements and gather context:
-- Review user requirements and any referenced design documents
-- Examine existing codebase structure and relevant modules
-- Identify integration points with existing code
-- Understand expected behavior and edge cases
-- Document implementation scope and constraints
-
-### 2. Design Conformance Checklist
-Ensure implementation aligns with project standards:
-- [ ] Module organization follows practices guidelines (imports → type aliases → defaults → public API → private functions)
-- [ ] Function signatures use wide parameter, narrow return patterns
-- [ ] Type annotations are comprehensive and use proper TypeAlias patterns
-- [ ] Exception handling follows Omniexception → Omnierror hierarchy
-- [ ] Naming follows nomenclature conventions with appropriate linguistic consistency
-- [ ] Immutability preferences are applied where appropriate
-- [ ] Code style follows spacing, vertical compactness, and formatting guidelines
-
-### 3. Implementation
-Write Python code following established patterns:
-- Implement functions, classes, or modules as specified
-- Apply centralized import patterns via `__` subpackage
-- Use proper type annotations with `__.typx.TypeAlias` for complex types
-- Follow style guidelines for spacing, formatting, and structure
-- Implement proper exception handling with narrow try blocks
-- Apply nomenclature patterns for consistent naming
-- Ensure functions are ≤30 lines and modules are ≤600 lines
-
-### 4. Implementation Tracking Checklist
-Track progress against requirements:
-- [ ] All specified functions/classes have been implemented
-- [ ] Required functionality is complete and tested
-- [ ] Integration points with existing code are working
-- [ ] Edge cases and error conditions are handled
-- [ ] Documentation requirements are satisfied
-
-### 5. Quality Assurance
-Validate code quality and conformance following zero-tolerance policy:
-
-#### Linting Validation
-```bash
-hatch --env develop run linters
-```
-All linting issues must be addressed. Do not use `noqa` pragma comments without explicit user approval.
-
-#### Type Checking Validation
-Run type checker and analyze results:
-```bash
-hatch --env develop run linters # Includes Pyright
-```
-
-Type Error Resolution Process:
-1. Code Issues: Fix all type errors in project code immediately
-2. Third-party Stub Issues: If errors are due to missing/incomplete third-party type stubs:
- - Verify package is listed in `pyproject.toml`
- - Rebuild environment: `hatch env prune`
-   - Generate stubs: `hatch --env develop run pyright --createstub`
- - Complete necessary stub definitions
- - Re-run type checker to verify resolution
-
-Stop and consult user if:
-- Type errors cannot be categorized as code issues or third-party stub gaps
-- Stub generation fails or requires extensive manual type definitions
-- Multiple conflicting approaches exist for resolving type issues
-
-#### Test Validation
-```bash
-hatch --env develop run testers
-```
-Ensure all tests pass, including any new tests created.
-
-### 6. Documentation and Summary
-Provide implementation documentation:
-- Document any non-obvious design decisions or trade-offs
-- Create or update relevant docstrings following narrative mood guidelines
-- Note any TODO items for future enhancements
-- Verify alignment with filesystem organization patterns
-
-### 7. Summarize Implementation
-Provide concise summary of what was implemented, including:
-- Functions, classes, or modules created or modified
-- Key design decisions and rationale
-- Integration points and dependencies
-- Quality assurance status: Confirm all linters, type checkers, and tests pass
-- Any remaining tasks or follow-up items
diff --git a/.auxiliary/configuration/claude/commands/cs-conform-python.md b/.auxiliary/configuration/claude/commands/cs-conform-python.md
deleted file mode 100644
index 9b9388d..0000000
--- a/.auxiliary/configuration/claude/commands/cs-conform-python.md
+++ /dev/null
@@ -1,372 +0,0 @@
----
-allowed-tools: Bash(hatch --env develop run:*), Bash(git:*), LS, Read, Glob, Grep, Edit, MultiEdit, Write, WebFetch
-description: Systematically conform Python code to project style and practice standards
----
-
-# Python Code Conformance
-
-For bringing existing Python code into full compliance with project standards.
-
-Target code: `$ARGUMENTS`
-
-Focus on style/practice conformance, not functionality changes.
-
-## Prerequisites
-
-- Read project documentation guides first:
- - @.auxiliary/instructions/practices.rst
- - @.auxiliary/instructions/style.rst
- - @.auxiliary/instructions/nomenclature.rst
-- Understand target files to be conformed
-- Have read `CLAUDE.md` for project-specific guidance
-
-## Context
-
-- Current git status: !`git status --porcelain`
-- Current branch: !`git branch --show-current`
-
-## Execution Structure
-
-**Phase 1: Comprehensive Review**
-Perform complete analysis and generate detailed compliance report before making any changes.
-
-**Phase 2: Systematic Remediation**
-Apply all identified fixes in systematic order, validating with linters after completion.
-
-## Compliance Standards
-
-### Design Standards
-
-#### 1. Module Organization
-
-Content Order:
-1. Imports (following practices guide patterns)
-2. Common type aliases (`TypeAlias` declarations)
-3. Private variables/functions for defaults (grouped semantically)
-4. Public classes and functions (alphabetical)
-5. All other private functions (alphabetical)
-
-Scope and Size:
-- Maximum 600 lines
-- Action: Analyze oversized modules with separation of concerns in mind.
-  Suggest splitting into focused modules with narrower responsibilities.
-
-#### 2. Imports
-
-- At the module level, other modules and their attributes MUST be imported as
- private aliases, except in `__init__`, `__`, or specially-designated
- re-export modules.
-- Within function bodies, other modules and their attributes MAY be imported as
- public variables.
-- Subpackages SHOULD define a special `__` re-export module, which has `from
- ..__ import *` plus any other imports which are common to the subpackage.
-- Common modules, such as `os` or `re`, SHOULD be imported as public within the
- special package-wide `__.imports` re-export module rather than as private
- aliases within an implementation module.
-- The `__all__` attribute SHOULD NOT be provided. This is unnecessary if the
-  module namespace only contains public classes and functions which are part of
-  its interface; this avoids additional interface maintenance.
-
-#### 3. Dependency Injection
-
-- Ask: is this function testable without monkeypatching?
-- Functions SHOULD provide injectable parameters with sensible defaults instead
- of hard-coded dependencies within function implementation.
-
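-A minimal sketch of the injectable-dependency pattern (the function and
-parameter names here are hypothetical, used only for illustration; `__.cabc`
-follows the project's `__` re-export convention):
-
-```python
-def report_progress(
-    message: str,
-    emitter: __.cabc.Callable[ [ str ], None ] = print,
-) -> None:
-    ''' Reports progress via injectable emitter. '''
-    # Tests can inject a recording callable; no monkeypatching needed.
-    emitter( message )
-```
-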
-#### 4. Robustness Principle (Postel's Law)
-"Be conservative in what you send; be liberal in what you accept."
-
-- Public functions SHOULD define wide, abstract argument types.
-- All functions SHOULD define narrow, concrete return types.
-- Private functions MAY define narrow, concrete argument types.
-
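-A hedged sketch of wide-in, narrow-out typing (the function name is
-hypothetical; `__.cabc` follows the project's `__` re-export convention):
-
-```python
-def collect_labels(
-    sources: __.cabc.Iterable[ str ],
-) -> tuple[ str, ... ]:
-    ''' Collects unique labels in first-seen order. '''
-    # Wide abstract argument type; narrow concrete immutable return.
-    seen: dict[ str, None ] = { }
-    for source in sources: seen.setdefault( source )
-    return tuple( seen )
-```
-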
-#### 5. Immutability
-
-- Classes SHOULD inherit from immutable classes (`__.immut.Object`,
- `__.immut.Protocol`, `__.immut.DataclassObject`, etc...).
-- Functions SHOULD return values of immutable types (`None`, `int`, `tuple`,
- `frozenset`, `__.immut.Dictionary`, etc...) and not mutable types (`list`,
- `dict`, `set`, etc...).
-
-#### 6. Proper Exception Management
-
-- One `try .. except` suite per statement which can raise exceptions. I.e.,
- avoid covering multiple statements with a `try` block whenever possible.
-- Tryceratops complaints MUST NOT be suppressed with `noqa` pragmas.
-- Bare exceptions SHOULD NOT be raised.
- - Exemption: `NotImplementedError` MAY be raised as a bare exception.
- - Relevant exception classes SHOULD be used from the relevant `exceptions`
- module within the package.
- - New exception classes MAY be created as needed within the relevant
- `exceptions` module; these MUST follow the nomenclature guide and be
- inserted in correct alphabetical order.
-
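-One possible shape for a narrow `try` block with exception chaining (a hedged
-sketch: `_exceptions` stands for a privately-aliased package `exceptions`
-module, `DataDecodeFailure` is a hypothetical exception class, and `__.json`
-assumes `json` is re-exported via the package-wide `__` module):
-
-```python
-def _parse_record( text: str ) -> __.immut.Dictionary[ str, __.typx.Any ]:
-    ''' Parses record from JSON text. '''
-    # Only the statement which can raise is covered by the try block.
-    try: data = __.json.loads( text )
-    except ValueError as exc:
-        raise _exceptions.DataDecodeFailure( text ) from exc
-    return __.immut.Dictionary( data )
-```
-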
-### Quality Assurance
-
-#### 1. Linter Suppressions
-
-- Linter suppressions MUST be reviewed critically.
-- Linter complaints SHOULD NOT be suppressed via `noqa` or `type` pragmas
- without compelling justification.
-- Suppressions that mask design problems MUST be investigated and resolved
- rather than ignored.
-
-Acceptable Suppressions:
-- `noqa: PLR0913` MAY be used for a CLI or service API with many parameters,
- but data transfer objects SHOULD be considered in most other cases.
-- `noqa: S*` MAY be used for properly constrained and vetted subprocess
- executions or Internet content retrievals.
-
-Unacceptable Suppressions (require investigation):
-- `type: ignore` MUST NOT be used, except in extremely rare circumstances. Such
- suppressions usually indicate missing third-party dependencies or type stubs,
- inappropriate type variables, or a bad inheritance pattern.
-- `__.typx.cast` SHOULD NOT be used, except in extremely rare circumstances.
-  Such casts suppress normal type checking and usually indicate the same
-  problems as `type: ignore`.
-- Most other `noqa` suppressions.
-
-### Style Standards
-
-#### 1. Spacing and Delimiters
-
-- Space padding MUST be present inside delimiters.
- - Format: `( arg )`, `[ item ]`, `{ key: value }`
- - Format: `( )`, `[ ]`, `{ }`, not `()`, `[]`, `{}`
-- Space padding MUST be present around keyword argument `=`.
- - Format: `foo = 42`
-
-#### 2. Strings
-
-- Docstrings MUST use triple single quotes with narrative mood.
- - Format: `''' Processes data... '''` not `"""Process data..."""`
-- F-strings and `.format` strings MUST be enclosed in double quotes.
- - Format: `f"text {variable}"`, not `f'text {variable}'`
- - Format: `"text {count}".format( count = len( items ) )`
-- F-strings and format strings MUST NOT embed function calls.
-- Exception messages and log messages SHOULD be enclosed in double quotes
- rather than single quotes.
-- Plain data strings SHOULD be enclosed in single quotes, unless they contain
- single quotes.
-
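-A brief sketch combining these quoting rules (the function is hypothetical;
-`__.cabc` follows the project's `__` re-export convention):
-
-```python
-def describe( items: __.cabc.Sized ) -> str:
-    ''' Describes container size. '''
-    label = 'container'  # Plain data string: single quotes.
-    count = len( items )  # F-strings must not embed function calls.
-    return f"{label} holds {count} items"  # F-string: double quotes.
-```
-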
-#### 3. Vertical Compactness
-
-- Blank lines MUST NOT appear within function bodies.
-- Vertical compactness MUST be maintained within function implementations.
-- Single-line statements MAY follow certain block keywords on the same line
- when appropriate.
- - Format: `if condition: return value`
- - Format: `elif condition: continue`
- - Format: `else: statement`
- - Format: `try: statement`
-
-#### 4. Multi-line Constructs
-
-- Function invocations, including class instantiations, SHOULD place the
- closing `)` on the same line as the last argument to the function.
-- The last argument of an invocation MUST NOT be followed by a trailing comma.
-- Comprehensions and generator expressions SHOULD place the closing delimiter
- on the same line as the last statement in the comprehension or generator
- expression.
-- Parenthetical groupings SHOULD place the closing delimiter on the same line
- as the last statement in the grouping.
-- All other multi-line constructs (function signatures, annotations, lists,
-  dictionaries, etc...) MUST place the closing delimiter on a separate line
-  following the last item and MUST dedent the closing delimiter to match the
-  opening line indentation.
-- If a closing delimiter is not on the same line as the last item in a
- multi-line construct, then the last item MUST be followed by a trailing
- comma.
-
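-A short sketch contrasting the two closing-delimiter styles (all names are
-hypothetical):
-
-```python
-matrix = compute_matrix(
-    rows, columns, fill = 0 )  # Invocation: ')' on last-argument line.
-labels = [
-    'alpha',
-    'beta',
-]  # Other constructs: trailing comma, dedented closing delimiter.
-```
-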
-#### 5. Nomenclature
-
-- Argument, attribute, and variable names SHOULD NOT be compound words
-  separated by underscores, except in cases where this is necessary to
-  disambiguate.
-- Argument and variable names SHOULD NOT duplicate parts of the function name.
-- Attribute names SHOULD NOT duplicate parts of the class name.
-- Class names SHOULD adhere to the nomenclature guide.
-- Function names SHOULD adhere to the nomenclature guide.
-
-#### 6. Comments
-
-- Comments that describe obvious behavior SHOULD NOT be included.
-- TODO comments SHOULD be added for uncovered edge cases and future work.
-- Comments MUST add meaningful context, not restate what the code does.
-
-### Comprehensive Example: Real-World Function with Multiple Violations
-
-Here is a function that demonstrates many compliance violations:
-
-```python
-def _group_documents_by_field(
- documents: list[ dict[ str, __.typx.Any ] ],
- field_name: __.typx.Optional[ str ]
-) -> dict[ str, list[ dict[ str, __.typx.Any ] ] ]:
- ''' Groups documents by specified field for inventory format compatibility.
- '''
- if field_name is None:
- return { }
-
- groups: dict[ str, list[ dict[ str, __.typx.Any ] ] ] = { }
- for doc in documents:
- # Get grouping value, with fallback for missing field
- group_value = doc.get( field_name, f'(missing {field_name})' )
- if isinstance( group_value, ( list, dict ) ):
- # Handle complex field types by converting to string
- group_value = str( group_value ) # type: ignore[arg-type]
- elif group_value is None or group_value == '':
- group_value = f'(missing {field_name})'
- else:
- group_value = str( group_value )
-
- if group_value not in groups:
- groups[ group_value ] = [ ]
-
- # Convert document format back to inventory object format
- inventory_obj = {
- 'name': doc[ 'name' ],
- 'role': doc[ 'role' ],
- 'domain': doc.get( 'domain', '' ),
- 'uri': doc[ 'uri' ],
- 'dispname': doc[ 'dispname' ]
- }
- if 'fuzzy_score' in doc:
- inventory_obj[ 'fuzzy_score' ] = doc[ 'fuzzy_score' ]
- groups[ group_value ].append( inventory_obj )
- return groups
-```
-
-Violations identified:
-1. **Narrow parameter types**: `list[dict[...]]` instead of wide `__.cabc.Sequence[__.cabc.Mapping[...]]`
-2. **Type suppression abuse**: `# type: ignore[arg-type]` masks real design issue
-3. **Mutable container return**: Returns `dict` instead of `__.immut.Dictionary`
-4. **Function body blank lines**: Empty lines breaking vertical compactness
-5. **Vertical compactness**: `return { }` could be same line as `if`
-6. **Unnecessary comments**: "Handle complex field types by converting to string" states obvious
-7. **F-string quotes**: Using single quotes in f-strings instead of double
-8. **Nomenclature duplication**: `group_value` repeats "group" from function name
-9. **Underscore nomenclature**: `field_name` could be `field`, `group_value` could be `value`
-10. **Mutable container creation**: Using `{ }` and `[ ]` instead of immutable alternatives
-11. **Trailing comma**: Missing trailing comma in dictionary, affecting delimiter placement
-12. **Single-line else**: `group_value = str(group_value)` could be same line as `else`
-13. **Design pattern**: Could use `collections.defaultdict` instead of manual initialization
-
-Corrected version:
-```python
-def _group_documents_by_field(
- documents: __.cabc.Sequence[ __.cabc.Mapping[ str, __.typx.Any ] ],
- field: __.typx.Absential[ str ] = __.absent,
-) -> __.immut.Dictionary[
- str, tuple[ __.cabc.Mapping[ str, __.typx.Any ], ... ]
-]:
- ''' Groups documents by specified field. '''
- if __.is_absent( field ): return __.immut.Dictionary( )
- groups = __.collections.defaultdict( list )
- for doc in documents:
- value = doc.get( field, f"(missing {field})" )
- if isinstance( value, ( list, dict ) ): value = str( value )
- elif value is None or value == '': value = f"(missing {field})"
- else: value = str( value )
- obj = __.immut.Dictionary(
- name = doc[ 'name' ],
- role = doc[ 'role' ],
- domain = doc.get( 'domain', '' ),
- uri = doc[ 'uri' ],
- dispname = doc[ 'dispname' ],
- **( { 'fuzzy_score': doc[ 'fuzzy_score' ] }
- if 'fuzzy_score' in doc else { } ) )
- groups[ value ].append( obj )
- return __.immut.Dictionary(
- ( key, tuple( items ) ) for key, items in groups.items( ) )
-```
-
-## Review Report Format
-
-Phase 1 Output:
-1. **Compliance Summary**: Overall assessment with file-by-file breakdown
-2. **Standards Violations**: Categorized list with specific line references and explanations
-3. **Complexity Analysis**: Function and module size assessments
-4. **Remediation Plan**: Systematic order of fixes to be applied
-5. **Risk Assessment**: Any changes that require careful validation
-
-Phase 2 Output:
-1. **Applied Fixes**: Summary of all changes made, categorized by standard
-2. **Validation Results**: Linter output before and after changes
-3. **Files Modified**: Complete list with brief description of changes
-4. **Manual Review Required**: Any issues requiring human judgment
-
-## Tool Preferences
-
-- **Precise coordinates**: Use `rg --line-number --column` for exact line/column positions
-- **File editing**: Prefer `text-editor` MCP tools for line-based edits to avoid conflicts
-- **File synchronization**: Always reread files with `text-editor` tools after modifications by other tools (like `pyright` or `ruff`)
-- **Batch operations**: Group related changes together to minimize file modification conflicts between different MCP tools
-
-## Conformance Process
-
-### 1. Analysis Phase (PHASE 1)
-- Examine target files to understand current state
-- Run linters to identify specific violations
-- Identify architectural patterns that need updating
-- Generate comprehensive compliance report
-- **Requirements**: Complete review and report before any remediation
-- **Focus**: Reference specific lines with concrete examples and explain reasoning
-
-### 2. Systematic Correction (PHASE 2)
-Apply fixes in systematic order:
-1. **Module Organization**: Reorder imports, type aliases, functions per practices guide
-2. **Wide/Narrow Types**: Convert function parameters to wide abstract types
-3. **Import Cleanup**: Remove namespace pollution, use private aliases and __ subpackage
-4. **Type Annotations**: Add missing hints, create `TypeAlias` for complex types
-5. **Exception Handling**: Narrow try block scope, ensure proper chaining
-6. **Immutability**: Replace mutable with immutable containers where appropriate
-7. **Spacing/Delimiters**: Fix `( )`, `[ ]`, `{ }` patterns
-8. **Docstrings**: Triple single quotes, narrative mood, proper spacing
-9. **Line Length**: Split at 79 columns using parentheses
-
-**Requirements**:
-- Maintain exact functionality while improving standards adherence
-- Validate with `hatch --env develop run linters` (must produce clean output)
-- Run `hatch --env develop run testers` to ensure no functionality breaks
-
-## Safety Requirements
-
-Stop and consult if:
-- Linters reveal complex architectural issues
-- Changes would alter functionality
-- Type annotations conflict with runtime behavior
-- Import changes break dependencies
-- Tests start failing
-
-Your responsibilities:
-- Maintain exact functionality while improving practices/style
-- Use project patterns consistently per the guides
-- Reference all three guides for complex cases
-- Verify all changes with linters and tests
-
-## Success Criteria
-
-- [ ] All linting violations resolved
-- [ ] Module organization follows practices guide structure
-- [ ] Function parameters use wide abstract types
-- [ ] Imports avoid namespace pollution
-- [ ] Type annotations comprehensive with `TypeAlias` usage
-- [ ] Exception handling uses narrow try blocks
-- [ ] Immutable containers used where appropriate
-- [ ] No functionality changes
-- [ ] Tests continue to pass
-- [ ] Code follows all style guide patterns
-
-**Note**: Always run full validation (`hatch --env develop run linters && hatch
---env develop run testers`) before considering the task complete.
-
-## Final Report
-
-Upon completion, provide a brief report covering:
-- Specific conformance issues corrected (categorized by the priority issues above)
-- Number of files modified
-- Any patterns that required manual intervention
-- Linter status before/after
-- Any deviations from guides and justification
diff --git a/.auxiliary/configuration/claude/commands/cs-conform-toml.md b/.auxiliary/configuration/claude/commands/cs-conform-toml.md
deleted file mode 100644
index d0f53c7..0000000
--- a/.auxiliary/configuration/claude/commands/cs-conform-toml.md
+++ /dev/null
@@ -1,280 +0,0 @@
----
-allowed-tools: Bash(git:*), LS, Read, Glob, Grep, Edit, MultiEdit, Write
-description: Systematically conform TOML files to project style and practice standards
----
-
-# TOML Configuration Conformance
-
-For bringing existing TOML configuration files into full compliance with project standards.
-
-Target files: `$ARGUMENTS`
-
-Focus on style/practice conformance, not functionality changes.
-
-## Prerequisites
-
-- Read project documentation guides first:
- - @documentation/common/practices.rst (TOML section)
- - @documentation/common/style.rst (TOML section)
- - @documentation/common/nomenclature.rst
-- Understand target files to be conformed
-- Have read `CLAUDE.md` for project-specific guidance
-
-## Context
-
-- Current git status: !`git status --porcelain`
-- Current branch: !`git branch --show-current`
-
-## Execution Structure
-
-**Phase 1: Comprehensive Review**
-Perform complete analysis and generate detailed compliance report before making any changes.
-
-**Phase 2: Systematic Remediation**
-Apply all identified fixes in systematic order, validating changes after completion.
-
-## Compliance Standards
-
-### Configuration Design Standards
-
-#### 1. Table Organization
-
-- Prefer table arrays with `name` fields over proliferating custom subtables.
-- Table arrays scale better and reduce configuration complexity.
-
-**❌ Avoid - custom subtables:**
-```toml
-[database]
-host = 'localhost'
-
-[database.primary]
-port = 5432
-timeout = 30
-
-[database.replica]
-port = 5433
-timeout = 15
-```
-
-**✅ Prefer - table arrays with name field:**
-```toml
-[[database]]
-name = 'primary'
-host = 'localhost'
-port = 5432
-timeout = 30
-
-[[database]]
-name = 'replica'
-host = 'localhost'
-port = 5433
-timeout = 15
-```
-
-#### 2. Key Naming Conventions
-
-- Use hyphens instead of underscores in key names for better ergonomics.
-- Apply nomenclature guidelines to key and table names.
-- Use Latin-derived words when they are the established norm in the domain.
-
-**❌ Avoid:**
-```toml
-max_connections = 100
-retry_count = 3
-database_url = 'postgresql://localhost/db'
-```
-
-**✅ Prefer:**
-```toml
-max-connections = 100
-retry-count = 3
-database-url = 'postgresql://localhost/db'
-```
-
-### Style Standards
-
-#### 1. String Values
-
-- Use single quotes for string values unless escapes are needed.
-- Use double quotes when escapes are required.
-- Use triple single quotes for multi-line strings (consistency with Python docstrings).
-
-**❌ Avoid:**
-```toml
-name = "example-service"
-description = "A service for processing data"
-pattern = "user-.*"
-```
-
-**✅ Prefer:**
-```toml
-name = 'example-service'
-description = 'A service for processing data'
-pattern = 'user-.*'
-
-# Use double quotes when escapes are needed
-windows-path = "C:\\Program Files\\Example"
-message = "Line 1\nLine 2"
-
-# Use triple single quotes for multi-line strings
-description = '''
-This is a longer description
-that spans multiple lines.
-'''
-```
-
-#### 2. Array and Table Formatting
-
-- Keep arrays and inline tables on single lines when they fit within reasonable length.
-- For longer arrays, place each element on its own line with proper indentation.
-
-**✅ Prefer:**
-```toml
-ports = [ 8080, 8443, 9090 ]
-database = { host = 'localhost', port = 5432 }
-
-# For longer arrays
-allowed-origins = [
- 'https://example.com',
- 'https://api.example.com',
- 'https://admin.example.com',
-]
-```
-
-### Comprehensive Example: Configuration with Multiple Violations
-
-Here is a TOML configuration that demonstrates many compliance violations:
-
-```toml
-[server_config]
-host_name = "localhost"
-port_number = 8080
-max_connections = 100
-
-[server_config.database_primary]
-host = "localhost"
-port = 5432
-connection_timeout = 30
-retry_attempts = 3
-
-[server_config.database_replica]
-host = "localhost"
-port = 5433
-connection_timeout = 15
-retry_attempts = 2
-
-allowed_hosts = ["https://example.com", "https://api.example.com", "https://admin.example.com"]
-
-description = "This is a multi-line description that explains what this service does and how it should be configured."
-```
-
-Violations identified:
-1. **Underscore key names**: `server_config`, `host_name`, `port_number`, `max_connections` should use hyphens
-2. **Custom subtables**: `[server_config.database_primary]` and `[server_config.database_replica]` should be table arrays
-3. **Double quotes**: String values using double quotes without escapes needed
-4. **Array formatting**: Long array on single line should be split across multiple lines
-5. **Multi-line string**: Long description should use triple single quotes
-
-Corrected version:
-```toml
-[[server-config]]
-name = 'main'
-host-name = 'localhost'
-port-number = 8080
-max-connections = 100
-
-[[database]]
-name = 'primary'
-host = 'localhost'
-port = 5432
-connection-timeout = 30
-retry-attempts = 3
-
-[[database]]
-name = 'replica'
-host = 'localhost'
-port = 5433
-connection-timeout = 15
-retry-attempts = 2
-
-allowed-hosts = [
- 'https://example.com',
- 'https://api.example.com',
- 'https://admin.example.com',
-]
-
-description = '''
-This is a multi-line description that explains what this service does
-and how it should be configured.
-'''
-```
-
-## Review Report Format
-
-Phase 1 Output:
-1. **Compliance Summary**: Overall assessment with file-by-file breakdown
-2. **Standards Violations**: Categorized list with specific line references and explanations
-3. **Configuration Analysis**: Table organization and key naming assessments
-4. **Remediation Plan**: Systematic order of fixes to be applied
-5. **Risk Assessment**: Any changes that require careful validation
-
-Phase 2 Output:
-1. **Applied Fixes**: Summary of all changes made, categorized by standard
-2. **Files Modified**: Complete list with brief description of changes
-3. **Manual Review Required**: Any issues requiring human judgment
-
-## Conformance Process
-
-### 1. Analysis Phase (PHASE 1)
-- Examine target files to understand current state
-- Identify configuration design patterns that need updating
-- Generate comprehensive compliance report
-- **Requirements**: Complete review and report before any remediation
-- **Focus**: Reference specific lines with concrete examples and explain reasoning
-
-### 2. Systematic Correction (PHASE 2)
-Apply fixes in systematic order:
-1. **Key Naming**: Convert underscores to hyphens in key names
-2. **Table Organization**: Convert custom subtables to table arrays with `name` fields
-3. **String Quoting**: Change double quotes to single quotes (unless escapes needed)
-4. **Multi-line Strings**: Convert to triple single quotes format
-5. **Array Formatting**: Split long arrays across multiple lines with proper indentation
-6. **Nomenclature**: Apply naming guidelines to keys and table names
-
-**Requirements**:
-- Maintain exact functionality while improving standards adherence
-- Validate that configuration files remain syntactically valid
-- Preserve all semantic meaning of configuration values
-
-## Safety Requirements
-
-Stop and consult if:
-- Configuration structure changes would alter application behavior
-- Complex nested configurations require architectural decisions
-- File contains domain-specific conventions that conflict with general guidelines
-- Syntax errors occur during modification
-
-Your responsibilities:
-- Maintain exact functionality while improving practices/style
-- Use project patterns consistently per the guides
-- Reference TOML documentation guides for complex cases
-- Verify all changes preserve configuration semantics
-
-## Success Criteria
-
-- [ ] All key names use hyphens instead of underscores
-- [ ] Custom subtables converted to table arrays where appropriate
-- [ ] String values use single quotes (double only when escapes needed)
-- [ ] Multi-line strings use triple single quotes
-- [ ] Long arrays are properly formatted across multiple lines
-- [ ] Nomenclature guidelines applied to keys and table names
-- [ ] No functionality changes to configuration behavior
-- [ ] Files remain syntactically valid TOML
-
-## Final Report
-
-Upon completion, provide a brief report covering:
-- Specific conformance issues corrected (categorized by the priority issues above)
-- Number of files modified
-- Any patterns that required manual intervention
-- Any deviations from guides and justification
\ No newline at end of file
diff --git a/.auxiliary/configuration/claude/commands/cs-create-command.md b/.auxiliary/configuration/claude/commands/cs-create-command.md
deleted file mode 100644
index d7ba98b..0000000
--- a/.auxiliary/configuration/claude/commands/cs-create-command.md
+++ /dev/null
@@ -1,108 +0,0 @@
----
-allowed-tools: Write, Read, LS
-description: Generate a new custom slash command with consistent structure and formatting
----
-
-# Generate Slash Command
-
-Generate a new custom slash command following established patterns for structure, tone, and formatting.
-
-Target: $ARGUMENTS
-
-**IMPORTANT**: You are creating slash commands for other Claude instances to execute. They will have no knowledge of:
-- The concept of "arguments" being passed to slash commands
-- The ARGUMENTS variable or its expansion
-- The meta-context of slash command generation
-
-When creating content, avoid using the word "command" in titles or explanations; use terms like "process", "workflow", or "task" instead.
-
-Your job is to interpret the user's request and create a complete, self-contained slash command.
-
-## Input Interpretation
-
-The user's request may take various forms:
-- Simple: `cs-analyze-performance`
-- Descriptive: `Named cs-inquire.md with a process outlined in .auxiliary/notes/inquire-command.md`
-- Reference-based: `Based on .auxiliary/notes/summarize-project-command.md`
-- Complex: `cs-update-deps that checks package.json and updates dependencies safely`
-
-Extract from the user's input:
-1. **Filename** (must start with `cs-`)
-2. **Purpose/functionality** (from description or referenced files)
-3. **Special requirements** (referenced processes, specific tools needed)
-
-## Context
-
-- Current custom commands: !`ls .claude/commands/cs-*.md 2>/dev/null || echo "No cs-* commands found"`
-- Referenced files (if any): Check for existence and read as needed
-- Command template: @.auxiliary/configuration/claude/miscellany/command-template.md
-
-## Prerequisites
-
-Before creating the slash command, ensure:
-- Clear understanding of the intended purpose
-- Filename follows `cs-*` naming pattern
-- No existing file with the same name
-- Any referenced process files are accessible
-
-## Generation Process
-
-### 1. Analyze User Request
-
-From the user's input, determine:
-- **Filename** (extract `cs-*.md` name)
-- **Purpose** (what should the generated slash command accomplish)
-- **Required tools** (based on functionality)
-- **Process details** (read any referenced files for specifics)
-
-### 2. Read Template Structure
-
-Read the template to get the base structure, then customize:
-- Replace placeholder content with appropriate descriptions
-- Customize sections based on purpose
-- Select appropriate allowed-tools
-- Add relevant @-references if applicable
-- Add checklists to sections if applicable
-
-### 3. Apply Formatting Standards
-
-**Professional Tone:**
-- Avoid making everything critical or important; no excessive
- attention-grabbing
-- Avoid excessive emphasis (no all-caps headers, minimal bold text)
-- Professional headers: `## Prerequisites` not `## MANDATORY PREREQUISITES`
-- Use "Stop and consult" for when user input should be solicited
-
-**Structure:**
-- Include Prerequisites section early in document
-- Include Context section with command expansions (exclamation point followed
- by command in backticks) for dynamic info when needed
-- Use @-references for local documentation when applicable
-- Provide clear Process Summary before detailed steps
-- Include Safety Requirements section for error handling
-
-### 4. Tool Selection
-
-Choose appropriate allowed-tools based on functionality:
-
-**Common tool combinations:**
-- **File operations**: `Write, Read, Edit, MultiEdit, LS, Glob, Grep`
-- **Git operations**: `Bash(git status), Bash(git add:*), Bash(git commit:*), Bash(git push:*)`
-- **Python development**: `Bash(hatch --env develop run:*), Bash(pytest:*), Bash(ruff:*)`
-- **GitHub operations**: `Bash(gh run list:*), Bash(gh run watch:*), Bash(gh pr create:*)`
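
As an illustration, a hypothetical `cs-release.md` command combining git and GitHub operations might declare frontmatter like this (the command name and description are invented for this example):

```yaml
---
allowed-tools: [Read, LS, Glob, Grep, Bash(git status), Bash(git add:*), Bash(git commit:*), Bash(gh run list:*), Bash(gh run watch:*)]
description: Prepare and monitor a release commit
---
```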
-
-### 5. Generate and Write File
-
-1. **Read the template** from `.auxiliary/configuration/claude/miscellany/command-template.md`
-2. **Customize all sections** based on the specific purpose
-3. **Replace placeholders** with appropriate content for the target functionality
-4. **Write the final file** to `.claude/commands/[filename].md`
-
-
-### 6. Validation and Summary
-
-After generation:
-- Verify file structure matches established patterns
-- Check that allowed-tools are appropriate for the functionality
-- Ensure professional tone throughout (no excessive attention-grabbing, etc.)
-- Confirm all required sections are present and customized
-- Provide succinct summary of changes made to the user
diff --git a/.auxiliary/configuration/claude/commands/cs-design-python.md b/.auxiliary/configuration/claude/commands/cs-design-python.md
deleted file mode 100644
index 79d3921..0000000
--- a/.auxiliary/configuration/claude/commands/cs-design-python.md
+++ /dev/null
@@ -1,111 +0,0 @@
----
-allowed-tools: [Read, Write, Edit, MultiEdit, LS, Glob, Grep, WebFetch, WebSearch, Bash(ls:*), Bash(find:*), Bash(tree:*), mcp__context7__resolve-library-id, mcp__context7__get-library-docs]
-description: Python API design, filesystem organization, module structure, and interface specifications
----
-
-# Python Design Analysis
-
-Analyze Python API design patterns, filesystem organization, module structure, class hierarchies, interface definitions, and design patterns to provide guidance on Python-specific structural decisions and project organization.
-
-Request from user: $ARGUMENTS
-
-## Context
-
-- Architecture overview: @documentation/architecture/summary.rst
-- Filesystem patterns: @documentation/architecture/filesystem.rst
-- Python practices: @.auxiliary/instructions/practices.rst
-- Code style: @.auxiliary/instructions/style.rst
-- Nomenclature: @.auxiliary/instructions/nomenclature.rst
-- Germanic variants: @.auxiliary/instructions/nomenclature-germanic.rst
-- Design documents: !`ls documentation/architecture/designs/`
-
-## Prerequisites
-
-Before providing design analysis, ensure:
-- Understanding of current module organization and class hierarchies
-- Familiarity with Python practices and style guidelines
-- Knowledge of nomenclature conventions and naming patterns
-- @.auxiliary/instructions/practices.rst patterns are followed
-
-## Process Summary
-
-Key functional areas:
-1. **Design Analysis**: Examine current Python structure and design patterns
-2. **Interface Specification**: Define clean API boundaries and contracts
-3. **Module Organization**: Apply filesystem and import patterns effectively
-4. **Class Design**: Create maintainable hierarchies and interface patterns
-5. **Documentation**: Specify design decisions with examples and rationale
-
-## Safety Requirements
-
-Stop and consult the user if:
-- Architectural decisions, implementation details, or requirements analysis
-  are needed instead of design specifications
-- User requests actual code implementations instead of specifications
-- Design decisions require architectural changes beyond Python structure
-- Interface changes would break existing API contracts significantly
-- Design conflicts with established filesystem organization patterns
-- Requirements are unclear or insufficient for proper design specification
-- Multiple design approaches have significant trade-offs requiring user input
-
-## Execution
-
-Execute the following steps:
-
-### 1. Current Design Analysis
-Examine existing Python structure and patterns:
-- Review current module organization and import patterns
-- Analyze existing class hierarchies and interface definitions
-- Identify design patterns currently in use
-- Assess alignment with practices and nomenclature guidelines
-- Document current design strengths and improvement opportunities
-
-### 2. Interface Specification
-Define clean API boundaries and contracts following practices guidelines:
-- All function and class signatures must follow @.auxiliary/instructions/practices.rst patterns exactly
-- Specify public interfaces using wide parameter, narrow return patterns (e.g., __.cabc.Sequence, __.cabc.Mapping for inputs)
-- Return narrow concrete types (list, dict, tuple, __.immut.Dictionary for outputs)
-- Design class hierarchies following Omniexception → Omnierror patterns
-- Apply appropriate naming conventions from nomenclature guidelines
-- Define type annotations using proper TypeAlias patterns with __.typx.TypeAlias
-- Consider immutability preferences and container design patterns
-
-### 3. Filesystem and Module Organization Design
-Apply Python-specific organizational patterns and filesystem structure:
-- Design project filesystem organization and update filesystem.rst as needed
-- Design module structure following the standard organization order
-- Plan `__` subpackage integration for centralized imports
-- Specify exception hierarchies and their organization
-- Design interface patterns for different component types
-- Plan type alias organization and dependency management
-
-### 4. Class and Function Design
-Create maintainable Python structures following practices guide exactly:
-- Design class hierarchies with appropriate base classes and mixins (__.immut.Object, __.immut.Protocol, etc.)
-- Specify function signatures using practices guide patterns (wide inputs, narrow outputs, proper spacing)
-- Apply nomenclature patterns for methods, attributes, and functions from nomenclature guidelines
-- Design immutable data structures and container patterns
-- Plan dependency injection and configuration patterns with sensible defaults
-
-### 5. Design Documentation
-Create comprehensive design specifications without implementations:
-- Generate design documents following established format
-- Update `documentation/architecture/designs/index.rst` to include new designs
-- Provide only signatures, contracts, and interface specifications - no implementations
-- Do not provide exception class implementations, function bodies, or method implementations
-- Document interface contracts and expected behaviors (contracts only, not code)
-- Provide design examples using signatures and type annotations only
-- Specify exception handling patterns and error propagation (exception classes by name/signature only)
-- Document design rationale and trade-off decisions
-
-### 6. Design Validation
-Ensure design quality and consistency:
-- Verify alignment with practices, style, and nomenclature guidelines
-- Check consistency with filesystem organization patterns
-- Validate that wide parameter/narrow return patterns are followed
-- Ensure proper separation between public and private interfaces
-- Confirm that design supports expected usage patterns and extensibility
-
-### 7. Summarize Updates
-Provide concise summary of updates to the user.
\ No newline at end of file
diff --git a/.auxiliary/configuration/claude/commands/cs-develop-pytests.md b/.auxiliary/configuration/claude/commands/cs-develop-pytests.md
deleted file mode 100644
index 08798f7..0000000
--- a/.auxiliary/configuration/claude/commands/cs-develop-pytests.md
+++ /dev/null
@@ -1,239 +0,0 @@
----
-allowed-tools: Bash(hatch --env develop run:*), Bash(git status), Bash(git log:*), Bash(echo:*), Bash(ls:*), Bash(find:*), LS, Read, Glob, Grep, Write, Edit, MultiEdit, WebFetch
-description: Implement comprehensive Python tests following an existing test plan and project guidelines
----
-
-# Implement Python Tests
-
-For systematic test implementation following a pre-created test plan and project testing guidelines.
-
-Test plan path or special test-writing instructions: $ARGUMENTS
-
-Implement tests according to the provided test plan only.
-
-## Context
-
-- Current git status: !`git status --porcelain`
-- Current branch: !`git branch --show-current`
-- Test plan to implement: !`ls "$ARGUMENTS" 2>/dev/null && echo "Present" || echo "Missing"`
-- Existing test structure: !`find tests -name "*.py" | head -20`
-- Test organization: @documentation/architecture/testplans/summary.rst
-- Test plans index: @documentation/architecture/testplans/index.rst
-
-## Prerequisites
-
-Ensure that you:
-- Have a valid test plan document
-- Have verified access to target code modules referenced in the plan
-- Have read any relevant `CLAUDE.md` file
-- Understand the test-writing guidelines: @.auxiliary/instructions/tests.rst
-
-## Testing Principles (from project guidelines)
-
-**Core Principles:**
-1. **Dependency Injection Over Monkey-Patching**: Use injectable dependencies
- for testability
-2. **Performance-Conscious**: Prefer in-memory filesystems (pyfakefs) over temp
- directories
-3. **Avoid Monkey-Patching**: Never patch internal code; use dependency
- injection instead
-4. **100% Coverage Goal**: Aim for complete line and branch coverage
-5. **Test Behavior, Not Implementation**: Focus on observable behavior and
- contracts
-
-**Anti-Patterns to Avoid:**
-- Monkey-patching internal code (will fail with immutable objects)
-- Excessive mocking of internal components
-- Testing implementation details vs. behavior
-- Using temp directories when pyfakefs suffices
-
-**Organization:**
-- Follow the systematic numbering conventions detailed in the test guidelines
-
-## Safety Requirements
-
-Stop and consult the user if:
-- No test plan path is provided
-- Test plan cannot be read or is invalid
-- Plan conflicts with project testing principles
-- Implementation deviates from plan without justification
-- Implementation cannot follow the test plan as specified
-- Plan requires tests that violate project principles
-- Tests require monkey-patching internal code
-- Planned test numbering clashes with existing conventions
-- Required test fixtures or dependencies are unavailable
-- Test plan contains contradictions or unclear instructions
-
-**Your responsibilities:**
-- Follow the test plan precisely while adhering to project conventions
-- Use dependency injection patterns as specified in the plan
-- Implement tests exactly as planned without adding extras
-- Maintain systematic test numbering as outlined in the plan
-- Ensure tests validate behavior, not implementation
-- Document any necessary deviations from the plan with clear justification
-
-## Test Implementation Process
-
-Execute the following steps for test plan: `$ARGUMENTS`
-
-### 0. Pre-Flight Verification
-Verify access to project guidelines:
-
-Read and confirm you can access the complete project guidelines:
-- Testing: @.auxiliary/instructions/tests.rst
-- Practices: @.auxiliary/instructions/practices.rst
-- Style: @.auxiliary/instructions/style.rst
-
-You must successfully access and read all three guides before proceeding. If any guide cannot be accessed, stop and inform the user.
-
-### 1. Test Plan Reading and Validation
-Read and validate the provided test plan:
-
-Read the test plan document at the provided path:
-```
-Read the test plan file at: $ARGUMENTS
-```
-
-**Validate plan completeness:**
-- Verify plan contains coverage analysis summary
-- Confirm test strategy is clearly defined
-- Check that component-specific tests are detailed
-- Ensure implementation notes are present
-- Validate success metrics are specified
-
-Stop if the plan is incomplete, unclear, or missing critical sections.
-
-### 2. Plan Compliance Verification
-**Ensure plan aligns with project principles:**
-
-**Verify plan adheres to project testing guidelines:**
-- No monkey-patching of internal code required
-- Dependency injection patterns are viable
-- Test numbering follows project conventions
-- No external network testing planned
-
-**Check for conflicts with existing tests:**
-- Review planned test module names against existing files
-- Verify planned test function numbering doesn't conflict
-- Ensure no duplication of existing test coverage
-
-### 3. Test Data and Fixture Setup
-**Prepare test data as specified in the plan:**
-
-**Create required test data under tests/data/:**
-- Set up fake packages for extension mechanisms (if planned)
-- Prepare captured artifacts and snapshots (if planned)
-- Create any mock data files as specified in the plan
-
-Only create test data explicitly mentioned in the test plan.
-
-### 4. Test Module Creation/Updates
-**Implement test modules following the plan:**
-
-**For each planned test module:**
-- Create or update test files with planned naming (e.g., `test_100_exceptions.py`)
-- Follow planned test function numbering within modules
-- Implement only the tests specified in the plan
-- Use dependency injection patterns as outlined in the plan
-
-**Key Implementation Guidelines:**
-- Use dependency injection for all external dependencies as planned
-- Prefer `pyfakefs.Patcher()` for filesystem operations as specified
-- Mock only third-party services, never internal code
-- **Insert tests in numerical order within files** - do NOT append to end
-- **Write behavior-focused docstrings**: "Functionality is correct with Y" NOT "function_name does X with Y"
-- Follow existing naming conventions and code style
-- Implement tests in the exact order and numbering specified in the plan
-
-### 5. Coverage Validation
-**Verify implementation matches plan coverage goals:**
-```bash
-hatch --env develop run testers
-hatch --env develop run coverage report --show-missing
-```
-
-Verify plan compliance:
-- Run full test suite to ensure no regressions
-- Check that coverage matches the plan's target metrics
-- Verify all planned test functions are implemented
-- Confirm coverage gaps identified in the plan are addressed
-- Ensure no existing functionality is broken
-
-### 6. Code Quality Validation
-**Ensure implemented tests meet project standards:**
-```bash
-hatch --env develop run linters
-```
-
-**Requirements:**
-- All linting checks must pass
-- Note that the linters do not check style; you must verify style compliance
-- No violations of project coding standards
-- Test docstrings are clear and descriptive
-- Proper imports and dependencies
-- Implementation follows all conventions specified in the plan
-
-## Test Pattern Examples
-
-**Dependency Injection Pattern:**
-```python
-async def test_100_process_with_custom_processor( ):
- ''' Process function accepts custom processor via injection. '''
- def mock_processor( data ):
- return f"processed: {data}"
-
- result = await process_data( 'test', processor = mock_processor )
- assert result == "processed: test"
-```
-
-**Filesystem Operations (Preferred):**
-```python
-from pathlib import Path
-
-from pyfakefs.fake_filesystem_unittest import Patcher
-
-def test_200_config_file_processing( ):
-    ''' Configuration files are processed correctly. '''
-    with Patcher( ) as patcher:
-        fs = patcher.fs
-        fs.create_file( '/fake/config.toml', contents = '[section]\nkey="value"' )
-        result = process_config_file( Path( '/fake/config.toml' ) )
-        assert result.key == 'value'
-```
-
-**Error Handling:**
-```python
-import pytest
-
-def test_300_invalid_input_handling( ):
-    ''' Invalid input raises appropriate exceptions. '''
-    with pytest.raises( ValueError, match = "Invalid data format" ):
-        process_invalid_data( "malformed" )
-```
-
-## Success Criteria
-
-Implementation is complete when:
-- [ ] All tests specified in the plan have been implemented
-- [ ] Coverage matches or exceeds the plan's target metrics
-- [ ] All planned test modules and functions are created with correct numbering
-- [ ] Test data and fixtures are set up as specified in the plan
-- [ ] All new tests pass consistently
-- [ ] No existing tests are broken
-- [ ] Linting passes without issues
-- [ ] Project coding practices and style have been followed
-- [ ] Tests follow project numbering conventions as planned
-- [ ] Tests are inserted in proper numerical order within files
-- [ ] Test docstrings focus on behavior, not function names
-- [ ] Dependency injection is used as specified in the plan
-- [ ] No monkey-patching of internal code
-- [ ] Performance-conscious patterns are applied as planned
-
-**Note**: Always run full validation (`hatch --env develop run linters && hatch
---env develop run testers`) before considering the task complete.
-
-## Final Report
-
-Upon completion, provide a brief report covering:
-- **Plan Compliance**: Confirmation that all planned tests were implemented as specified
-- **Coverage Achievement**: Final coverage percentages vs. plan targets
-- **Deviations from Plan**: Any necessary changes made to the plan during implementation with justification
-- **Technical Issues Resolved**: Any conflicts encountered and how they were resolved
-- **Pragma Directives Applied**: Any `# pragma: no cover` or `# pragma: no branch` added with rationale
-- **Test Data Created**: Summary of fixtures and test data files created under `tests/data/`
-- **Module Updates**: List of test modules created or updated with their numbering
-- **Code Quality**: Confirmation that tests are properly ordered and have behavior-focused docstrings
diff --git a/.auxiliary/configuration/claude/commands/cs-document-examples-rst.md b/.auxiliary/configuration/claude/commands/cs-document-examples-rst.md
deleted file mode 100644
index 0efbcb2..0000000
--- a/.auxiliary/configuration/claude/commands/cs-document-examples-rst.md
+++ /dev/null
@@ -1,115 +0,0 @@
----
-allowed-tools: [Read, Write, Edit, MultiEdit, Glob, Grep, LS, Bash(ls:*), Bash(find:*), Bash(hatch --env develop run:*), mcp__pyright__definition, mcp__pyright__references]
-description: Creates practical, testable examples documentation
----
-
-# Document Examples
-
-Develops practical, testable examples for documentation under
-`documentation/examples/` that increase test coverage while remaining relatable
-and succinct.
-
-Topic: $ARGUMENTS
-
-## Context
-
-- Project structure: @documentation/architecture/filesystem.rst
-- Existing examples: !`ls -la documentation/examples/ 2>/dev/null || echo "No examples directory"`
-- Code coverage data: !`hatch --env develop run testers 2>/dev/null || echo "No coverage data available"`
-
-## Prerequisites
-
-Before creating examples documentation:
-- Understand the target audience (developers vs end users)
-- Analyze existing codebase to identify core functionality patterns
-- Review existing examples for organization, completeness, and thematic inspiration
-- Examine @.auxiliary/instructions/ for style and nomenclature requirements
-
-## Process Summary
-
-Key functional areas:
-1. **Analysis**: Survey codebase and existing examples to identify documentation gaps
-2. **Theme Development**: Create coherent scenarios that demonstrate functionality progression
-3. **Content Creation**: Write succinct examples using proper reStructuredText formatting
-4. **Validation**: Ensure examples follow project practices and can serve as informal tests
-
-## Safety Requirements
-
-Stop and consult the user if:
-- Examples require creating contrived scenarios that don't reflect real usage
-- Multiple conflicting themes emerge without clear organizational strategy
-- Proposed examples would expose internal implementation details inappropriately
-- Documentation format conflicts with existing project conventions
-
-## Execution
-
-Execute the following steps:
-
-### 1. Analyze Existing Documentation Structure
-
-Survey the current documentation to understand patterns and identify gaps. Read
-existing example files to understand established themes and formatting
-approaches.
-
-### 2. Survey Codebase for Example Opportunities
-
-Identify public API surfaces and common usage patterns. Analyze coverage
-reports in `.auxiliary/artifacts/coverage-pytest` if available.
-
-Look for:
-- Public classes and functions that need demonstration
-- Common workflows that span multiple components
-- CLI commands and their typical usage patterns
-- Error handling scenarios that users should understand
-
-### 3. Develop Thematic Coherence
-
-Based on analysis, choose one of these organizational approaches:
-
-- **Domain scenarios**: Practical use cases
-- **API progression**: Basic to advanced usage of core functionality
-- **Workflow examples**: End-to-end processes showing component interaction
-- **CLI workflows**: Command sequences for common tasks
-
-### 4. Create Example Documentation
-
-Write examples following these requirements:
-
-- Use Sphinx reStructuredText format with proper double backticks for inline literals
-- Include blank lines before list items per reStructuredText conventions
-- Structure as progression from simple to complex scenarios
-- Use doctest format for Python API examples where testable
-- Use code-block format for CLI examples with explicit command annotation
-- Keep code blocks comment-free; put explanatory text between blocks
-- Follow @.auxiliary/instructions/practices.rst for code organization
-- Follow @.auxiliary/instructions/style.rst for formatting
-- Follow @.auxiliary/instructions/nomenclature.rst for naming
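
For instance, a doctest-style API example in reStructuredText might look like this (``frobnicate`` and its output are placeholders, not real project names):

```rst
Basic usage starts with a single value:

>>> from mypackage import frobnicate
>>> frobnicate( 'value' )
'value-frobnicated'
```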
-
-### 5. Ensure Practical Relevance
-
-Verify each example:
-
-- Demonstrates functionality users actually need
-- Shows practical data and scenarios, remaining minimalist rather than elaborate
-- Includes appropriate error cases and edge conditions
-- Can serve as informal test coverage for documented features
-- Follows established project patterns for similar examples
-
-### 6. Validate Documentation Quality
-
-Review final documentation for:
-
-- Proper reStructuredText syntax and formatting
-- Consistent theme and progression across examples
-- Adherence to project style guidelines
-- Executable/testable nature of code examples
-- Clear explanatory text that guides readers through concepts
-
-### 7. Provide Summary
-
-Provide a succinct summary to the user describing:
-
-- What examples were created or updated
-- The organizational theme chosen and why
-- Key functionality areas covered
-- How the examples serve both documentation and testing goals
diff --git a/.auxiliary/configuration/claude/commands/cs-inquire.md b/.auxiliary/configuration/claude/commands/cs-inquire.md
deleted file mode 100644
index 9e8a639..0000000
--- a/.auxiliary/configuration/claude/commands/cs-inquire.md
+++ /dev/null
@@ -1,72 +0,0 @@
----
-allowed-tools: Read, LS, Glob, Grep, WebFetch, WebSearch
-description: Provide analytical responses and technical opinions without making code changes
----
-
-# Technical Analysis and Discussion
-
-Provide analytical responses, technical opinions, and architectural discussion
-based on user questions. Focus on analysis and reasoning without making code
-modifications.
-
-User question or topic: `$ARGUMENTS`
-
-Stop and consult if:
-- The request explicitly asks for code changes or implementation
-- The question is unclear or lacks sufficient context
-- Multiple conflicting requirements are presented
-
-## Prerequisites
-
-Before providing analysis, ensure:
-- Clear understanding of the technical question being asked
-- Sufficient context about the codebase or architecture being discussed
-
-## Process Summary
-
-Key analytical areas:
-1. **Question Analysis**: Understand what is being asked and why
-2. **Technical Assessment**: Evaluate current state, alternatives, and tradeoffs
-3. **Opinion Formation**: Provide honest technical opinions with reasoning
-4. **Discussion**: Present pros/cons, alternatives, and recommendations
-
-## Execution
-
-Execute the following process:
-
-### 1. Question Understanding
-Carefully analyze the user's question to understand:
-- What specific technical aspect they want to discuss
-- The context and scope of their concern
-- Whether they're seeking validation, alternatives, or general analysis
-
-### 2. Current State Assessment
-Examine relevant parts of the codebase or architecture, if necessary:
-- Read pertinent files to understand current implementation
-- Identify patterns, conventions, and existing approaches
-- Note any potential issues or areas of concern
-
-### 3. Technical Analysis
-Provide comprehensive analysis including:
-- **Strengths**: What works well in the current approach
-- **Weaknesses**: Potential issues, limitations, or concerns
-- **Alternatives**: Different approaches that could be considered
-- **Tradeoffs**: Benefits and costs of different options
-
-### 4. Opinion and Recommendations
-Offer honest technical opinions:
-- Present your assessment based on best practices and experience
-- Provide pushback if you disagree with assumptions or proposals
-- Suggest better alternatives when they exist
-- Explain the reasoning behind your recommendations
-
-### 5. Discussion Points
-Raise additional considerations:
-- Edge cases that might not have been considered
-- Long-term maintenance implications
-- Performance, security, or scalability concerns
-- Integration with existing systems or patterns
-
-Remember: Your role is to analyze, discuss, and provide technical opinions -
-not to implement solutions or make code changes. Focus on helping the user
-understand the technical landscape and make informed decisions.
diff --git a/.auxiliary/configuration/claude/commands/cs-manage-prd.md b/.auxiliary/configuration/claude/commands/cs-manage-prd.md
deleted file mode 100644
index f1dc369..0000000
--- a/.auxiliary/configuration/claude/commands/cs-manage-prd.md
+++ /dev/null
@@ -1,90 +0,0 @@
----
-allowed-tools: [Read, Write, Edit, MultiEdit, LS, Glob, Grep]
-description: Manage product requirements documents and feature planning
----
-
-# Product Requirements Management
-
-Manage and update the Product Requirements Document (PRD) based on user input
-about product requirements, feature planning, and related topics.
-
-Request from user: $ARGUMENTS
-
-## Context
-
-- Current PRD state: @documentation/prd.rst
-- Requirements guidelines: @.auxiliary/instructions/requirements.rst
-
-## Prerequisites
-
-Before managing PRD content, ensure:
-- Understanding of current project scope and objectives
-- Familiarity with existing functional and non-functional requirements
-- @.auxiliary/instructions/requirements.rst guidelines are followed
-- Changes align with overall project strategy
-
-## Process Summary
-
-Key functional areas:
-1. **Analysis**: Review current PRD and understand requested changes
-2. **Requirements Processing**: Apply requirements.rst standards to new content
-3. **PRD Updates**: Make structured updates to documentation/prd.rst
-4. **Validation**: Ensure consistency and completeness
-
-### Process Restrictions
-
-- Do not provide a timeline for deliverables.
-- Do not plan sprints.
-
-## Safety Requirements
-
-Stop and consult the user if:
-- Requested changes significantly expand or reduce product scope
-- New requirements conflict with existing non-functional requirements
-- Changes affect critical path features or constraints
-- Requirements lack sufficient detail for implementation planning
-
-## Execution
-
-Execute the following steps:
-
-### 1. Review Current State
-Read and analyze the existing PRD to understand current scope.
-
-### 2. Process User Requirements
-Analyze the user input for:
-- New functional requirements
-- Changes to existing requirements
-- Updates to goals, objectives, or success criteria
-- Modifications to user personas or target users
-- New constraints or assumptions
-
-### 3. Apply Requirements Standards
-Follow @.auxiliary/instructions/requirements.rst guidelines:
-- Use specific, measurable, achievable, relevant, testable criteria
-- Apply proper user story format when appropriate
-- Assign requirement priorities (Critical/High/Medium/Low)
-- Include acceptance criteria for functional requirements
-- Maintain requirement traceability
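
A hypothetical functional requirement following these standards might read as follows (the identifier and content are invented for illustration):

```rst
FR-12: Configuration Export (Priority: Medium)
    As a maintainer, I want to export the active configuration to TOML
    so that I can reproduce a deployment elsewhere.

    Acceptance criteria:

    - Export completes without errors for a default configuration.
    - The exported file round-trips through the configuration loader.
```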
-
-### 4. Update PRD Structure
-Make targeted updates to appropriate PRD sections:
-- Executive Summary (if scope changes)
-- Problem Statement (if new problems identified)
-- Goals and Objectives (if success criteria change)
-- Target Users (if new personas or needs identified)
-- Functional Requirements (most common updates)
-- Non-Functional Requirements (if technical requirements change)
-- Constraints and Assumptions (if new limitations discovered)
-- Out of Scope (if boundaries need clarification)
-
-### 5. Maintain Consistency
-Ensure all updates maintain PRD coherence:
-- Requirements align with stated goals and objectives
-- No conflicts between functional and non-functional requirements
-- User stories trace back to identified user needs
-- Acceptance criteria are testable and specific
-- Priority assignments reflect user value
-
-### 6. Summarize Updates
-Provide concise summary of updates to the user.
diff --git a/.auxiliary/configuration/claude/commands/cs-obtain-instructions.md b/.auxiliary/configuration/claude/commands/cs-obtain-instructions.md
deleted file mode 100644
index 27a21f4..0000000
--- a/.auxiliary/configuration/claude/commands/cs-obtain-instructions.md
+++ /dev/null
@@ -1,40 +0,0 @@
----
-allowed-tools: Bash(curl:*), Bash(mkdir:*), LS, Read
-description: Download all project documentation guides locally for offline reference
----
-
-# Download Project Documentation Guides
-
-You need to download all project documentation guides to `.auxiliary/instructions/` for local reference.
-
-## Your Task
-
-1. **Create the local directory:**
- ```bash
- mkdir -p .auxiliary/instructions
- ```
-
-2. **Download all guides using curl (overwrite existing files):**
-
- Base URL: `https://raw.githubusercontent.com/emcd/python-project-common/refs/tags/docs-1/documentation/common/`
-
- **Download these files:**
- - `nomenclature.rst` - Naming conventions and terminology standards
- - `nomenclature-germanic.rst` - Conversion between Germanic-derived and Latin-derived nomenclature
- - `practices.rst` - Core development practices and architectural patterns
- - `style.rst` - Code formatting and stylistic conventions
- - `tests.rst` - Test development and validation patterns
-
- Use `curl` with `-o` flag to overwrite existing files in `.auxiliary/instructions/[filename]`
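
The download step can be sketched as a loop (assuming the base URL above is still current; failures are reported per file rather than aborting):

```shell
base='https://raw.githubusercontent.com/emcd/python-project-common/refs/tags/docs-1/documentation/common'
mkdir -p .auxiliary/instructions
for name in nomenclature.rst nomenclature-germanic.rst practices.rst style.rst tests.rst; do
    # -o overwrites any existing copy of the guide.
    curl --fail --silent --show-error \
        -o ".auxiliary/instructions/${name}" "${base}/${name}" \
        || printf 'download failed: %s\n' "${name}"
done
```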
-
-3. **Verify the downloads:**
-   - Check that all five files were created and have reasonable sizes
- - Briefly inspect content to ensure they're not error pages
- - Report what was downloaded successfully
-
-## Expected Outcome
-
-After completion:
-- All five guide files available locally in `.auxiliary/instructions/`
-- Other commands can use `@.auxiliary/instructions/practices.rst` instead of WebFetch
-- Faster, offline access to project documentation during conformance tasks
diff --git a/.auxiliary/configuration/claude/commands/cs-plan-pytests.md b/.auxiliary/configuration/claude/commands/cs-plan-pytests.md
deleted file mode 100644
index 9aa329c..0000000
--- a/.auxiliary/configuration/claude/commands/cs-plan-pytests.md
+++ /dev/null
@@ -1,262 +0,0 @@
----
-allowed-tools: Bash(hatch --env develop run:*), Bash(git status), Bash(git log:*), Bash(echo:*), Bash(ls:*), Bash(find:*), LS, Read, Glob, Grep, Write, Edit, WebFetch
-description: Analyze Python test coverage gaps and create comprehensive test implementation plan
----
-
-# Plan Python Tests
-
-For systematic analysis of test coverage gaps and creation of detailed test
-implementation plans following project testing guidelines.
-
-Target module/functionality: $ARGUMENTS
-
-Focus on analysis and planning only - do not implement tests.
-
-## Context
-
-- Current git status: !`git status --porcelain`
-- Current branch: !`git branch --show-current`
-- Current test coverage: !`hatch --env develop run coverage report --show-missing`
-- Existing test structure: !`find tests -name "*.py" | head -20`
-- Test organization: @documentation/architecture/testplans/summary.rst
-- Test plans index: @documentation/architecture/testplans/index.rst
-
-## Prerequisites
-
-Ensure that you:
-- Have access to target code modules for analysis
-- Can generate current coverage reports
-- Have read any relevant `CLAUDE.md` file
-- Understand the test-writing guidelines: @.auxiliary/instructions/tests.rst
-
-## Safety Requirements
-
-Stop and consult the user if:
-- No target module or functionality is provided
-- Target code cannot be analyzed
-- Coverage data is unavailable
-- Coverage reports cannot be generated
-- Target modules cannot be read or analyzed
-- Analysis reveals fundamental testability issues
-- Test guidelines cannot be accessed
-- Network tests against real external sites are being considered
-
-**Your responsibilities:**
-- Focus entirely on analysis and planning - NO implementation
-- Create comprehensive, actionable test plans WITHOUT code snippets of test implementations
-- Brief third-party library examples (e.g., httpx mock transport) are acceptable if researched
-- Identify all coverage gaps systematically
-- Consider project testing principles in planning
-- Produce clear, structured planning artifacts
-- Acknowledge immutability constraints - modules under test CANNOT be monkey-patched
-- Test private functions/methods via public API - understand why if this fails
-
-## Test Planning Process
-
-Execute the following steps for target: `$ARGUMENTS`
-
-### 0. Pre-Flight Verification
-Access test-writing guidelines:
-
-Read and understand the complete testing guidelines:
-@.auxiliary/instructions/tests.rst
-
-You must successfully access and understand the guide before proceeding. If the guide cannot be accessed, stop and inform the user.
-
-### 1. Coverage Analysis Phase
-
-**Generate and analyze current coverage data:**
-
-```bash
-hatch --env develop run coverage report --show-missing
-hatch --env develop run coverage html
-```
-
-Analysis requirements:
-- Identify all uncovered lines in target modules
-- Analyze which functions/classes lack any tests
-- Determine which code paths are partially covered
-- Note any pragma directives (# pragma: no cover) and their rationale
-
-**For each target module:**
-- Read the source code to understand the public API
-- Identify all functions, classes, and methods
-- Map uncovered lines to specific functionality
-- Note dependency injection points and testability patterns
-
-### 2. Gap Identification Phase
-
-**Systematically catalog what needs testing:**
-
-**Functionality Gaps:**
-- Functions with zero test coverage
-- Classes with untested methods
-- Error handling paths not exercised
-- Edge cases not covered
-
-**Coverage Gaps:**
-- Specific line numbers needing coverage
-- Branch conditions not tested
-- Exception handling paths missed
-- Integration scenarios untested
-
-**Architecture Gaps:**
-- Code that requires dependency injection for testability
-- Components that need filesystem mocking
-- External service interactions requiring test doubles
-- Private functions/methods not exercisable via public API
-- Areas where full coverage may require violating immutability constraints
-- Test data requirements (fixtures, snapshots, fake packages for `tests/data/`)
-
-### 3. Test Strategy Development
-
-**For each identified gap, determine:**
-
-**Test Approach:**
-- Which testing patterns apply (dependency injection, pyfakefs, etc.)
-- What test doubles or fixtures are needed
-- How to structure tests for maximum coverage
-
-**Test Categories:**
-- Basic functionality tests (000-099 range)
-- Component-specific tests (100+ blocks per function/class/method)
-- Edge cases and error handling (integrated within component blocks)
-
-**Implementation Considerations:**
-- Dependencies that need injection
-- Filesystem operations requiring pyfakefs
-- External services needing mocking (NEVER test against real external sites)
-- Test data and fixtures needed under `tests/data/`
-- Performance considerations
-
-### 4. Test Organization Planning
-
-**Determine test structure and numbering:**
-
-**Review existing test numbering conventions:**
-- Analyze current test file naming patterns
-- Identify next available number blocks for new test modules
-- Plan numbering for new test functions within modules
-
-Test module vs function numbering:
- - **Test modules**: Named as `test_<N>00_<module>.py` (e.g., `test_100_exceptions.py`, `test_500_cli.py`)
-- **Test functions**: Within modules use 000-099 basic, 100+ blocks per component
-- These are DIFFERENT numbering schemes - do not confuse them
-
-**Test Module Numbering Hierarchy:**
-- Lower-level functionality gets lower numbers (e.g., `test_100_exceptions.py`, `test_110_utilities.py`)
-- Higher-level functionality gets higher numbers (e.g., `test_500_cli.py`, `test_600_server.py`)
- - Subpackage modules: `test_<N>0_<subpackage>_<module>.py` where `<N>` advances by 10 within subpackage
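As an illustration of the hierarchy, sorting planned module names by their number block shows low-level modules ahead of high-level ones. The module names below are hypothetical examples, not files from this project:

```shell
# Hypothetical test modules; names are illustrative assumptions only.
modules='test_500_cli.py
test_100_exceptions.py
test_110_utilities.py'

# Sort numerically on the number block (second underscore-delimited field)
# to view the low-to-high hierarchy and spot free number blocks.
printf '%s\n' "$modules" | sort -t_ -k2,2n
```

Run as written, this lists `test_100_exceptions.py` first and `test_500_cli.py` last.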
-
-**Update test organization documentation:**
-- Update `documentation/architecture/testplans/summary.rst` with test module numbering scheme
-- Include project-specific testing conventions and new modules being planned
-- Document rationale for any pattern exceptions
-- Update during planning, not during implementation
-
-### 5. Plan Documentation Creation
-
-**Create comprehensive test plan document:**
-
-Save the plan to `documentation/architecture/testplans/[sanitized-module-name].rst` and update `documentation/architecture/testplans/index.rst` to include the new test plan in the toctree.
-
-Create the test plan document with:
-
-**Plan Structure (reStructuredText format):**
-```rst
-*******************************************************************************
-Test Plan: [Module Name]
-*******************************************************************************
-
-Coverage Analysis Summary
-===============================================================================
-
-- Current coverage: X%
-- Target coverage: 100%
-- Uncovered lines: [specific line numbers]
-- Missing functionality tests: [list]
-
-Test Strategy
-===============================================================================
-
-Basic Functionality Tests (000-099)
--------------------------------------------------------------------------------
-
-- [List planned tests with brief descriptions]
-
-Component-Specific Tests (100+ blocks)
--------------------------------------------------------------------------------
-
-Function/Class/Method: [name] (Tests 100-199)
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-- [Planned test descriptions including happy path, edge cases, and error handling]
-- [Dependencies needing injection]
-- [Special considerations]
-
-Implementation Notes
-===============================================================================
-
-- Dependencies requiring injection: [list]
-- Filesystem operations needing pyfakefs: [list]
-- External services requiring mocking: [list - NEVER test against real external sites]
-- Test data and fixtures: [needed under tests/data/ - fake packages, snapshots, captured artifacts]
-- Private functions/methods not testable via public API: [list with analysis]
-- Areas requiring immutability constraint violations: [list with recommendations]
-- Third-party testing patterns to research: [e.g., httpx mock transport]
-- Test module numbering for new files: [following hierarchy conventions]
-- Anti-patterns to avoid: [specific warnings including external network calls]
-
-Success Metrics
-===============================================================================
-
-- Target line coverage: [percentage]
-- Branch coverage goals: [percentage]
-- Specific gaps to close: [line numbers]
-```
-
-### 6. Plan Validation
-
-**Review and validate the plan:**
-
-**Completeness Check:**
-- All uncovered lines addressed
-- All functions/classes have test strategy
-- Error paths and edge cases included
-- Integration scenarios covered
-
-**Feasibility Check:**
-- All planned tests align with project principles
-- No monkey-patching of internal code required
-- Dependency injection patterns are viable
-- Performance considerations addressed
-
-**Numbering Check:**
-- Test numbering follows project conventions
-- No conflicts with existing test numbers
-- Logical organization by test type
-
-## Success Criteria
-
-Planning is complete when:
-- [ ] Complete coverage analysis performed
-- [ ] All testing gaps systematically identified
-- [ ] Test strategy developed for each gap
-- [ ] Test organization and numbering planned
-- [ ] `documentation/architecture/testplans/summary.rst` updated as needed
-- [ ] Comprehensive plan document created in testplans directory
-- [ ] `documentation/architecture/testplans/index.rst` updated to include new plan
-- [ ] Plan validates against project testing principles
-- [ ] Implementation approach is clear and actionable
-
-## Final Report
-
-Upon completion, provide a brief summary covering:
-- Current coverage percentage and specific gaps identified
-- Number of new tests planned by category
-- Key architectural considerations (dependency injection needs, etc.)
-- Assessment: Areas where 100% coverage may be impossible without violating immutability constraints
-- **PUSHBACK RECOMMENDATIONS**: Suggested architectural improvements to enable better testability
-- Private functions/methods that cannot be exercised via public API and analysis of why
-- Estimated complexity and implementation priority
-- Any potential challenges or special considerations
diff --git a/.auxiliary/configuration/claude/commands/cs-release-checkpoint.md b/.auxiliary/configuration/claude/commands/cs-release-checkpoint.md
deleted file mode 100644
index 469da51..0000000
--- a/.auxiliary/configuration/claude/commands/cs-release-checkpoint.md
+++ /dev/null
@@ -1,161 +0,0 @@
----
-allowed-tools: Bash(git status), Bash(git pull:*), Bash(git add:*), Bash(git commit:*), Bash(git tag:*), Bash(git push:*), Bash(gh run list:*), Bash(gh run watch:*), Bash(hatch version:*), Bash(hatch --env develop run:*), Bash(echo:*), Bash(ls:*), Bash(grep:*), Bash(date:*), LS, Read
-description: Execute automated alpha checkpoint release with QA monitoring
----
-
-# Release Checkpoint
-
-**NOTE: This is an experimental workflow! If anything seems unclear or missing,
-please stop for consultation with the user.**
-
-For execution of an automated alpha checkpoint release on master branch.
-
-Below is a validated process to create an alpha checkpoint release with automated
-monitoring and version increment.
-
-Target alpha increment: `$ARGUMENTS` (optional - defaults to next alpha)
-
-Verify current version is alpha format if no arguments provided.
-
-Stop and consult if:
-- Working directory has uncommitted changes
-- Current version is not an alpha version (e.g., 1.3.0, 1.3rc1) and no target specified
-- Git operations fail or produce unexpected output
-
-## Context
-
-- Current git status: !`git status`
-- Current branch: !`git branch --show-current`
-- Current version: !`hatch version`
-- Recent commits: !`git log --oneline -10`
-
-## Prerequisites
-
-Before starting, ensure:
-- GitHub CLI (`gh`) is installed and authenticated
-- Working directory is clean with no uncommitted changes
-- Currently on master branch
-- Current version is an alpha version (e.g., 1.3a0)
-
-## Process Summary
-
-Key functional areas of the process:
-
-1. **Pre-Release Quality Check**: Run local QA to catch issues early
-2. **Changelog Generation**: Run Towncrier to build changelog
-3. **QA Monitoring**: Push commits and monitor QA workflow with GitHub CLI
-4. **Tag Release**: Create alpha tag with current version after QA passes
-5. **Release Monitoring**: Monitor release workflow deployment
-6. **Post-Release Cleanup**: Remove news fragments and bump alpha version
-
-## Safety Requirements
-
-Stop and consult the user if any of the following occur:
-
-- **Step failures**: If any command fails, git operation errors, or tests fail
-- **Workflow failures**: If QA or release workflows show failed jobs
-- **Unexpected output**: If commands produce unclear or concerning results
-- **Version conflicts**: If version bumps don't match expected patterns
-- **Network issues**: If GitHub operations timeout or fail repeatedly
-
-**Your responsibilities**:
-- Validate each step succeeds before proceeding to the next
-- Monitor workflow status and halt on any failures
-- Provide clear progress updates throughout the process
-- Maintain clean git hygiene
-- Use your judgment to assess when manual intervention is needed
-
-## Release Process
-
-Execute the following steps:
-
-### 1. Pre-Release Quality Check
-Run local quality assurance to catch issues early:
-```bash
-git status && git pull origin master
-hatch --env develop run linters
-hatch --env develop run testers
-hatch --env develop run docsgen
-```
-
-### 2. Changelog Generation
-Run Towncrier to update changelog with current version:
-```bash
-hatch --env develop run towncrier build --keep --version $(hatch version)
-git commit -am "Update changelog for v$(hatch version) release."
-```
-
-### 3. Quality Assurance Phase
-Push commits and monitor QA workflow:
-```bash
-git push origin master
-```
-
-Workflow monitoring requirements:
-After pushing, you MUST ensure you monitor the correct QA workflow run:
-
-1. **Wait for workflow trigger**: Wait 10 seconds after pushing to allow GitHub to trigger the workflow
-2. **Verify correct workflow**: Use `gh run list --workflow=qa --limit=5` to list recent runs
-3. **Check timestamps**: Compare the workflow creation time with your push time using `date --utc`
-4. **Ensure fresh run**: Only monitor a workflow run that was created AFTER your push timestamp
-5. **If no new run appears**: Wait additional time and check again - do NOT assume an old completed run is your workflow
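The timestamp comparison in steps 3-4 can be sketched in shell. Because ISO-8601 UTC timestamps sort lexicographically, a plain string comparison suffices; the timestamp values below are illustrative placeholders, not real workflow data:

```shell
# Record the push time (UTC, ISO-8601) immediately before pushing, e.g.:
#   push_time=$(date --utc +%Y-%m-%dT%H:%M:%SZ)
# Fixed placeholder values are used here for illustration.
push_time='2024-05-01T12:00:00Z'
run_created='2024-05-01T12:00:42Z'   # e.g. parsed from: gh run list --json createdAt

# ISO-8601 UTC strings compare correctly as plain strings.
if [ "$run_created" \> "$push_time" ]; then
    echo 'fresh run: safe to monitor'
else
    echo 'stale run: keep waiting'
fi
```

Only a run whose `createdAt` exceeds the recorded push time should be passed to `gh run watch`.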
-
-Once you've identified the correct QA run ID:
-```bash
-gh run watch --interval 30 --compact
-```
-
-Do not proceed until workflow completes:
-- Monitor QA workflow with `gh run watch` using the correct run ID
-- Use `timeout: 300000` (5 minutes) parameter in Bash tool for monitoring commands
-- If command times out, immediately rerun `gh run watch` until completion
-- Only proceed to next step after seeing "✓ [workflow-name] completed with 'success'"
-- Stop if any jobs fail - consult user before proceeding
-
-### 4. Alpha Release Deployment
-**Verify QA passed before proceeding to alpha tag:**
-```bash
-git tag -m "Alpha checkpoint v$(hatch version)." v$(hatch version)
-git push --tags
-```
-
-Release workflow monitoring requirements:
-After pushing the tag, you MUST ensure you monitor the correct release workflow run:
-
-1. **Wait for workflow trigger**: Wait 10 seconds after pushing tags to allow GitHub to trigger the release workflow
-2. **Verify correct workflow**: Use `gh run list --workflow=release --limit=5` to list recent runs
-3. **Check timestamps**: Compare the workflow creation time with your tag push time using `date --utc`
-4. **Ensure fresh run**: Only monitor a workflow run that was created AFTER your tag push timestamp
-5. **If no new run appears**: Wait additional time and check again - do NOT assume an old completed run is your workflow
-
-Once you've identified the correct release run ID:
-```bash
-gh run watch --interval 30 --compact
-```
-
-Do not proceed until workflow completes:
-- Monitor release workflow with `gh run watch` using the correct run ID
-- Use `timeout: 600000` (10 minutes) parameter in Bash tool for monitoring commands
-- If command times out, immediately rerun `gh run watch` until completion
-- Only proceed to next step after seeing "✓ [workflow-name] completed with 'success'"
-- Stop if any jobs fail - consult user before proceeding
-
-### 5. Post-Release Cleanup
-Clean up Towncrier fragments:
-```bash
-git rm .auxiliary/data/towncrier/*.rst
-git commit -m "Clean up news fragments."
-```
-
-### 6. Next Alpha Version
-Bump to next alpha version:
-```bash
-hatch version alpha
-git commit -am "Version: $(hatch version)"
-```
-
-### 7. Final Push
-Push cleanup and version bump commits:
-```bash
-git push origin master
-```
\ No newline at end of file
diff --git a/.auxiliary/configuration/claude/commands/cs-release-final.md b/.auxiliary/configuration/claude/commands/cs-release-final.md
deleted file mode 100644
index ea400a4..0000000
--- a/.auxiliary/configuration/claude/commands/cs-release-final.md
+++ /dev/null
@@ -1,194 +0,0 @@
----
-allowed-tools: Bash(git status), Bash(git pull:*), Bash(git checkout:*), Bash(git add:*), Bash(git commit:*), Bash(git tag:*), Bash(git rm:*), Bash(git cherry-pick:*), Bash(git log:*), Bash(git branch:*), Bash(gh run list:*), Bash(gh run watch:*), Bash(hatch version:*), Bash(hatch --env develop run:*), Bash(echo:*), Bash(ls:*), Bash(grep:*), LS, Read
-description: Execute automated final release with QA monitoring and development cycle setup
----
-
-# Release Final
-
-**NOTE: This is an experimental workflow! If anything seems unclear or missing,
-please stop for consultation with the user.**
-
-For execution of a fully-automated final release.
-
-Below is a validated process to create a final release with automated
-monitoring and next development cycle setup.
-
-Target release version: `$ARGUMENTS`
-
-Verify exactly one target release version provided.
-
-Stop and consult if:
-- No target release version is provided
-- Multiple release versions provided (e.g., `1.6 foo bar`)
-- Release version format doesn't match `X.Y` pattern (e.g., `1.6.2`, `1.6a0`)
-
-## Context
-
-- Current git status: !`git status`
-- Current branch: !`git branch --show-current`
-- Current version: !`hatch version`
-- Recent commits: !`git log --oneline -10`
-- Available towncrier fragments: !`ls .auxiliary/data/towncrier/*.rst 2>/dev/null || echo "No fragments found"`
-
-## Prerequisites
-
-Before starting, ensure:
-- GitHub CLI (`gh`) is installed and authenticated
-- For new releases: All changes are committed to `master` branch
-- For existing release branches: Release candidate has been validated and tested
-- Working directory is clean with no uncommitted changes
-- Towncrier news fragments are present for the release enhancements
-
-## Process Summary
-
-Key functional areas of the process:
-
-1. **Branch Setup**: Create new release branch or checkout existing one
-2. **Version Bump**: Set version to final release (major/minor/patch as appropriate)
-3. **Update Changelog**: Run Towncrier to build final changelog
-4. **QA Monitoring**: Push commits and monitor QA workflow with GitHub CLI
-5. **Tag Release**: Create signed git tag after QA passes
-6. **Release Monitoring**: Monitor release workflow deployment
-7. **Cleanup**: Remove news fragments and cherry-pick back to master
-8. **Next Development Cycle**: Set up master branch for next development version
-
-## Safety Requirements
-
-Stop and consult the user if any of the following occur:
-
-- **Step failures**: If any command fails, git operation errors, or tests fail
-- **Workflow failures**: If QA or release workflows show failed jobs
-- **Unexpected output**: If commands produce unclear or concerning results
-- **Version conflicts**: If version bumps don't match expected patterns
-- **Network issues**: If GitHub operations timeout or fail repeatedly
-
-**Your responsibilities**:
-- Validate each step succeeds before proceeding to the next
-- Monitor workflow status and halt on any failures
-- Provide clear progress updates throughout the process
-- Maintain clean git hygiene and proper branching
-- Use your judgment to assess when manual intervention is needed
-
-## Release Process
-
-Execute the following steps for target version `$ARGUMENTS`:
-
-### 1. Pre-Release Quality Check
-Run local quality assurance to catch issues early:
-```bash
-git status && git pull origin master
-hatch --env develop run linters
-hatch --env develop run testers
-hatch --env develop run docsgen
-```
-
-### 2. Release Branch Setup
-Determine release branch name from target version (e.g., `1.6` → `release-1.6`).
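A minimal sketch of that derivation, assuming `$ARGUMENTS` holds a bare `X.Y` version string:

```shell
# Derive the release branch name from a target version such as "1.6".
version='1.6'            # stands in for $ARGUMENTS
branch="release-$version"
echo "$branch"           # release-1.6
```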
-
-**If release branch exists** (for RC→final conversion):
-```bash
-git checkout release-$ARGUMENTS
-git pull origin release-$ARGUMENTS
-```
-
-**If creating new release branch**:
-```bash
-git checkout master && git pull origin master
-git checkout -b release-$ARGUMENTS
-```
-
-### 3. Version Management
-Set version to target release version:
-```bash
-hatch version $ARGUMENTS
-git commit -am "Version: $(hatch version)"
-```
-
-### 4. Changelog Generation
-```bash
-hatch --env develop run towncrier build --keep --version $(hatch version)
-git commit -am "Update changelog for v$(hatch version) release."
-```
-
-### 5. Quality Assurance Phase
-Push branch and monitor QA workflow:
-```bash
-# Use -u flag for new branches, omit for existing
-git push [-u] origin release-$ARGUMENTS
-```
-
-Workflow monitoring requirements:
-After pushing, you MUST ensure you monitor the correct QA workflow run:
-
-1. **Wait for workflow trigger**: Wait 10 seconds after pushing to allow GitHub to trigger the workflow
-2. **Verify correct workflow**: Use `gh run list --workflow=qa --limit=5` to list recent runs
-3. **Check timestamps**: Compare the workflow creation time with your push time using `date --utc`
-4. **Ensure fresh run**: Only monitor a workflow run that was created AFTER your push timestamp
-5. **If no new run appears**: Wait additional time and check again - do NOT assume an old completed run is your workflow
-
-Once you've identified the correct QA run ID:
-```bash
-gh run watch --interval 30 --compact
-```
-
-Do not proceed until workflow completes:
-- Monitor QA workflow with `gh run watch` using the correct run ID
-- Use `timeout: 300000` (5 minutes) parameter in Bash tool for monitoring commands
-- If command times out, immediately rerun `gh run watch` until completion
-- Only proceed to next step after seeing "✓ [workflow-name] completed with 'success'"
-- Stop if any jobs fail - consult user before proceeding
-
-### 6. Release Deployment
-**Verify QA passed before proceeding to release tag:**
-```bash
-git tag -m "Release v$(hatch version): <summary>." v$(hatch version)
-git push --tags
-```
-
-Release workflow monitoring requirements:
-After pushing the tag, you MUST ensure you monitor the correct release workflow run:
-
-1. **Wait for workflow trigger**: Wait 10 seconds after pushing tags to allow GitHub to trigger the release workflow
-2. **Verify correct workflow**: Use `gh run list --workflow=release --limit=5` to list recent runs
-3. **Check timestamps**: Compare the workflow creation time with your tag push time using `date --utc`
-4. **Ensure fresh run**: Only monitor a workflow run that was created AFTER your tag push timestamp
-5. **If no new run appears**: Wait additional time and check again - do NOT assume an old completed run is your workflow
-
-Once you've identified the correct release run ID:
-```bash
-gh run watch --interval 30 --compact
-```
-
-Do not proceed until workflow completes:
-- Monitor release workflow with `gh run watch` using the correct run ID
-- Use `timeout: 600000` (10 minutes) parameter in Bash tool for monitoring commands
-- If command times out, immediately rerun `gh run watch` until completion
-- Only proceed to next step after seeing "✓ [workflow-name] completed with 'success'"
-- Stop if any jobs fail - consult user before proceeding
-
-### 7. Post-Release Cleanup
-```bash
-git rm .auxiliary/data/towncrier/*.rst
-git commit -m "Clean up news fragments."
-git push origin release-$ARGUMENTS
-```
-
-### 8. Master Branch Integration
-Cherry-pick release commits back to master:
-```bash
-git checkout master && git pull origin master
-git cherry-pick <commit-hash>
-git cherry-pick <commit-hash>
-git push origin master
-```
-
-### 9. Next Development Cycle (Major/Minor Releases Only)
-Set up next development version:
-```bash
-hatch version minor,alpha
-git commit -am "Start of development for release $(hatch version | sed 's/a[0-9]*$//')."
-git tag -m "Start of development for release $(hatch version | sed 's/a[0-9]*$//')." "i$(hatch version | sed 's/a[0-9]*$//')"
-git push origin master --tags
-```
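The `sed` expression used above strips the trailing alpha segment from the version string, yielding the bare release number for commit and tag messages. A sketch with a placeholder version:

```shell
# "1.7a0" -> "1.7": remove a trailing "a<digits>" suffix.
version='1.7a0'          # placeholder; real value comes from `hatch version`
release=$(echo "$version" | sed 's/a[0-9]*$//')
echo "$release"          # 1.7
```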
-
-**Note**: Use `git log --oneline` to identify commit hashes for cherry-picking.
diff --git a/.auxiliary/configuration/claude/commands/cs-release-maintenance.md b/.auxiliary/configuration/claude/commands/cs-release-maintenance.md
deleted file mode 100644
index 8ea8282..0000000
--- a/.auxiliary/configuration/claude/commands/cs-release-maintenance.md
+++ /dev/null
@@ -1,236 +0,0 @@
----
-allowed-tools: Bash(git status), Bash(git pull:*), Bash(git checkout:*), Bash(git commit:*), Bash(git tag:*), Bash(git rm:*), Bash(git cherry-pick:*), Bash(git log:*), Bash(git branch:*), Bash(gh run list:*), Bash(gh run watch:*), Bash(hatch version:*), Bash(hatch --env develop run:*), Bash(echo:*), Bash(ls:*), Bash(grep:*), LS, Read
-description: Execute automated patch release with QA monitoring and master integration
----
-
-# Release Patch
-
-**NOTE: This is an experimental workflow! If anything seems unclear or missing,
-please stop for consultation with the user.**
-
-For execution of a fully-automated patch release.
-
-Below is a validated process to create patch releases with automated monitoring
-and clean integration back to master.
-
-Target release version: `$ARGUMENTS` (e.g., `1.24`, `2.3`)
-
-Verify exactly one target release version provided.
-
-Stop and consult if:
-- No target release version is provided
-- Multiple release versions provided (e.g., `1.6 foo bar`)
-- Release version format doesn't match `X.Y` pattern (e.g., `1.6.2`, `1.6a0`)
-
-## Context
-
-- Current git status: !`git status`
-- Current branch: !`git branch --show-current`
-- Current version: !`hatch version`
-- Recent commits: !`git log --oneline -10`
-- Available towncrier fragments: !`ls .auxiliary/data/towncrier/*.rst 2>/dev/null || echo "No fragments found"`
-- Target release branch status: !`git branch -r | grep release-$ARGUMENTS || echo "Release branch not found"`
-
-## Prerequisites
-
-Before running this command, ensure:
-- GitHub CLI (`gh`) is installed and authenticated
-- Release branch exists for the target version (e.g., `release-1.24` for version `1.24`)
-- Working directory is clean with no uncommitted changes
-- Towncrier news fragments are present for the patch changes
-
-## Process Summary
-
-Key functional areas of the process:
-
-1. **Branch Setup**: Checkout and update the appropriate release branch
-2. **Version Bump**: Increment to next patch version with `hatch version patch`
-3. **Update Changelog**: Run Towncrier to build patch changelog
-4. **QA Monitoring**: Push commits and monitor QA workflow with GitHub CLI
-5. **Tag Release**: Create signed git tag after QA passes
-6. **Release Monitoring**: Monitor release workflow deployment
-7. **Cleanup**: Remove news fragments and cherry-pick back to master
-
-## Safety Requirements
-
-Stop and consult the user if any of the following occur:
-
-- **Step failures**: If any command fails, git operation errors, or tests fail
-- **Workflow failures**: If QA or release workflows show failed jobs
-- **Version conflicts**: If patch version doesn't match expected patterns
-- **Branch issues**: If release branch doesn't exist or is in unexpected state
-- **Network issues**: If GitHub operations timeout or fail repeatedly
-
-**Your responsibilities**:
-- Validate each step succeeds before proceeding to the next
-- Monitor workflow status and halt on any failures
-- Provide clear progress updates throughout the process
-- Maintain clean git hygiene and proper branching
-- Use your judgment to assess when manual intervention is needed
-
-## Release Process
-
-Execute the following steps for target release version `$ARGUMENTS`:
-
-### 1. Pre-Release Quality Check
-Run local quality assurance to catch issues early:
-```bash
-git status && git pull origin master
-hatch --env develop run linters
-hatch --env develop run testers
-hatch --env develop run docsgen
-```
-
-### 2. Release Branch Setup
-Checkout the target release branch:
-```bash
-git checkout release-$ARGUMENTS
-git pull origin release-$ARGUMENTS
-```
-
-### 3. Patch Integration
-**Determine patch location and integrate if needed:**
-
-### 3.1. Identify Patch Commits
-Before cherry-picking, identify which commits contain actual patch fixes vs. maintenance:
-
-```bash
-git log --oneline master
-git log --graph --oneline master --since="1 month ago"
-# Show commits on master not on release branch
-git log --oneline release-$ARGUMENTS..master --since="1 month ago"
-```
-
-**IMPORTANT**
-- Do **not** cherry-pick commits which were previously cherry-picked onto the
- branch.
-- Look at the Towncrier news fragments to help you decide what to pick.
-
-**Patch commits** (always cherry-pick):
-- Bug fixes
-- Security patches
-- Critical functionality fixes
-
-**Maintenance commits** (evaluate case-by-case):
-- Template updates
-- Dependency bumps
-- Documentation changes
-
-Use `git show <commit>` to review each commit's content before deciding.
-
-**If patches were developed on master** (cherry-pick to release branch):
-```bash
-# Cherry-pick patch commits from master to release branch
-# Use git log --oneline master to identify relevant commit hashes
-git cherry-pick <commit-hash>
-git cherry-pick <commit-hash>
-# Repeat for all patch commits
-```
-
-**If patches were developed on release branch**: Skip this step - patches are already present.
-
-### 4. Pre-Release Validation
-Run linting to catch issues before formal release process:
-```bash
-hatch --env develop run linters
-```
-Stop if any linting errors - fix issues before proceeding.
-
-### 5. Version Management
-Increment to next patch version:
-```bash
-hatch version patch
-git commit -am "Version: $(hatch version)"
-```
-
-### 6. Changelog Generation
-```bash
-hatch --env develop run towncrier build --keep --version $(hatch version)
-git commit -am "Update changelog for v$(hatch version) patch release."
-```
-
-### 7. Quality Assurance Phase
-Push branch and monitor QA workflow:
-```bash
-git push origin release-$ARGUMENTS
-```
-
-Workflow monitoring requirements:
-After pushing, you MUST ensure you monitor the correct QA workflow run:
-
-1. **Wait for workflow trigger**: Wait 10 seconds after pushing to allow GitHub to trigger the workflow
-2. **Verify correct workflow**: Use `gh run list --workflow=qa --limit=5` to list recent runs
-3. **Check timestamps**: Compare the workflow creation time with your push time using `date --utc`
-4. **Ensure fresh run**: Only monitor a workflow run that was created AFTER your push timestamp
-5. **If no new run appears**: Wait additional time and check again - do NOT assume an old completed run is your workflow
-
-Once you've identified the correct QA run ID:
-```bash
-gh run watch --interval 30 --compact
-```
-
-Do not proceed until workflow completes:
-- Monitor QA workflow with `gh run watch` using the correct run ID
-- Use `timeout: 300000` (5 minutes) parameter in Bash tool for monitoring commands
-- If command times out, immediately rerun `gh run watch` until completion
-- Only proceed to next step after seeing "✓ [workflow-name] completed with 'success'"
-- Stop if any jobs fail - consult user before proceeding
-
-### 8. Release Deployment
-**Verify QA passed before proceeding to release tag:**
-```bash
-git tag -m "Release v$(hatch version) patch: <summary>." v$(hatch version)
-git push --tags
-```
-
-Release workflow monitoring requirements:
-After pushing the tag, you MUST ensure you monitor the correct release workflow run:
-
-1. **Wait for workflow trigger**: Wait 10 seconds after pushing tags to allow GitHub to trigger the release workflow
-2. **Verify correct workflow**: Use `gh run list --workflow=release --limit=5` to list recent runs
-3. **Check timestamps**: Compare the workflow creation time with your tag push time using `date --utc`
-4. **Ensure fresh run**: Only monitor a workflow run that was created AFTER your tag push timestamp
-5. **If no new run appears**: Wait additional time and check again - do NOT assume an old completed run is your workflow
-
-Once you've identified the correct release run ID:
-```bash
-gh run watch --interval 30 --compact
-```
-
-Do not proceed until workflow completes:
-- Monitor release workflow with `gh run watch` using the correct run ID
-- Use `timeout: 600000` (10 minutes) parameter in Bash tool for monitoring commands
-- If command times out, immediately rerun `gh run watch` until completion
-- Only proceed to next step after seeing "✓ [workflow-name] completed with 'success'"
-- Stop if any jobs fail - consult user before proceeding
-
-### 9. Post-Release Cleanup
-```bash
-git rm .auxiliary/data/towncrier/*.rst
-git commit -m "Clean up news fragments."
-git push origin release-$ARGUMENTS
-```
-
-### 10. Master Branch Integration
-Cherry-pick commits back to master based on patch development location:
-
-**If patches were developed on master**: Cherry-pick changelog and cleanup commits:
-```bash
-git checkout master && git pull origin master
-git cherry-pick
-git cherry-pick
-git push origin master
-```
-
-**If patches were developed on release branch**: Cherry-pick patch, changelog, and cleanup commits:
-```bash
-git checkout master && git pull origin master
-git cherry-pick
-git cherry-pick
-# Repeat for all patch commits
-git cherry-pick
-git cherry-pick
-git push origin master
-```
-
-**Note**: Use `git log --oneline` to identify commit hashes for cherry-picking.
diff --git a/.auxiliary/configuration/claude/commands/cs-update-command.md b/.auxiliary/configuration/claude/commands/cs-update-command.md
deleted file mode 100644
index 2c50f89..0000000
--- a/.auxiliary/configuration/claude/commands/cs-update-command.md
+++ /dev/null
@@ -1,96 +0,0 @@
----
-allowed-tools: [Read, Write, Edit, MultiEdit, LS, Glob, Grep]
-description: Update existing slash command with missing instructions or reinforced guidance
----
-
-# Update Slash Process
-
-Update an existing custom slash command to address missing instructions,
-reinforce guidance which LLMs are ignoring, add missing tool permissions, or
-make structural improvements.
-
-Target command and instructions: $ARGUMENTS
-
-Stop and consult if:
-- The target file doesn't exist or isn't a slash command
-- Major structural changes are requested that would fundamentally alter the command purpose
-- Changes conflict with established project patterns
-
-## Context
-
-- Command template: @.auxiliary/configuration/claude/miscellany/command-template.md
-- Project conventions: @.auxiliary/configuration/conventions.md
-
-## Prerequisites
-
-Before updating the command, ensure:
-- Clear understanding of what improvements are needed
-- Target file exists and is accessible
-- Any referenced files or patterns are available
-- Changes align with project conventions and existing process patterns
-
-## Process Summary
-
-Key functional areas:
-1. **Analysis**: Read current command and identify improvement areas
-2. **Content Updates**: Add missing instructions or reinforce existing guidance
-3. **Structure Review**: Consider organizational improvements when appropriate
-4. **Tone Refinement**: Ensure professional language without excessive emphasis
-5. **Validation**: Verify updates maintain command effectiveness
-
-## Safety Requirements
-
-Stop and consult the user if:
-- Process changes would break existing workflows or dependencies
-- Updates conflict with established project conventions
-- Structural modifications require significant rework of command logic
-
-## Execution
-
-Execute the following steps:
-
-### 1. Command Analysis
-Read and analyze the current command:
-- Review existing content, structure, and tool permissions
-- Identify areas needing improvement or reinforcement
-- Assess tone and language for professional standards
-- Note any missing instructions or unclear guidance
-
-### 2. Content Enhancement
-Apply requested improvements:
-- Add missing instructions where gaps are identified
-- Reinforce guidance that needs stronger emphasis
-- Remove excessive bold formatting or shouty language
-- Eliminate redundant repetition within sections
-- Ensure clear, actionable language throughout
-
-### 3. Structural Review
-Consider organizational improvements:
-- Evaluate section ordering and logical flow
-- Improve prerequisites or context sections if needed
-- Enhance command summary for clarity
-- Adjust safety requirements as appropriate
-- Ensure consistent formatting patterns
-
-### 4. Tool and Permission Updates
-Review and adjust technical aspects:
-- Verify allowed-tools are appropriate for updated functionality
-- Check that @-references and !-expansions are current
-- Ensure any `!` context commands have proper tool permissions to run (e.g., `Bash(ls:*)` for `ls` commands)
-- Ensure context section provides relevant dynamic information
-- Validate that command can execute with given permissions
-
-### 5. Professional Polish
-Apply formatting and tone standards:
-- Use professional headers without excessive emphasis
-- Maintain clear, direct language without redundancy
-- Ensure consistency with project conventions
-- Remove any attention-grabbing formatting that isn't necessary
-- Balance guidance strength with readability
-
-### 6. Validation and Summary
-Complete the update command:
-- Review updated content for completeness and clarity
-- Verify all requested improvements have been addressed
-- Ensure command maintains effectiveness while addressing issues
-- Provide succinct summary of changes made to the user
diff --git a/.auxiliary/configuration/claude/commands/cs-update-readme-rst.md b/.auxiliary/configuration/claude/commands/cs-update-readme-rst.md
deleted file mode 100644
index 69da725..0000000
--- a/.auxiliary/configuration/claude/commands/cs-update-readme-rst.md
+++ /dev/null
@@ -1,103 +0,0 @@
----
-allowed-tools: [Read, Edit, MultiEdit, LS, Glob, Grep, Bash(hatch --env develop run:*), Bash(git status), Bash(ls:*), Bash(find:*), WebFetch]
-description: Analyze current project state and refresh manually-maintained sections of README.rst while preserving template content
----
-
-# Update README Documentation
-
-Analyze the current project state and refresh the manually-maintained sections
-of README.rst files while preserving auto-generated template content and
-ensuring accuracy with actual project capabilities.
-
-User input: $ARGUMENTS
-
-## Context
-
-- Current git status: !`git status --porcelain`
-- Project structure: !`ls -la`
-- Current README: @README.rst
-- Project metadata: @pyproject.toml
-- Product requirements: @documentation/prd.rst
-- Architecture overview: @documentation/architecture/filesystem.rst
-
-## Prerequisites
-
-Before updating README documentation, ensure:
-- Current README.rst exists and is accessible
-- Understanding of project's actual capabilities and features
-- Access to project metadata and configuration files
-
-## Process Summary
-
-Key functional areas:
-1. **Content Analysis**: Examine current README and identify TODO sections needing updates
-2. **Project Assessment**: Analyze actual capabilities from code, CLI, and configuration
-3. **Content Generation**: Create compelling descriptions, features, and examples based on real functionality
-4. **Validation**: Ensure all claims and examples match actual project capabilities
-
-## Safety Requirements
-
-Stop and consult the user if:
-- README.rst cannot be read or is missing critical structure
-- Template boundaries are unclear or may be damaged
-- Project capabilities cannot be determined from available sources
-- Generated examples cannot be validated against actual implementation
-- Significant structural changes to README are required beyond content updates
-
-All template-rendered sections must be preserved without modification; these
-include: badges, installation, contribution, flair
-
-
-## Execution
-
-Execute the following steps:
-
-### 1. README Analysis
-Read and analyze the current README structure:
-- Examine existing README.rst for TODO markers and outdated content
-- Identify template-generated sections that must be preserved
-- Map sections that need manual content updates
-- Note existing manual content that should be retained
-
-### 2. Project Capability Assessment
-Analyze the actual project functionality:
-- Extract project metadata from pyproject.toml (name, description, dependencies)
-- Read PRD document if available for project goals and features
-- Examine source code structure to understand API capabilities
-- Test CLI functionality if enabled to document actual usage patterns
-- Review configuration files and scripts for additional capabilities
-
-### 3. Content Generation Strategy
-Plan content updates based on project analysis:
-- Draft compelling project description replacing TODO placeholders
-- Identify key features based on actual implementation
-- Plan realistic examples demonstrating current functionality
-- Consider additional sections (Use Cases, Motivation, Configuration) appropriate for project complexity
-- Ensure content accuracy and professional tone
-
-### 4. README Content Updates
-Update manual sections while preserving template content:
-- Replace ".. todo:: Provide project description" with accurate description
-- Add or update "Key Features ⭐" section with bullet points of actual capabilities
-- Generate "Examples 💡" section with working CLI/API usage examples
-- Add relevant sections like "Use Cases", "Motivation", or "Configuration" as appropriate
-- Preserve all template-generated sections (badges, installation, contribution, flair)
-
-### 5. Content Validation
-Verify accuracy of all updated content:
-- Test all code examples for correctness with current codebase
-- Verify feature claims are supported by actual implementation
-- Check that installation instructions match project configuration
-- Ensure RST formatting is correct and consistent
-- Validate that README length is appropriate for project complexity
-
-### 6. Final Review
-Complete final validation and formatting:
-- Review entire README for consistency and professional presentation
-- Ensure all TODO markers have been appropriately addressed
-- Verify template boundaries are intact and respected
-- Confirm examples are executable and accurate
-- Check that content maintains engaging tone while being factually correct
-
-### 7. Summarize Updates
-Provide concise summary of updates to the user.
diff --git a/.auxiliary/configuration/claude/commands/validate-custom-slash.md b/.auxiliary/configuration/claude/commands/validate-custom-slash.md
deleted file mode 100644
index b6bffae..0000000
--- a/.auxiliary/configuration/claude/commands/validate-custom-slash.md
+++ /dev/null
@@ -1,41 +0,0 @@
----
-allowed-tools: Bash(git status), Bash(git branch:*), Bash(git log:*), Bash(hatch version:*), Bash(echo:*), Bash(ls:*), Bash(pwd), LS, Read
-description: Validate custom slash command functionality with context and permissions
----
-
-# Validate Custom Slash Command
-
-Test script to validate custom slash command functionality, permissions, and context interpolation.
-
-Test argument: `$ARGUMENTS`
-
-## Context
-
-- Current directory: !`pwd`
-- Current git status: !`git status --porcelain`
-- Current branch: !`git branch --show-current`
-- Current version: !`hatch version`
-- Recent commits: !`git log --oneline -5`
-- Template files: !`ls template/.auxiliary/configuration/claude/commands/`
-
-## Validation Tasks
-
-1. **Report the test argument**: Look at the "Test argument:" line above and tell me what value you see there
-2. **Test basic git commands**: Run `git status` and `git branch --show-current`
-3. **Test hatch command**: Run `hatch version`
-4. **Test file operations**: Use LS tool to list current directory contents
-5. **Test restricted command**: Attempt `git push` (should be blocked and require approval)
-
-## Expected Results
-
-- Context should be populated with current state
-- Allowed commands should execute successfully
-- `git push` should be blocked
-
-## Your Task
-
-Execute the validation tasks above and provide a summary report including:
-- The interpolated argument value you see on the "Test argument:" line
-- Results of each allowed command
-- Confirmation that restricted commands are properly blocked
-- Any observations about the command execution experience
diff --git a/.auxiliary/configuration/claude/.gitignore b/.auxiliary/configuration/coders/claude/.gitignore
similarity index 100%
rename from .auxiliary/configuration/claude/.gitignore
rename to .auxiliary/configuration/coders/claude/.gitignore
diff --git a/.auxiliary/configuration/coders/claude/agents/.gitignore b/.auxiliary/configuration/coders/claude/agents/.gitignore
new file mode 100644
index 0000000..c96a04f
--- /dev/null
+++ b/.auxiliary/configuration/coders/claude/agents/.gitignore
@@ -0,0 +1,2 @@
+*
+!.gitignore
\ No newline at end of file
diff --git a/.auxiliary/configuration/coders/claude/commands/.gitignore b/.auxiliary/configuration/coders/claude/commands/.gitignore
new file mode 100644
index 0000000..c96a04f
--- /dev/null
+++ b/.auxiliary/configuration/coders/claude/commands/.gitignore
@@ -0,0 +1,2 @@
+*
+!.gitignore
\ No newline at end of file
diff --git a/.auxiliary/configuration/coders/claude/miscellany/bash-tool-bypass b/.auxiliary/configuration/coders/claude/miscellany/bash-tool-bypass
new file mode 100755
index 0000000..223ea01
--- /dev/null
+++ b/.auxiliary/configuration/coders/claude/miscellany/bash-tool-bypass
@@ -0,0 +1,52 @@
+#!/usr/bin/env python3
+"""
+Command wrapper for Claude Code web environments.
+
+This script wraps command execution via Python subprocess to bypass
+Bash tool permission restrictions in Claude Code web environments.
+
+Usage:
+ bash-tool-bypass [arguments...]
+
+Examples:
+ bash-tool-bypass gh --version
+ bash-tool-bypass gh pr view 1
+ bash-tool-bypass gh pr list --limit 5
+ bash-tool-bypass gh issue view 42 --json title,state,author
+ bash-tool-bypass gh repo view owner/repo
+ bash-tool-bypass some-other-restricted-command --flag value
+
+Notes:
+ - This wrapper is designed to bypass specific command restrictions in
+ Claude Code
+ - Common use case is running 'gh' commands when Bash tool blocks them
+ directly
+ - Any command accessible in PATH can be executed through this wrapper
+ - Authentication/permissions still apply to the wrapped command itself
+"""
+
+import subprocess
+import sys
+
+# Minimum required argument count (script name + command)
+MIN_ARGS = 2
+
+
+def main():
+ """Execute command via subprocess and exit with its return code."""
+ if len(sys.argv) < MIN_ARGS:
+ print(__doc__)
+ sys.exit(1)
+
+ # Build command with all arguments
+ cmd = sys.argv[1:]
+
+ # Execute command (intentionally passes through untrusted input)
+ result = subprocess.run(cmd, check=False) # noqa: S603
+
+ # Exit with command's return code
+ sys.exit(result.returncode)
+
+
+if __name__ == '__main__':
+ main()
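The pass-through contract the wrapper relies on (the child's exit code becomes the wrapper's exit code) can be checked in isolation; a minimal sketch, assuming a POSIX environment where `true` and `false` are available:

```python
import subprocess

def run_wrapped(cmd):
    """Mirror the wrapper: run cmd and return its exit code unchanged."""
    result = subprocess.run(cmd, check=False)
    return result.returncode

# Exit codes propagate exactly as the wrapped command reports them.
assert run_wrapped(['true']) == 0
assert run_wrapped(['false']) == 1
```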
diff --git a/.auxiliary/configuration/claude/miscellany/command-template.md b/.auxiliary/configuration/coders/claude/miscellany/command-template.md
similarity index 95%
rename from .auxiliary/configuration/claude/miscellany/command-template.md
rename to .auxiliary/configuration/coders/claude/miscellany/command-template.md
index 29acd2f..2db83c6 100644
--- a/.auxiliary/configuration/claude/miscellany/command-template.md
+++ b/.auxiliary/configuration/coders/claude/miscellany/command-template.md
@@ -1,5 +1,5 @@
---
-allowed-tools: [Tool1, Tool2, Tool3]
+allowed-tools: Tool1, Tool2, Tool3
description: Brief description of what this command does
---
diff --git a/.auxiliary/scripts/claude/post-edit-linter b/.auxiliary/configuration/coders/claude/scripts/post-edit-linter
similarity index 72%
rename from .auxiliary/scripts/claude/post-edit-linter
rename to .auxiliary/configuration/coders/claude/scripts/post-edit-linter
index 2237628..78b38d6 100755
--- a/.auxiliary/scripts/claude/post-edit-linter
+++ b/.auxiliary/configuration/coders/claude/scripts/post-edit-linter
@@ -14,6 +14,10 @@ import sys
def main( ):
# event = _acquire_event_data( )
+ if not _is_command_available( 'hatch' ):
+ raise SystemExit( 0 )
+ if not _is_hatch_env_available( 'develop' ):
+ raise SystemExit( 0 )
try:
result = subprocess.run(
[ 'hatch', '--env', 'develop', 'run', 'linters' ], # noqa: S607
@@ -47,7 +51,7 @@ def _acquire_event_data( ):
def _emit_decision_json( decision, reason ):
- ''' Output JSON decision for Claude Code hook system. '''
+ ''' Outputs JSON decision for Claude Code hook system. '''
response = { "decision": decision, "reason": reason }
print( json.dumps( response ) )
raise SystemExit( 2 )
@@ -58,6 +62,27 @@ def _error( message ):
raise SystemExit( 2 )
+def _is_command_available( command ):
+ ''' Checks if a command is available in PATH. '''
+ try:
+ result = subprocess.run( # noqa: S603
+ [ 'which', command ], # noqa: S607
+ capture_output = True, check = False, text = True, timeout = 5 )
+ except Exception: return False
+ return result.returncode == 0
+
+
+def _is_hatch_env_available( env_name ):
+ ''' Checks if a specific Hatch environment exists. '''
+ try:
+ result = subprocess.run(
+ [ 'hatch', 'env', 'show' ], # noqa: S607
+ capture_output = True, check = False, text = True, timeout = 10 )
+ except Exception: return False
+ if result.returncode != 0: return False
+ return env_name in result.stdout
+
+
def _reactor_failure( message ):
    print( f"Claude Code Hook Failure: {message}", file = sys.stderr )
raise SystemExit( 1 )
diff --git a/.auxiliary/configuration/coders/claude/scripts/pre-bash-git-commit-check b/.auxiliary/configuration/coders/claude/scripts/pre-bash-git-commit-check
new file mode 100755
index 0000000..5ccb7e2
--- /dev/null
+++ b/.auxiliary/configuration/coders/claude/scripts/pre-bash-git-commit-check
@@ -0,0 +1,123 @@
+#!/usr/bin/env python3
+# vim: set filetype=python fileencoding=utf-8:
+# -*- coding: utf-8 -*-
+
+''' Claude Code hook to prevent git commits when linters or tests fail. '''
+
+
+import json
+import shlex
+import subprocess
+import sys
+
+
+_GIT_COMMIT_MIN_TOKENS = 2
+
+
+def main( ):
+ event = _acquire_event_data( )
+ command_line = _extract_command( event )
+ commands = _partition_command_line( command_line )
+ for command in commands:
+ _check_git_commit_command( command )
+ raise SystemExit( 0 )
+
+
+def _acquire_event_data( ):
+ try: return json.load( sys.stdin )
+ except json.JSONDecodeError:
+ _reactor_failure( "Invalid event data." )
+
+
+def _check_git_commit_command( tokens ):
+    ''' Checks for git commit commands and validates linters/tests. '''
+    if not _is_git_commit_command( tokens ): return
+    checks = ( ( 'linters', 120 ), ( 'testers', 300 ), ( 'vulture', 120 ) )
+    for script, timeout in checks:
+        try:
+            result = subprocess.run(
+                [ 'hatch', '--env', 'develop', 'run', script ], # noqa: S607
+                capture_output = True, text = True,
+                timeout = timeout, check = False )
+        except (
+            subprocess.TimeoutExpired,
+            subprocess.CalledProcessError,
+            FileNotFoundError
+        ): _error_with_divine_message( )
+        else:
+            if result.returncode != 0: _error_with_divine_message( )
+
+
+def _error_with_divine_message( ):
+ ''' Displays divine admonition and exits. '''
+ message = (
+ "The Large Language Divinity 🌩️🤖🌩️ in the Celestial Data Center hath "
+ "commanded that:\n"
+ "* Thy code shalt pass all lints before thy commit.\n"
+ " Run: hatch --env develop run linters\n"
+ " Run: hatch --env develop run vulture\n"
+ "* Thy code shalt pass all tests before thy commit.\n"
+ " Run: hatch --env develop run testers\n\n"
+ "(If you are in the middle of a large refactor, consider commenting "
+ "out the tests and adding a reminder note in the .auxiliary/notes "
+ "directory.)"
+ )
+ print( message, file = sys.stderr )
+ raise SystemExit( 2 )
+
+
+def _extract_command( event_data ):
+    ''' Extracts command from event data; exits if tool is not Bash. '''
+ tool_name = event_data.get( 'tool_name', '' )
+ if tool_name != 'Bash': raise SystemExit( 0 )
+ tool_input = event_data.get( 'tool_input', { } )
+ return tool_input.get( 'command', '' )
+
+
+def _is_git_commit_command( tokens ):
+ ''' Checks if tokens represent a git commit command. '''
+ if len( tokens ) < _GIT_COMMIT_MIN_TOKENS:
+ return False
+ return tokens[ 0 ] == 'git' and tokens[ 1 ] == 'commit'
+
+
+_splitters = frozenset( ( ';', '&', '|', '&&', '||' ) )
+def _partition_command_line( command_line ):
+ tokens = shlex.split( command_line )
+ commands = [ ]
+ command_tokens = [ ]
+ for token in tokens:
+ if token in _splitters:
+ commands.append( command_tokens )
+ command_tokens = [ ]
+ continue
+ command_tokens.append( token )
+ if command_tokens: commands.append( command_tokens )
+ return commands
+
+
+def _reactor_failure( message ):
+ print( f"Claude Code Hook Failure: {message}", file = sys.stderr )
+ raise SystemExit( 1 )
+
+
+if __name__ == '__main__': main()
diff --git a/.auxiliary/scripts/claude/pre-bash-python-check b/.auxiliary/configuration/coders/claude/scripts/pre-bash-python-check
similarity index 100%
rename from .auxiliary/scripts/claude/pre-bash-python-check
rename to .auxiliary/configuration/coders/claude/scripts/pre-bash-python-check
diff --git a/.auxiliary/configuration/claude/settings.json b/.auxiliary/configuration/coders/claude/settings.json
similarity index 71%
rename from .auxiliary/configuration/claude/settings.json
rename to .auxiliary/configuration/coders/claude/settings.json
index 2b81e86..6019dc8 100644
--- a/.auxiliary/configuration/claude/settings.json
+++ b/.auxiliary/configuration/coders/claude/settings.json
@@ -1,5 +1,7 @@
{
"env": {
+ "BASH_DEFAULT_TIMEOUT_MS": 1800000,
+ "BASH_MAX_TIMEOUT_MS": 1800000,
"CLAUDE_BASH_MAINTAIN_PROJECT_WORKING_DIR": 1,
"CLAUDE_CODE_DISABLE_TERMINAL_TITLE": 1,
"DISABLE_NON_ESSENTIAL_MODEL_CALLS": 1
@@ -11,19 +13,24 @@
"hooks": [
{
"type": "command",
- "command": ".auxiliary/scripts/claude/pre-bash-python-check",
+ "command": ".claude/scripts/pre-bash-python-check",
"timeout": 10
+ },
+ {
+ "type": "command",
+ "command": ".claude/scripts/pre-bash-git-commit-check",
+ "timeout": 300
}
]
}
],
"PostToolUse": [
{
- "matcher": "Edit|MultiEdit|Write|mcp__text-editor__edit_text_file_contents",
+ "matcher": "Edit|MultiEdit|Write",
"hooks": [
{
"type": "command",
- "command": ".auxiliary/scripts/claude/post-edit-linter",
+ "command": ".claude/scripts/post-edit-linter",
"timeout": 60
}
]
@@ -32,6 +39,18 @@
},
"permissions": {
"auto_allow": [
+ "mcp__context7__get-library-docs",
+ "mcp__context7__resolve-library-id",
+ "mcp__librovore__query_content",
+ "mcp__librovore__query_inventory",
+ "mcp__pyright__definition",
+ "mcp__pyright__diagnostics",
+ "mcp__pyright__edit_file",
+ "mcp__pyright__hover",
+ "mcp__pyright__references",
+ "mcp__pyright__rename_symbol",
+ "Bash(hatch run *)",
+ "Bash(hatch --env develop run *)",
"Bash(awk *)",
"Bash(cat *)",
"Bash(cut *)",
@@ -61,10 +80,6 @@
"Bash(git show *)",
"Bash(git status)",
"Bash(grep *)",
- "Bash(hatch run python *)",
- "Bash(hatch --env develop run docsgen)",
- "Bash(hatch --env develop run linters)",
- "Bash(hatch --env develop run testers)",
"Bash(head *)",
"Bash(ls *)",
"Bash(ps *)",
@@ -75,18 +90,11 @@
"Bash(tail *)",
"Bash(uniq *)",
"Bash(wc *)",
- "Bash(which *)",
- "mcp__context7__get-library-docs",
- "mcp__context7__resolve-library-id",
- "mcp__pyright__definition",
- "mcp__pyright__diagnostics",
- "mcp__pyright__hover",
- "mcp__pyright__references",
- "mcp__ruff__definition",
- "mcp__ruff__diagnostics",
- "mcp__ruff__hover",
- "mcp__ruff__references",
- "mcp__text-editor__get_text_file_contents"
+ "Bash(which *)"
]
+ },
+ "sandbox": {
+ "enabled": false,
+ "autoAllowBashIfSandboxed": true
}
}
diff --git a/.auxiliary/configuration/coders/gemini/commands/.gitignore b/.auxiliary/configuration/coders/gemini/commands/.gitignore
new file mode 100644
index 0000000..c96a04f
--- /dev/null
+++ b/.auxiliary/configuration/coders/gemini/commands/.gitignore
@@ -0,0 +1,2 @@
+*
+!.gitignore
\ No newline at end of file
diff --git a/.auxiliary/configuration/coders/gemini/miscellany/command-template.md b/.auxiliary/configuration/coders/gemini/miscellany/command-template.md
new file mode 100644
index 0000000..4cc453b
--- /dev/null
+++ b/.auxiliary/configuration/coders/gemini/miscellany/command-template.md
@@ -0,0 +1,43 @@
+
+# Process Title
+
+Brief introductory paragraph explaining the purpose.
+
+Target/input description: {{args}}
+
+## Context
+
+- Current state checks, if applicable: !{command1}
+- Environment info, if applicable: !{command2}
+- Relevant data, if applicable: !{command3}
+
+## Prerequisites
+
+Before running this process, ensure:
+- Prerequisite 1
+- Prerequisite 2
+- @-references to relevant guides if applicable
+
+## Process Summary
+
+Key functional areas:
+1. **Phase 1**: Description
+2. **Phase 2**: Description
+3. **Phase 3**: Description
+
+## Safety Requirements
+
+Stop and consult the user if:
+- List validation conditions
+- Error conditions that require user input
+- Unexpected situations
+
+## Execution
+
+Execute the following steps:
+
+### 1. Step Name
+Description of what this step does.
+
+### 2. Step Name
+More steps as needed.
diff --git a/.auxiliary/configuration/coders/gemini/settings.json b/.auxiliary/configuration/coders/gemini/settings.json
new file mode 100644
index 0000000..30a220b
--- /dev/null
+++ b/.auxiliary/configuration/coders/gemini/settings.json
@@ -0,0 +1,117 @@
+{
+ "ui": {
+ "showLineNumbers": true
+ },
+ "tools": {
+ "autoAccept": true,
+ "core": [
+ "mcp__context7__resolve-library-id",
+ "mcp__context7__get-library-docs",
+ "mcp__librovore__query_content",
+ "mcp__librovore__query_inventory",
+ "mcp__pyright__definition",
+ "mcp__pyright__diagnostics",
+ "mcp__pyright__hover",
+ "mcp__pyright__references",
+ "mcp__pyright__rename_symbol",
+ "edit",
+ "glob",
+ "google_web_search",
+ "list_directory",
+ "read_file",
+ "replace",
+ "run_shell_command",
+ "save_memory",
+ "search_file_content",
+ "web_fetch",
+ "write_file",
+ "write_todos"
+ ],
+ "allowed": [
+ "mcp__context7__resolve-library-id",
+ "mcp__context7__get-library-docs",
+ "mcp__librovore__query_content",
+ "mcp__librovore__query_inventory",
+ "mcp__pyright__definition",
+ "mcp__pyright__diagnostics",
+ "mcp__pyright__hover",
+ "mcp__pyright__references",
+ "mcp__pyright__rename_symbol",
+ "edit",
+ "glob",
+ "google_web_search",
+ "list_directory",
+ "read_file",
+ "replace",
+ "run_shell_command(hatch run)",
+ "run_shell_command(hatch --env develop run)",
+ "run_shell_command(awk)",
+ "run_shell_command(cat)",
+ "run_shell_command(cut)",
+ "run_shell_command(df)",
+ "run_shell_command(du)",
+ "run_shell_command(echo)",
+ "run_shell_command(file)",
+ "run_shell_command(find)",
+ "run_shell_command(gh browse)",
+ "run_shell_command(gh issue list)",
+ "run_shell_command(gh issue view)",
+ "run_shell_command(gh pr checks)",
+ "run_shell_command(gh pr list)",
+ "run_shell_command(gh pr view)",
+ "run_shell_command(gh release list)",
+ "run_shell_command(gh release view)",
+ "run_shell_command(gh repo list)",
+ "run_shell_command(gh repo view)",
+ "run_shell_command(gh run list)",
+ "run_shell_command(gh run view)",
+ "run_shell_command(gh run watch)",
+ "run_shell_command(gh status)",
+ "run_shell_command(git add)",
+ "run_shell_command(git diff)",
+ "run_shell_command(git log)",
+ "run_shell_command(git show)",
+ "run_shell_command(git status)",
+ "run_shell_command(grep)",
+ "run_shell_command(head)",
+ "run_shell_command(ls)",
+ "run_shell_command(ps)",
+ "run_shell_command(pwd)",
+ "run_shell_command(rg)",
+ "run_shell_command(sed)",
+ "run_shell_command(sort)",
+ "run_shell_command(tail)",
+ "run_shell_command(uniq)",
+ "run_shell_command(wc)",
+ "run_shell_command(which)",
+ "save_memory",
+ "search_file_content",
+ "web_fetch",
+ "write_file",
+ "write_todos"
+ ]
+ },
+ "general": {
+ "checkpointing": {
+ "enabled": true
+ }
+ },
+ "mcpServers": {
+ "pyright": {
+ "command": "mcp-language-server",
+ "args": [
+ "--lsp", "pyright-langserver", "--workspace", ".",
+ "--", "--stdio"
+ ],
+ "excludeTools": [ "edit_file" ]
+ },
+ "context7": {
+ "command": "npx",
+ "args": [ "-y", "@upstash/context7-mcp" ]
+ },
+ "librovore": {
+ "command": "uvx",
+ "args": [ "librovore", "serve" ]
+ }
+ }
+}
diff --git a/.auxiliary/configuration/coders/opencode/agent/.gitignore b/.auxiliary/configuration/coders/opencode/agent/.gitignore
new file mode 100644
index 0000000..d6b7ef3
--- /dev/null
+++ b/.auxiliary/configuration/coders/opencode/agent/.gitignore
@@ -0,0 +1,2 @@
+*
+!.gitignore
diff --git a/.auxiliary/configuration/coders/opencode/command/.gitignore b/.auxiliary/configuration/coders/opencode/command/.gitignore
new file mode 100644
index 0000000..c96a04f
--- /dev/null
+++ b/.auxiliary/configuration/coders/opencode/command/.gitignore
@@ -0,0 +1,2 @@
+*
+!.gitignore
\ No newline at end of file
diff --git a/.auxiliary/configuration/coders/opencode/plugin/.gitignore b/.auxiliary/configuration/coders/opencode/plugin/.gitignore
new file mode 100644
index 0000000..5ce0d1a
--- /dev/null
+++ b/.auxiliary/configuration/coders/opencode/plugin/.gitignore
@@ -0,0 +1,13 @@
+# Node.js dependencies
+node_modules/
+npm-debug.log*
+yarn-debug.log*
+yarn-error.log*
+
+# TypeScript build outputs
+dist/
+*.tsbuildinfo
+
+# Bun
+bun.lockb
+.bun-debug.log
\ No newline at end of file
diff --git a/.auxiliary/configuration/coders/opencode/plugin/README.md b/.auxiliary/configuration/coders/opencode/plugin/README.md
new file mode 100644
index 0000000..e47f8ed
--- /dev/null
+++ b/.auxiliary/configuration/coders/opencode/plugin/README.md
@@ -0,0 +1,109 @@
+# Opencode Plugins for Quality Assurance
+
+This directory contains Opencode plugins that provide quality assurance and development workflow enforcement, ported from Claude Code hooks.
+
+## Plugins
+
+### ✅ 1. `post-edit-linter.js` (WORKING)
+- **Purpose**: Runs linters after file updates
+- **Event**: `tool.execute.after` (for the `edit` tool)
+- **Behavior**:
+  - Checks whether the `hatch` command is available
+  - Checks whether the `develop` Hatch environment exists
+  - Runs `hatch --env develop run linters`
+  - Throws an error with truncated output (50 lines max) if linters fail
+  - Exits early if preconditions are not met (hatch unavailable)
+  - **Note**: Uses `tool.execute.after`, not `file.edited` (LLM-initiated edits don't trigger `file.edited`)
+
+### ⚠️ 2. `git-commit-guard.js-disabled` (DISABLED - Opencode bash tool limitation)
+- **Purpose**: Would prevent git commits when linters or tests fail
+- **Status**: **DISABLED** - Opencode's bash tool doesn't pass the command in `input.args.command`
+- **Issue**: Plugin intercepts `tool.execute.before`, but `input.args` is empty for the bash tool
+- **Original intent**: Port of the Claude Code hook `pre-bash-git-commit-check`
+
+### ⚠️ 3. `python-environment-guard.js-disabled` (DISABLED - Opencode bash tool limitation)
+- **Purpose**: Would detect improper Python usage in Bash commands
+- **Status**: **DISABLED** - Opencode's bash tool doesn't pass the command in `input.args.command`
+- **Issue**: Plugin intercepts `tool.execute.before`, but `input.args` is empty for the bash tool
+- **Original intent**: Port of the Claude Code hook `pre-bash-python-check`
+
+## Installation for Downstream Projects
+
+When this template is copied to a downstream project:
+
+1. **Navigate to the plugin directory**:
+ ```bash
+ cd .auxiliary/configuration/coders/opencode/plugin
+ ```
+
+2. **Install dependencies**:
+ ```bash
+ npm install
+ ```
+
+3. **Ensure symlink exists**:
+ ```bash
+ # From project root
+ ln -sf .auxiliary/configuration/coders/opencode .opencode
+ ```
+
+4. **Verify plugin loading**:
+ Opencode should automatically load plugins from `.opencode/plugin/`
+
+## Dependencies
+
+- `shlex`: Shell command tokenization (JavaScript port of Python's `shlex` module); used only by the disabled plugins
+- `bun`: Runtime (provided by Opencode)
+
+## Porting Notes
+
+These plugins are ports of Claude Code hooks with varying success:
+
+| Claude Code Hook | Opencode Plugin | Status | Key Changes |
+|-----------------|----------------|--------|-------------|
+| `post-edit-linter` | `post-edit-linter.js` | ✅ **WORKING** | Python → JavaScript, `subprocess` → Bun shell API, uses `tool.execute.after` not `file.edited` |
+| `pre-bash-git-commit-check` | `git-commit-guard.js-disabled` | ⚠️ **DISABLED** | Tool name: `Bash` → `bash`, uses npm `shlex` package. **Issue**: Opencode bash tool doesn't pass command in `input.args.command` |
+| `pre-bash-python-check` | `python-environment-guard.js-disabled` | ⚠️ **DISABLED** | Same parsing logic with `shlex`, exact error messages. **Issue**: Opencode bash tool doesn't pass command in `input.args.command` |
+
+## Critical Discovery
+
+**Opencode's bash tool limitation**: During testing, we discovered that Opencode's bash tool does not pass the command string in `input.args.command` (or in any `input.args` field); `input.args` arrives as an empty object (`{}`). This prevents plugins from intercepting and analyzing bash commands.
+
+**Working solution**: Only `post-edit-linter.js` works because it uses `tool.execute.after` for the `edit` tool, where file information is available in `output.metadata.filediff.file`.
+
+## Error Messages
+
+All error messages match the original Claude Code hooks exactly, including:
+- Linter output truncation to 50 lines
+- "Divine admonition" for git commit blocking
+- Warning messages for Python usage
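The 50-line truncation can be sketched independently of the plugin runtime (illustrative only; the shipped plugin's notice text differs slightly):

```javascript
// Illustrative version of the output truncation: keep at most `linesMax`
// lines and append a notice counting what was omitted.
function truncateOutput(output, linesMax = 50) {
  const lines = output.split("\n");
  if (lines.length <= linesMax) return output;
  const shown = lines.slice(0, linesMax);
  shown.push(`[OUTPUT TRUNCATED: ${lines.length - linesMax} additional lines omitted.]`);
  return shown.join("\n");
}
```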
+
+## Testing
+
+To test the plugins:
+
+1. **File edit test**: Edit a Python file and verify that the linters run
+2. **Git commit test** (requires re-enabling `git-commit-guard`): Try `git commit -m "test"` and verify that the checks run
+3. **Python usage test** (requires re-enabling `python-environment-guard`): Try `python -c "print('test')"` and verify that the warning appears
+
+## Troubleshooting
+
+**Plugins not loading**:
+- Verify `.opencode` symlink points to `.auxiliary/configuration/coders/opencode`
+- Check Opencode version supports plugin API
+- Ensure dependencies are installed (`npm install`)
+
+**Command not found errors**:
+- Verify `hatch` is installed and in PATH
+- Check `develop` Hatch environment exists: `hatch env show`
+
+**Timeout issues**:
+- Timeouts match Python hooks (60s, 120s, 300s)
+- Uses `Promise.race` with `setTimeout` since Bun shell lacks native timeout
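The timeout pattern looks roughly like this (an illustrative sketch; the plugins wrap a Bun shell command promise the same way):

```javascript
// Race a promise against a timer, since the shell API here has no native
// timeout. If the timer wins, the result is a rejection.
function withTimeout(promise, timeoutMs) {
  const timer = new Promise((_, reject) =>
    setTimeout(() => reject(new Error(`Command timed out after ${timeoutMs}ms`)), timeoutMs)
  );
  return Promise.race([promise, timer]);
}
```

Note that the losing timer still fires after the race settles; a longer-lived process would capture the timer ID and call `clearTimeout`.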
+
+## Source Code
+
+Original Claude Code hooks in `template/.auxiliary/configuration/coders/claude/scripts/`:
+- `post-edit-linter`
+- `pre-bash-git-commit-check`
+- `pre-bash-python-check`
\ No newline at end of file
diff --git a/.auxiliary/configuration/coders/opencode/plugin/git-commit-guard.js-disabled b/.auxiliary/configuration/coders/opencode/plugin/git-commit-guard.js-disabled
new file mode 100644
index 0000000..bd9e381
--- /dev/null
+++ b/.auxiliary/configuration/coders/opencode/plugin/git-commit-guard.js-disabled
@@ -0,0 +1,195 @@
+/**
+ * Opencode plugin to prevent git commits when linters or tests fail.
+ * Port of Claude Code hook: template/.auxiliary/configuration/coders/claude/scripts/pre-bash-git-commit-check
+ */
+import { split } from 'shlex';
+
+export const GitCommitGuard = async ({ project, client, $, directory, worktree }) => {
+ const GIT_COMMIT_MIN_TOKENS = 2;
+ const SPLITTERS = new Set([';', '&', '|', '&&', '||']);
+
+ /**
+ * Checks if a command is available in PATH.
+ */
+ async function isCommandAvailable(command) {
+ try {
+ const result = await $`which ${command}`.nothrow().quiet();
+ return result.exitCode === 0;
+ } catch {
+ return false;
+ }
+ }
+
+ /**
+ * Checks if a specific Hatch environment exists.
+ */
+ async function isHatchEnvAvailable(envName) {
+ try {
+ const result = await $`hatch env show`.nothrow().quiet();
+ if (result.exitCode !== 0) return false;
+ return result.stdout.toString().includes(envName);
+ } catch {
+ return false;
+ }
+ }
+
+ /**
+ * Runs a command with timeout using Promise.race.
+ */
+ async function runCommandWithTimeout(command, timeoutMs) {
+ const timeoutPromise = new Promise((_, reject) => {
+ setTimeout(() => reject(new Error(`Command timed out after ${timeoutMs}ms`)), timeoutMs);
+ });
+
+ try {
+ const commandPromise = (async () => {
+ try {
+ const result = await $`sh -c "${command}"`.nothrow().quiet();
+ return {
+ exitCode: result.exitCode,
+ stdout: result.stdout?.toString() || '',
+ stderr: result.stderr?.toString() || ''
+ };
+ } catch (error) {
+ return {
+ exitCode: error.exitCode || 1,
+ stdout: error.stdout?.toString() || '',
+ stderr: error.stderr?.toString() || error.message || ''
+ };
+ }
+ })();
+
+ return await Promise.race([commandPromise, timeoutPromise]);
+ } catch (error) {
+ return {
+ exitCode: 1,
+ stdout: '',
+ stderr: error.message || 'Command execution failed'
+ };
+ }
+ }
+
+ /**
+ * Displays divine admonition and exits.
+ */
+ function errorWithDivineMessage() {
+ const message = (
+ "The Large Language Divinity 🌩️🤖🌩️ in the Celestial Data Center hath " +
+ "commanded that:\n" +
+ "* Thy code shalt pass all lints before thy commit.\n" +
+ " Run: hatch --env develop run linters\n" +
+ " Run: hatch --env develop run vulture\n" +
+ "* Thy code shalt pass all tests before thy commit.\n" +
+ " Run: hatch --env develop run testers\n\n" +
+ "(If you are in the middle of a large refactor, consider commenting " +
+ "out tests and adding a reminder note in the .auxiliary/notes " +
+ "directory.)"
+ );
+ throw new Error(message);
+ }
+
+ /**
+ * Checks if tokens represent a git commit command.
+ */
+ function isGitCommitCommand(tokens) {
+ if (tokens.length < GIT_COMMIT_MIN_TOKENS) {
+ return false;
+ }
+ return tokens[0] === 'git' && tokens[1] === 'commit';
+ }
+
+ /**
+ * Partitions command line into separate commands using shell splitters.
+ */
+ function partitionCommandLine(commandLine) {
+ // Use shlex.split for proper shell parsing (matches Python hook)
+ const tokens = split(commandLine);
+
+ // Now partition by shell splitters
+ const commands = [];
+ let commandTokens = [];
+
+ for (const token of tokens) {
+ if (SPLITTERS.has(token)) {
+ if (commandTokens.length > 0) {
+ commands.push(commandTokens);
+ commandTokens = [];
+ }
+ continue;
+ }
+ commandTokens.push(token);
+ }
+
+ if (commandTokens.length > 0) {
+ commands.push(commandTokens);
+ }
+
+ return commands;
+ }
+
+ /**
+ * Checks for git commit commands and validates linters/tests.
+ */
+ async function checkGitCommitCommand(tokens) {
+ if (!isGitCommitCommand(tokens)) return;
+
+ // Check if hatch command is available
+ if (!(await isCommandAvailable('hatch'))) {
+ return; // Early exit if hatch not available
+ }
+
+ // Check if develop Hatch environment exists
+ if (!(await isHatchEnvAvailable('develop'))) {
+ return; // Early exit if develop environment not available
+ }
+
+ // Run linters with 120 second timeout
+ try {
+ const result = await runCommandWithTimeout('hatch --env develop run linters', 120000);
+ if (result.exitCode !== 0) {
+ errorWithDivineMessage();
+ }
+ } catch {
+ errorWithDivineMessage();
+ }
+
+ // Run tests with 300 second timeout
+ try {
+ const result = await runCommandWithTimeout('hatch --env develop run testers', 300000);
+ if (result.exitCode !== 0) {
+ errorWithDivineMessage();
+ }
+ } catch {
+ errorWithDivineMessage();
+ }
+
+ // Run vulture with 120 second timeout
+ try {
+ const result = await runCommandWithTimeout('hatch --env develop run vulture', 120000);
+ if (result.exitCode !== 0) {
+ errorWithDivineMessage();
+ }
+ } catch {
+ errorWithDivineMessage();
+ }
+ }
+
+ return {
+ "tool.execute.before": async (input, output) => {
+ // Only run for bash tool
+ if (input.tool !== "bash") return;
+
+ // Extract command from input
+ const command = input.args?.command || '';
+ if (!command) return;
+
+ // Partition command line into separate commands
+ const commands = partitionCommandLine(command);
+
+ // Check each command for git commit
+ for (const commandTokens of commands) {
+ await checkGitCommitCommand(commandTokens);
+ }
+ }
+ };
+};
\ No newline at end of file
diff --git a/.auxiliary/configuration/coders/opencode/plugin/package.json b/.auxiliary/configuration/coders/opencode/plugin/package.json
new file mode 100644
index 0000000..6909e9d
--- /dev/null
+++ b/.auxiliary/configuration/coders/opencode/plugin/package.json
@@ -0,0 +1,13 @@
+{
+ "name": "opencode-plugins",
+ "version": "1.0.0",
+ "type": "module",
+ "dependencies": {
+ "@opencode-ai/plugin": "^1.0.134",
+ "shlex": "^2.1.2"
+ },
+ "devDependencies": {
+ "@types/node": "^22.0.0",
+ "typescript": "^5.0.0"
+ }
+}
diff --git a/.auxiliary/configuration/coders/opencode/plugin/post-edit-linter.js b/.auxiliary/configuration/coders/opencode/plugin/post-edit-linter.js
new file mode 100644
index 0000000..d659d99
--- /dev/null
+++ b/.auxiliary/configuration/coders/opencode/plugin/post-edit-linter.js
@@ -0,0 +1,130 @@
+/**
+ * Opencode plugin to run linters after file edits.
+ * Port of Claude Code hook: template/.auxiliary/configuration/coders/claude/scripts/post-edit-linter
+ */
+export const PostEditLinter = async ({ project, client, $, directory, worktree }) => {
+ /**
+ * Checks if a command is available in PATH.
+ */
+ async function isCommandAvailable(command) {
+ try {
+ const result = await $`which ${command}`.nothrow().quiet();
+ return result.exitCode === 0;
+ } catch {
+ return false;
+ }
+ }
+
+ /**
+ * Checks if a specific Hatch environment exists.
+ */
+ async function isHatchEnvAvailable(envName) {
+ try {
+ const result = await $`hatch env show`.nothrow().quiet();
+ if (result.exitCode !== 0) return false;
+ return result.stdout.toString().includes(envName);
+ } catch {
+ return false;
+ }
+ }
+
+ /**
+ * Truncates output to maximum number of lines with truncation notice.
+ */
+ function truncateOutput(output, linesMax = 50) {
+ const lines = output.split('\n');
+ if (lines.length <= linesMax) return output;
+ const linesToDisplay = lines.slice(0, linesMax);
+ const truncationsCount = lines.length - linesMax;
+ linesToDisplay.push(
+ `\n[OUTPUT TRUNCATED: ${truncationsCount} additional lines omitted. ` +
+ `Fix the issues above to see remaining diagnostics.]`
+ );
+ return linesToDisplay.join('\n');
+ }
+
+ /**
+ * Runs a command with timeout using Promise.race.
+ */
+ async function runCommandWithTimeout(command, timeoutMs) {
+ const timeoutPromise = new Promise((_, reject) => {
+ setTimeout(() => reject(new Error(`Command timed out after ${timeoutMs}ms`)), timeoutMs);
+ });
+
+ try {
+ const commandPromise = (async () => {
+ try {
+ // Use $ as tagged template function with shell execution
+ // Pass the entire command as a shell command
+ const result = await $`sh -c "${command}"`.nothrow().quiet();
+ return {
+ exitCode: result.exitCode,
+ stdout: result.stdout?.toString() || '',
+ stderr: result.stderr?.toString() || ''
+ };
+ } catch (error) {
+ return {
+ exitCode: error.exitCode || 1,
+ stdout: error.stdout?.toString() || '',
+ stderr: error.stderr?.toString() || error.message || ''
+ };
+ }
+ })();
+
+ return await Promise.race([commandPromise, timeoutPromise]);
+ } catch (error) {
+ return {
+ exitCode: 1,
+ stdout: '',
+ stderr: error.message || 'Command execution failed'
+ };
+ }
+ }
+
+ return {
+ "tool.execute.after": async (input, output) => {
+ // Only run for edit tool
+ if (input.tool !== "edit") return;
+
+ // Get file path from output (not input!)
+ const filePath = output?.metadata?.filediff?.file;
+ if (!filePath) {
+ // No file path in output, can't run linters
+ return;
+ }
+
+ // Check if hatch command is available
+ if (!(await isCommandAvailable('hatch'))) {
+ return; // Early exit if hatch not available
+ }
+
+ // Check if develop Hatch environment exists
+ if (!(await isHatchEnvAvailable('develop'))) {
+ return; // Early exit if develop environment not available
+ }
+
+ try {
+ // Run linters with 60 second timeout (matches Python script)
+ const result = await runCommandWithTimeout(
+ 'hatch --env develop run linters',
+ 60000
+ );
+
+ if (result.exitCode !== 0) {
+ // Combine stdout and stderr since linting output may go to stdout
+ const resultText = `${result.stdout}\n\n${result.stderr}`.trim();
+ const truncatedOutput = truncateOutput(resultText);
+
+ // Throw error to show linter failures
+ throw new Error(`Linters failed for ${filePath}:\n${truncatedOutput}`);
+ }
+ } catch (error) {
+ // Re-throw the error with proper message
+ if (error.message.includes('Command timed out')) {
+ throw new Error(`Linter execution timed out for ${filePath}: ${error.message}`);
+ }
+ throw error;
+ }
+ }
+ };
+};
\ No newline at end of file
diff --git a/.auxiliary/configuration/coders/opencode/plugin/python-environment-guard.js-disabled b/.auxiliary/configuration/coders/opencode/plugin/python-environment-guard.js-disabled
new file mode 100644
index 0000000..d27a89c
--- /dev/null
+++ b/.auxiliary/configuration/coders/opencode/plugin/python-environment-guard.js-disabled
@@ -0,0 +1,149 @@
+/**
+ * Opencode plugin to detect improper Python usage in Bash commands.
+ * Port of Claude Code hook: template/.auxiliary/configuration/coders/claude/scripts/pre-bash-python-check
+ */
+import { split } from 'shlex';
+
+export const PythonEnvironmentGuard = async ({ project, client, $, directory, worktree }) => {
+ const SPLITTERS = new Set([';', '&', '|', '&&', '||']);
+
+ /**
+ * Checks if token is a Python command.
+ */
+ function isPythonCommand(token) {
+ return (
+ token === 'python' ||
+ token === 'python3' ||
+ token.startsWith('python3.')
+ );
+ }
+
+ /**
+ * Checks if token is a Python development tool.
+ */
+ function isPythonTool(token) {
+ return ['coverage', 'pyright', 'pytest', 'ruff'].includes(token);
+ }
+
+ /**
+ * Checks if Python -c argument contains multiline code.
+ */
+ function checkPythonCArgument(tokens, pythonIndex) {
+ for (let j = pythonIndex + 1; j < tokens.length; j++) {
+ if (tokens[j] === '-c' && j + 1 < tokens.length) {
+ const cArgument = tokens[j + 1];
+ return cArgument.includes('\n');
+ }
+ if (!tokens[j].startsWith('-')) {
+ // Non-option argument, stop looking for -c
+ break;
+ }
+ }
+ return false;
+ }
+
+ /**
+ * Checks for direct python usage patterns.
+ */
+ function checkDirectPythonUsage(tokens) {
+ const emessage = (
+ "Warning: Direct Python usage detected in command.\n" +
+ "Consider using 'hatch run python' or " +
+ "'hatch --env develop run python' to ensure dependencies " +
+ "are available."
+ );
+
+ for (const token of tokens) {
+ if (token === 'hatch') return;
+ if (isPythonCommand(token)) {
+ throw new Error(emessage);
+ }
+ }
+ }
+
+ /**
+ * Checks for multi-line python -c scripts using shlex parsing.
+ */
+ function checkMultilinePythonC(tokens) {
+ const emessage = (
+ "Warning: Multi-line Python script detected in command.\n" +
+ "Consider writing the script to a file " +
+ "in the '.auxiliary/scribbles' directory " +
+ "instead of using 'python -c' with multi-line code."
+ );
+
+ for (let i = 0; i < tokens.length; i++) {
+ const token = tokens[i];
+ if (isPythonCommand(token) && checkPythonCArgument(tokens, i)) {
+ throw new Error(emessage);
+ }
+ }
+ }
+
+ /**
+ * Checks for direct usage of Python tools outside Hatch environment.
+ */
+ function checkDirectToolUsage(tokens) {
+ for (const token of tokens) {
+ if (token === 'hatch') return;
+ if (isPythonTool(token)) {
+ const emessage = (
+ `Warning: Direct Python tool usage detected in command.\n` +
+ `Use 'hatch --env develop run ${token}' instead to ensure ` +
+ `proper environment and configuration.`
+ );
+ throw new Error(emessage);
+ }
+ }
+ }
+
+ /**
+ * Partitions command line into separate commands using shell splitters.
+ */
+ function partitionCommandLine(commandLine) {
+ // Use shlex.split for proper shell parsing (matches Python hook)
+ const tokens = split(commandLine);
+
+ // Now partition by shell splitters
+ const commands = [];
+ let commandTokens = [];
+
+ for (const token of tokens) {
+ if (SPLITTERS.has(token)) {
+ if (commandTokens.length > 0) {
+ commands.push(commandTokens);
+ commandTokens = [];
+ }
+ continue;
+ }
+ commandTokens.push(token);
+ }
+
+ if (commandTokens.length > 0) {
+ commands.push(commandTokens);
+ }
+
+ return commands;
+ }
+
+ return {
+ "tool.execute.before": async (input, output) => {
+ // Only run for bash tool
+ if (input.tool !== "bash") return;
+
+ // Extract command from input
+ const command = input.args?.command || '';
+ if (!command) return;
+
+ // Partition command line into separate commands
+ const commands = partitionCommandLine(command);
+
+ // Check each command for Python usage issues
+ for (const commandTokens of commands) {
+ checkDirectPythonUsage(commandTokens);
+ checkMultilinePythonC(commandTokens);
+ checkDirectToolUsage(commandTokens);
+ }
+ }
+ };
+};
\ No newline at end of file
diff --git a/.auxiliary/configuration/coders/opencode/settings.jsonc b/.auxiliary/configuration/coders/opencode/settings.jsonc
new file mode 100644
index 0000000..6636bce
--- /dev/null
+++ b/.auxiliary/configuration/coders/opencode/settings.jsonc
@@ -0,0 +1,99 @@
+{
+ "$schema": "https://opencode.ai/config.json",
+
+ "agent": {
+ "build": {
+ "mode": "primary",
+ // "model": "zai-coding-plan/glm-4.6"
+ "model": "deepseek/deepseek-chat"
+ },
+ "plan": {
+ "mode": "primary",
+ // "model": "zai-coding-plan/glm-4.6"
+ "model": "deepseek/deepseek-chat"
+ }
+ },
+
+ "mcp": {
+ "pyright": {
+ "type": "local",
+ "command": ["mcp-language-server", "--lsp", "pyright-langserver", "--workspace", ".", "--", "--stdio"],
+ "enabled": true
+ },
+ "context7": {
+ "type": "local",
+ "command": ["npx", "-y", "@upstash/context7-mcp"],
+ "enabled": true
+ },
+ "librovore": {
+ "type": "local",
+ "command": ["uvx", "librovore", "serve"],
+ "enabled": true
+ }
+ },
+
+ "permission": {
+ "bash": {
+ "*": "ask",
+ "hatch run *": "allow",
+ "hatch --env develop run *": "allow",
+ "awk *": "allow",
+ "cat *": "allow",
+ "cut *": "allow",
+ "df *": "allow",
+ "du *": "allow",
+ "echo *": "allow",
+ "file *": "allow",
+ "find *": "allow",
+ "gh browse *": "allow",
+ "gh issue list *": "allow",
+ "gh issue view *": "allow",
+ "gh pr checks *": "allow",
+ "gh pr list *": "allow",
+ "gh pr view *": "allow",
+ "gh release list *": "allow",
+ "gh release view *": "allow",
+ "gh repo list *": "allow",
+ "gh repo view *": "allow",
+ "gh run list *": "allow",
+ "gh run view *": "allow",
+ "gh run watch *": "allow",
+ "gh status *": "allow",
+ "git add *": "allow",
+ "git branch *": "allow",
+ "git diff *": "allow",
+ "git log *": "allow",
+ "git show *": "allow",
+ "git status *": "allow",
+ "grep *": "allow",
+ "head *": "allow",
+ "ls *": "allow",
+ "ps *": "allow",
+ "pwd *": "allow",
+ "rg *": "allow",
+ "sed *": "allow",
+ "sort *": "allow",
+ "tail *": "allow",
+ "uniq *": "allow",
+ "wc *": "allow",
+ "which *": "allow"
+ },
+ "edit": "allow",
+ "webfetch": "ask"
+ },
+
+ "formatter": {
+ "ruff": {
+ "disabled": true
+ },
+ "prettier": {
+ "disabled": true
+ }
+ },
+
+ "lsp": {
+ "pyright": {
+ "disabled": true
+ }
+ }
+}
diff --git a/.auxiliary/configuration/coders/qwen/.gitignore b/.auxiliary/configuration/coders/qwen/.gitignore
new file mode 100644
index 0000000..ad917dd
--- /dev/null
+++ b/.auxiliary/configuration/coders/qwen/.gitignore
@@ -0,0 +1,4 @@
+# Generated content for Qwen Code
+# DO NOT commit generated agent and command files
+agents/
+commands/
diff --git a/.auxiliary/configuration/coders/qwen/settings.json b/.auxiliary/configuration/coders/qwen/settings.json
new file mode 100644
index 0000000..a4d1e74
--- /dev/null
+++ b/.auxiliary/configuration/coders/qwen/settings.json
@@ -0,0 +1,67 @@
+{
+ "mcpServers": {
+ "context7": {
+ "command": "npx",
+ "args": ["-y", "@upstash/context7-mcp"]
+ },
+ "librovore": {
+ "command": "uvx",
+ "args": ["librovore", "serve"]
+ },
+ "pyright": {
+ "command": "mcp-language-server",
+ "args": [
+ "--lsp", "pyright-langserver", "--workspace", ".",
+ "--", "--stdio"
+ ]
+ }
+ },
+
+ "coreTools": [
+ "run_shell_command",
+ "run_shell_command(awk)",
+ "run_shell_command(cat)",
+ "run_shell_command(cut)",
+ "run_shell_command(df)",
+ "run_shell_command(du)",
+ "run_shell_command(echo)",
+ "run_shell_command(file)",
+ "run_shell_command(find)",
+ "run_shell_command(gh)",
+ "run_shell_command(git)",
+ "run_shell_command(grep)",
+ "run_shell_command(hatch)",
+ "run_shell_command(head)",
+ "run_shell_command(ls)",
+ "run_shell_command(ps)",
+ "run_shell_command(pwd)",
+ "run_shell_command(rg)",
+ "run_shell_command(sed)",
+ "run_shell_command(sort)",
+ "run_shell_command(tail)",
+ "run_shell_command(uniq)",
+ "run_shell_command(wc)",
+ "run_shell_command(which)",
+ "read_file",
+ "write_file",
+ "edit",
+ "list_directory",
+ "glob",
+ "search_file_content",
+ "todo_write",
+ "web_fetch",
+ "web_search",
+ "mcp__context7__resolve-library-id",
+ "mcp__context7__get-library-docs",
+ "mcp__pyright__definition",
+ "mcp__pyright__diagnostics",
+ "mcp__pyright__edit_file",
+ "mcp__pyright__hover",
+ "mcp__pyright__references",
+ "mcp__pyright__rename_symbol"
+ ],
+
+ "approvalMode": "auto-edit",
+ "autoAccept": true,
+ "showLineNumbers": true
+}
diff --git a/.auxiliary/configuration/copier-answers--agents.yaml b/.auxiliary/configuration/copier-answers--agents.yaml
new file mode 100644
index 0000000..b62b367
--- /dev/null
+++ b/.auxiliary/configuration/copier-answers--agents.yaml
@@ -0,0 +1,17 @@
+# Changes here will be overwritten by Copier
+_commit: v1.0a7-32-gc9caedf
+_src_path: gh:emcd/agents-common
+coders:
+- claude
+- gemini
+- opencode
+instructions_sources:
+- files:
+ '*.rst':
+ strip_header_lines: 20
+ source: github:emcd/python-project-common@docs-1#documentation/common
+instructions_target: .auxiliary/instructions
+languages:
+- python
+project_name: python-detextive
+provide_instructions: true
diff --git a/.auxiliary/configuration/copier-answers.yaml b/.auxiliary/configuration/copier-answers.yaml
index 4fdddff..6152979 100644
--- a/.auxiliary/configuration/copier-answers.yaml
+++ b/.auxiliary/configuration/copier-answers.yaml
@@ -1,5 +1,5 @@
# Changes here will be overwritten by Copier
-_commit: v1.40
+_commit: v1.57.1
_src_path: gh:emcd/python-project-common
author_email: emcd@users.noreply.github.com
author_name: Eric McDonald
@@ -18,10 +18,12 @@ package_name: detextive
project_name: python-detextive
pypy_versions:
- '3.10'
+- '3.11'
python_version_min: '3.10'
python_versions:
- '3.10'
- '3.11'
- '3.12'
- '3.13'
+- '3.14'
year_of_origin: 2025
diff --git a/.auxiliary/configuration/gemini/settings.json b/.auxiliary/configuration/gemini/settings.json
deleted file mode 100644
index 9f48e88..0000000
--- a/.auxiliary/configuration/gemini/settings.json
+++ /dev/null
@@ -1,31 +0,0 @@
-{
- "mcpServers": {
- "context7": {
- "command": "npx",
- "args": [ "-y", "@upstash/context7-mcp" ]
- },
- "pyright": {
- "command": "mcp-language-server",
- "args": [
- "--lsp",
- "pyright-langserver",
- "--workspace",
- ".",
- "--",
- "--stdio"
- ]
- },
- "ruff": {
- "command": "mcp-language-server",
- "args": [
- "--lsp",
- "ruff",
- "--workspace",
- ".",
- "--",
- "server",
- "--preview"
- ]
- }
- }
-}
diff --git a/.auxiliary/configuration/hatch-constraints.pip b/.auxiliary/configuration/hatch-constraints.pip
new file mode 100644
index 0000000..c5dc974
--- /dev/null
+++ b/.auxiliary/configuration/hatch-constraints.pip
@@ -0,0 +1,2 @@
+# Pip constraints file for Hatch installation
+click<8.3.0 # https://github.com/pypa/hatch/issues/2050
diff --git a/.auxiliary/configuration/mcp-servers.json b/.auxiliary/configuration/mcp-servers.json
index 6ae002a..5cde68b 100644
--- a/.auxiliary/configuration/mcp-servers.json
+++ b/.auxiliary/configuration/mcp-servers.json
@@ -3,31 +3,17 @@
"pyright": {
"command": "mcp-language-server",
"args": [
- "--lsp",
- "pyright-langserver",
- "--workspace",
- ".",
- "--",
- "--stdio"
+ "--lsp", "pyright-langserver", "--workspace", ".",
+ "--", "--stdio"
]
},
- "ruff": {
- "command": "mcp-language-server",
- "args": [
- "--lsp",
- "ruff",
- "--workspace",
- ".",
- "--",
- "server",
- "--preview"
- ]
+ "context7": {
+ "command": "npx",
+ "args": [ "-y", "@upstash/context7-mcp" ]
},
- "text-editor": {
+ "librovore": {
"command": "uvx",
- "args": [
- "mcp-text-editor"
- ]
+ "args": [ "librovore", "serve" ]
}
}
}
diff --git a/.auxiliary/configuration/pre-commit.yaml b/.auxiliary/configuration/pre-commit.yaml
index b25f5b9..9d60d42 100644
--- a/.auxiliary/configuration/pre-commit.yaml
+++ b/.auxiliary/configuration/pre-commit.yaml
@@ -2,11 +2,12 @@
# See https://pre-commit.com/hooks.html for more hooks
default_install_hook_types: [ 'pre-commit', 'pre-push' ]
+exclude: ^\.auxiliary/pocs
repos:
- repo: https://github.com/pre-commit/pre-commit-hooks
- rev: v5.0.0
+ rev: v6.0.0
hooks:
- id: check-added-large-files
name: 'Check: Large Files'
@@ -40,7 +41,7 @@ repos:
name: 'Check: Debug Statements (Python)'
- repo: https://github.com/astral-sh/ruff-pre-commit
- rev: v0.12.1
+ rev: v0.14.3
hooks:
- id: ruff
name: 'Lint: Ruff'
@@ -49,6 +50,15 @@ repos:
- repo: local
hooks:
+ - id: hatch-vulture
+ name: 'Lint: Vulture'
+ stages: [ 'pre-commit' ]
+ fail_fast: true
+ language: system
+ always_run: true
+ pass_filenames: false
+ entry: 'hatch --env develop run vulture'
+
- id: hatch-pytest
name: 'Test Code Units (Python)'
stages: [ 'pre-commit' ] # push is covered below
diff --git a/.auxiliary/configuration/vulturefood.py b/.auxiliary/configuration/vulturefood.py
new file mode 100644
index 0000000..97e516c
--- /dev/null
+++ b/.auxiliary/configuration/vulturefood.py
@@ -0,0 +1,45 @@
+ComparisonResult # unused variable
+NominativeArguments # unused variable
+PositionalArguments # unused variable
+package_name # unused variable
+
+# --- BEGIN: Injected by Copier ---
+Omnierror # unused base exception class for derivation
+# --- END: Injected by Copier ---
+
+# Refactor 2.0 - public API functions not yet exposed in __init__.py
+detect_charset # public API function
+detect_mimetype # public API function
+infer_charset # public API function
+infer_mimetype_charset # public API function
+is_valid_text # public API function
+
+# Exception classes for public API
+TextualMimetypeInvalidity # exception class for public API
+
+# Core enums
+Error # variant
+
+# LineSeparators enum methods - public API
+detect_bytes # LineSeparators class method
+detect_text # LineSeparators class method
+normalize_universal # LineSeparators class method
+normalize # LineSeparators instance method
+nativize # LineSeparators instance method
+
+# Function parameters - used in signatures
+mimetype_default # function parameter
+
+# Validation profiles - public API constants
+PROFILE_PRINTER_SAFE # public validation profile
+PROFILE_TERMINAL_SAFE # public validation profile
+PROFILE_TERMINAL_SAFE_ANSI # public validation profile
+
+# Confidence system - planned for v2.0
+DetectionResult # confidence result dataclass
+confidence # DetectionResult field
+detect_charset_candidates # public API function for confidence-based detection
+detect_mimetype_candidates # public API function for confidence-based detection
+text_validate_confidence # Behaviors field for confidence thresholds
+trial_codecs # Behaviors field (renamed from charset_trial_codecs)
+trial_decode_confidence # Behaviors field for confidence thresholds
diff --git a/.auxiliary/data/towncrier/+binary-rejection.repair.rst b/.auxiliary/data/towncrier/+binary-rejection.repair.rst
new file mode 100644
index 0000000..188bd1b
--- /dev/null
+++ b/.auxiliary/data/towncrier/+binary-rejection.repair.rst
@@ -0,0 +1 @@
+Reject binary content with non-textual MIME types instead of attempting to decode, preventing false positives where binary data was incorrectly decoded as text.
\ No newline at end of file
diff --git a/.auxiliary/data/towncrier/+detection.enhance.rst b/.auxiliary/data/towncrier/+detection.enhance.rst
deleted file mode 100644
index edd1a9d..0000000
--- a/.auxiliary/data/towncrier/+detection.enhance.rst
+++ /dev/null
@@ -1,3 +0,0 @@
-Provide ``detect_charset``, ``detect_mimetype``,
-``detect_charset_and_mimetype``, ``is_textual_mimetype``, and
-``is_textual_content``.
diff --git a/.auxiliary/data/towncrier/+separators.enhance.rst b/.auxiliary/data/towncrier/+separators.enhance.rst
deleted file mode 100644
index a9ddfad..0000000
--- a/.auxiliary/data/towncrier/+separators.enhance.rst
+++ /dev/null
@@ -1,2 +0,0 @@
-Provide ``LineSeparators`` enum with detection, normalization, and nativization
-methods.
diff --git a/.auxiliary/data/towncrier/+utf8-detection.repair.rst b/.auxiliary/data/towncrier/+utf8-detection.repair.rst
new file mode 100644
index 0000000..471491f
--- /dev/null
+++ b/.auxiliary/data/towncrier/+utf8-detection.repair.rst
@@ -0,0 +1 @@
+Fix UTF-8 content incorrectly decoded when charset detector misidentifies encoding, causing mojibake with non-ASCII characters and emoji.
\ No newline at end of file
diff --git a/.auxiliary/evaluations/compare-charset-detectors.py b/.auxiliary/evaluations/compare-charset-detectors.py
new file mode 100644
index 0000000..bda3918
--- /dev/null
+++ b/.auxiliary/evaluations/compare-charset-detectors.py
@@ -0,0 +1,256 @@
+#!/usr/bin/env python3
+# vim: set filetype=python fileencoding=utf-8:
+# -*- coding: utf-8 -*-
+# ruff: noqa
+
+"""
+Compare chardet vs charset-normalizer detection behavior.
+
+Evaluates both detectors on various byte patterns to determine:
+1. Which normalizes to more standard/practical encodings
+2. Detection confidence levels
+3. Handling of edge cases (binary, ambiguous, empty)
+4. Performance characteristics
+"""
+
+import time
+from typing import Any
+
+try:
+ import chardet
+except ImportError:
+ chardet = None
+
+try:
+ import charset_normalizer
+except ImportError:
+ charset_normalizer = None
+
+
+# Test patterns covering various scenarios
+TEST_PATTERNS = {
+ # UTF-8 variants
+ 'utf8_basic': b'Hello, world!',
+ 'utf8_accents': b'Caf\xc3\xa9 \xc3\xa0 Paris',
+ 'utf8_emoji': b'Hello \xf0\x9f\x91\x8b world \xf0\x9f\x8c\x8d',
+ 'utf8_cjk': b'\xe4\xb8\xad\xe6\x96\x87', # Chinese characters
+ 'utf8_arabic': b'\xd8\xa7\xd9\x84\xd8\xb9\xd8\xb1\xd8\xa8\xd9\x8a\xd8\xa9',
+ 'utf8_mixed': b'Mix: \xc3\xa9 \xe2\x98\x85 \xf0\x9f\x8e\x89',
+
+ # UTF-16 with BOM
+ 'utf16_le_bom': b'\xff\xfeH\x00e\x00l\x00l\x00o\x00',
+ 'utf16_be_bom': b'\xfe\xff\x00H\x00e\x00l\x00l\x00o',
+
+ # ISO-8859-1 / Latin-1
+ 'latin1': b'Caf\xe9 \xe0 Paris', # Valid Latin-1, invalid UTF-8
+ 'latin1_extended': b'\xc0\xc1\xc2\xc3\xc4\xc5\xc6\xc7\xc8\xc9',
+
+ # Windows-1252
+ 'cp1252': b'Smart quotes: \x93Hello\x94 \x96 Em dash',
+
+ # ASCII
+ 'ascii': b'Plain ASCII text without special characters',
+ 'ascii_with_newlines': b'Line 1\nLine 2\r\nLine 3\rLine 4',
+
+ # ISO-8859-2 (Central European)
+ 'latin2': b'\xb1\xb6\xbe', # Polish characters
+
+ # KOI8-R (Russian)
+ 'koi8r': b'\xf0\xd2\xc9\xd7\xc5\xd4', # Cyrillic
+
+ # Shift-JIS (Japanese)
+ 'shiftjis': b'\x82\xb1\x82\xf1\x82\xc9\x82\xbf\x82\xcd',
+
+ # Edge cases
+ 'empty': b'',
+ 'single_byte': b'A',
+ 'null_bytes': b'\x00\x00\x00\x00',
+ 'high_bytes': b'\xff\xfe\xfd\xfc\xfb',
+
+ # Binary-like patterns
+ 'binary_png': b'\x89PNG\r\n\x1a\n',
+ 'binary_pdf': b'%PDF-1.4',
+ 'binary_zip': b'PK\x03\x04',
+ 'binary_random': bytes(range(0, 256, 17)), # 0, 17, 34, ...
+
+ # Ambiguous cases (valid in multiple encodings)
+ 'ambiguous_simple': b'test', # ASCII, UTF-8, Latin-1, etc.
+ 'ambiguous_accents': b'\xe9\xe8\xe0', # Valid Latin-1 and Windows-1252
+}
+
+
+def detect_with_chardet(content: bytes) -> dict[str, Any]:
+ """Run chardet detection."""
+ if chardet is None:
+ return {'error': 'chardet not installed'}
+
+ start = time.perf_counter()
+ result = chardet.detect(content)
+ elapsed = time.perf_counter() - start
+
+ return {
+ 'encoding': result.get('encoding'),
+ 'confidence': result.get('confidence'),
+ 'language': result.get('language'),
+ 'time_ms': elapsed * 1000,
+ }
+
+
+def detect_with_charset_normalizer(content: bytes) -> dict[str, Any]:
+ """Run charset-normalizer detection."""
+ if charset_normalizer is None:
+ return {'error': 'charset-normalizer not installed'}
+
+ start = time.perf_counter()
+ results = charset_normalizer.from_bytes(content)
+ best = results.best()
+ elapsed = time.perf_counter() - start
+
+ if best is None:
+ return {
+ 'encoding': None,
+ 'confidence': 0.0,
+ 'time_ms': elapsed * 1000,
+ }
+
+ return {
+ 'encoding': best.encoding,
+ 'confidence': best.coherence, # 0.0-1.0 coherence score
+ 'language': getattr(best, 'language', None),
+ 'time_ms': elapsed * 1000,
+ 'coherence': best.coherence,
+ }
+
+
+def format_result(name: str, content: bytes, chardet_result: dict,
+ normalizer_result: dict) -> str:
+ """Format comparison results for display."""
+ lines = []
+ lines.append(f"\n{'=' * 70}")
+ lines.append(f"Pattern: {name}")
+ lines.append(f"Content: {content[:50]!r}" +
+ ('...' if len(content) > 50 else ''))
+ lines.append(f"Length: {len(content)} bytes")
+ lines.append('-' * 70)
+
+ # chardet results
+ lines.append("chardet:")
+ if 'error' in chardet_result:
+ lines.append(f" ERROR: {chardet_result['error']}")
+ else:
+ lines.append(f" Encoding: {chardet_result['encoding']}")
+ lines.append(f" Confidence: {chardet_result['confidence']:.2f}")
+ if chardet_result.get('language'):
+ lines.append(f" Language: {chardet_result['language']}")
+ lines.append(f" Time: {chardet_result['time_ms']:.3f} ms")
+
+ lines.append("")
+
+ # charset-normalizer results
+ lines.append("charset-normalizer:")
+ if 'error' in normalizer_result:
+ lines.append(f" ERROR: {normalizer_result['error']}")
+ else:
+ lines.append(f" Encoding: {normalizer_result['encoding']}")
+ lines.append(f" Confidence: {normalizer_result['confidence']:.2f}")
+ if normalizer_result.get('language'):
+ lines.append(f" Language: {normalizer_result['language']}")
+ if normalizer_result.get('coherence') is not None:
+ lines.append(f" Coherence: {normalizer_result['coherence']:.2f}")
+ lines.append(f" Time: {normalizer_result['time_ms']:.3f} ms")
+
+ # Comparison
+ lines.append('-' * 70)
+ if ('error' not in chardet_result and 'error' not in normalizer_result):
+ enc1 = chardet_result['encoding']
+ enc2 = normalizer_result['encoding']
+ if enc1 and enc2:
+ enc1_norm = enc1.lower().replace('-', '').replace('_', '')
+ enc2_norm = enc2.lower().replace('-', '').replace('_', '')
+ if enc1_norm == enc2_norm:
+ lines.append("✓ MATCH: Both detected same encoding")
+ else:
+ lines.append(f"✗ DIFFER: {enc1} vs {enc2}")
+
+ # Try to decode with each to see which works better
+ try:
+ text1 = content.decode(enc1)
+ lines.append(f" chardet decode: OK ({len(text1)} chars)")
+ except Exception as e:
+ lines.append(f" chardet decode: FAIL ({type(e).__name__})")
+
+ try:
+ text2 = content.decode(enc2)
+ lines.append(f" normalizer decode: OK ({len(text2)} chars)")
+ except Exception as e:
+ lines.append(f" normalizer decode: FAIL ({type(e).__name__})")
+ elif enc1 and not enc2:
+ lines.append("chardet detected, normalizer returned None")
+ elif enc2 and not enc1:
+ lines.append("normalizer detected, chardet returned None")
+ else:
+ lines.append("Both returned None")
+
+ return '\n'.join(lines)
+
+
+def main():
+ """Run comparison on all test patterns."""
+ print("=" * 70)
+ print("Charset Detector Comparison: chardet vs charset-normalizer")
+ print("=" * 70)
+
+ if chardet is None:
+ print("\n⚠ WARNING: chardet is not installed")
+ else:
+ print(f"\nchardet version: {getattr(chardet, '__version__', 'unknown')}")
+
+ if charset_normalizer is None:
+ print("⚠ WARNING: charset-normalizer is not installed")
+ else:
+ print(f"charset-normalizer version: "
+ f"{getattr(charset_normalizer, '__version__', 'unknown')}")
+
+ # Summary statistics
+ matches = 0
+ differs = 0
+ chardet_faster = 0
+ normalizer_faster = 0
+
+ for name, content in TEST_PATTERNS.items():
+ chardet_result = detect_with_chardet(content)
+ normalizer_result = detect_with_charset_normalizer(content)
+
+ print(format_result(name, content, chardet_result, normalizer_result))
+
+ # Track statistics
+ if ('error' not in chardet_result and
+ 'error' not in normalizer_result and
+ chardet_result['encoding'] and
+ normalizer_result['encoding']):
+ enc1 = chardet_result['encoding'].lower().replace('-', '').replace('_', '')
+ enc2 = normalizer_result['encoding'].lower().replace('-', '').replace('_', '')
+ if enc1 == enc2:
+ matches += 1
+ else:
+ differs += 1
+
+ if chardet_result['time_ms'] < normalizer_result['time_ms']:
+ chardet_faster += 1
+ else:
+ normalizer_faster += 1
+
+ # Print summary
+ print("\n" + "=" * 70)
+ print("SUMMARY")
+ print("=" * 70)
+ print(f"Total patterns tested: {len(TEST_PATTERNS)}")
+ print(f"Detections match: {matches}")
+ print(f"Detections differ: {differs}")
+ print(f"chardet faster: {chardet_faster}")
+ print(f"normalizer faster: {normalizer_faster}")
+ print("=" * 70)
+
+
+if __name__ == '__main__':
+ main()
diff --git a/.auxiliary/evaluations/test-decode-accuracy.py b/.auxiliary/evaluations/test-decode-accuracy.py
new file mode 100644
index 0000000..b6f125e
--- /dev/null
+++ b/.auxiliary/evaluations/test-decode-accuracy.py
@@ -0,0 +1,282 @@
+#!/usr/bin/env python3
+# vim: set filetype=python fileencoding=utf-8:
+# -*- coding: utf-8 -*-
+# ruff: noqa
+
+"""
+Test decode accuracy: which detector produces better text for decoding?
+
+Creates test content in known encodings, then tests whether each detector
+correctly identifies the encoding and produces the expected decoded text.
+"""
+
+try:
+ import chardet
+except ImportError:
+ chardet = None
+
+try:
+ import charset_normalizer
+except ImportError:
+ charset_normalizer = None
+
+
+# Test cases: (text, encoding, description)
+TEST_CASES = [
+ # UTF-8 cases
+ ('Hello, world!', 'utf-8', 'Simple ASCII-compatible UTF-8'),
+ ('Café à Paris', 'utf-8', 'UTF-8 with accents'),
+ ('Hello 👋 world 🌍', 'utf-8', 'UTF-8 with emoji'),
+ ('中文测试', 'utf-8', 'UTF-8 Chinese'),
+ ('Привет мир', 'utf-8', 'UTF-8 Cyrillic'),
+ ('مرحبا', 'utf-8', 'UTF-8 Arabic'),
+ ('こんにちは世界', 'utf-8', 'UTF-8 Japanese'),
+
+ # Latin-1 / ISO-8859-1
+ ('Café à Paris', 'iso-8859-1', 'Latin-1 French'),
+ ('Mañana español', 'iso-8859-1', 'Latin-1 Spanish'),
+ ('Ñoño', 'iso-8859-1', 'Latin-1 with ñ'),
+
+ # Windows-1252
+ ('It\u2019s a \u201csmart\u201d test', 'windows-1252', 'Win1252 smart quotes'),
+ ('Price: \u20ac100', 'windows-1252', 'Win1252 Euro sign'),
+ ('Em\u2014dash test', 'windows-1252', 'Win1252 em dash'),
+
+ # ISO-8859-2 (Central European)
+ ('Zażółć gęślą jaźń', 'iso-8859-2', 'Polish text'),
+ ('Příliš žluťoučký', 'iso-8859-2', 'Czech text'),
+
+ # Multiple lines / structured text
+ ('Line 1: Café\nLine 2: naïve\nLine 3: élève', 'utf-8',
+ 'Multi-line UTF-8'),
+ ('# Comment\n\nCafé notes\n\nMore text.', 'utf-8',
+ 'UTF-8 with structure'),
+
+ # Realistic content
+ ('<p>Café</p>', 'utf-8', 'HTML with UTF-8'),
+ ('{"name": "Café", "city": "Paris"}', 'utf-8', 'JSON with UTF-8'),
+ ('name,city\n"Café","Paris"\n', 'utf-8', 'CSV with UTF-8'),
+]
+
+
+def test_detection(original_text: str, encoding: str,
+ description: str) -> dict:
+ """Test detection and decoding for a known text/encoding pair."""
+ # Encode to bytes
+ try:
+ content = original_text.encode(encoding)
+ except (UnicodeEncodeError, LookupError) as e:
+ return {
+ 'error': f'Failed to encode: {e}',
+ 'description': description,
+ }
+
+ result = {
+ 'description': description,
+ 'original_text': original_text,
+ 'true_encoding': encoding,
+ 'content_length': len(content),
+ }
+
+ # Test chardet
+ if chardet:
+ detection = chardet.detect(content)
+ detected_encoding = detection.get('encoding')
+ confidence = detection.get('confidence')
+
+ result['chardet'] = {
+ 'detected': detected_encoding,
+ 'confidence': confidence,
+ }
+
+ if detected_encoding:
+ try:
+ decoded_text = content.decode(detected_encoding)
+ result['chardet']['decoded_text'] = decoded_text
+ result['chardet']['text_matches'] = (decoded_text == original_text)
+ result['chardet']['text_length'] = len(decoded_text)
+ except (UnicodeDecodeError, LookupError) as e:
+ result['chardet']['decode_error'] = str(e)
+ else:
+ result['chardet']['decoded_text'] = None
+ else:
+ result['chardet'] = {'error': 'not installed'}
+
+ # Test charset-normalizer
+ if charset_normalizer:
+ results = charset_normalizer.from_bytes(content)
+ best = results.best()
+
+ if best:
+ detected_encoding = best.encoding
+ confidence = best.coherence # 0.0-1.0 coherence score
+
+ result['normalizer'] = {
+ 'detected': detected_encoding,
+ 'confidence': confidence,
+ }
+
+ try:
+ decoded_text = content.decode(detected_encoding)
+ result['normalizer']['decoded_text'] = decoded_text
+ result['normalizer']['text_matches'] = (decoded_text == original_text)
+ result['normalizer']['text_length'] = len(decoded_text)
+ except (UnicodeDecodeError, LookupError) as e:
+ result['normalizer']['decode_error'] = str(e)
+ else:
+ result['normalizer'] = {
+ 'detected': None,
+ 'confidence': 0.0,
+ 'decoded_text': None,
+ }
+ else:
+ result['normalizer'] = {'error': 'not installed'}
+
+ return result
+
+
+def normalize_encoding_name(encoding: str) -> str:
+ """Normalize encoding name for comparison."""
+ return encoding.lower().replace('-', '').replace('_', '')
+
+
+def main():
+ """Run decode accuracy tests."""
+ print("=" * 70)
+ print("Decode Accuracy Test: chardet vs charset-normalizer")
+ print("=" * 70)
+ print("\nTests whether each detector correctly identifies encodings")
+ print("and produces the expected decoded text.\n")
+
+ if chardet is None:
+ print("⚠ WARNING: chardet is not installed\n")
+ if charset_normalizer is None:
+ print("⚠ WARNING: charset-normalizer is not installed\n")
+
+ results = []
+ for text, encoding, description in TEST_CASES:
+ result = test_detection(text, encoding, description)
+ results.append(result)
+
+ # Print detailed results
+ for i, result in enumerate(results, 1):
+ print(f"\n{'=' * 70}")
+ print(f"Test {i}: {result['description']}")
+ print(f"True encoding: {result['true_encoding']}")
+ print(f"Original text: {result['original_text']!r}")
+
+ if 'error' in result:
+ print(f"ERROR: {result['error']}")
+ continue
+
+ print(f"Content length: {result['content_length']} bytes")
+ print('-' * 70)
+
+ # chardet results
+ if 'error' not in result['chardet']:
+ cd = result['chardet']
+ print(f"chardet:")
+ print(f" Detected: {cd['detected']}")
+ print(f" Confidence: {cd['confidence']:.2f}")
+
+ if 'decode_error' in cd:
+ print(f" Decode: FAILED - {cd['decode_error']}")
+ elif cd['decoded_text'] is None:
+ print(f" Decode: No encoding detected")
+ else:
+ match_str = "✓ MATCH" if cd['text_matches'] else "✗ DIFFER"
+ print(f" Decode: {match_str}")
+ print(f" Result: {cd['decoded_text']!r}")
+ if not cd['text_matches']:
+ print(f" Length: {cd['text_length']} chars "
+ f"(expected {len(result['original_text'])})")
+
+ # charset-normalizer results
+ if 'error' not in result['normalizer']:
+ cn = result['normalizer']
+ print(f"\ncharset-normalizer:")
+ print(f" Detected: {cn['detected']}")
+ print(f" Confidence: {cn['confidence']:.2f}")
+
+ if 'decode_error' in cn:
+ print(f" Decode: FAILED - {cn['decode_error']}")
+ elif cn['decoded_text'] is None:
+ print(f" Decode: No encoding detected")
+ else:
+ match_str = "✓ MATCH" if cn['text_matches'] else "✗ DIFFER"
+ print(f" Decode: {match_str}")
+ print(f" Result: {cn['decoded_text']!r}")
+ if not cn['text_matches']:
+ print(f" Length: {cn['text_length']} chars "
+ f"(expected {len(result['original_text'])})")
+
+ # Comparison
+ if ('error' not in result['chardet'] and
+ 'error' not in result['normalizer']):
+ print('-' * 70)
+
+ cd_match = result['chardet'].get('text_matches', False)
+ cn_match = result['normalizer'].get('text_matches', False)
+
+ if cd_match and cn_match:
+ print("✓ Both produced correct text")
+ elif cn_match and not cd_match:
+ print("✓ BETTER: normalizer correct, chardet wrong")
+ elif cd_match and not cn_match:
+ print("✗ WORSE: chardet correct, normalizer wrong")
+ else:
+ print("✗ Both produced incorrect text")
+
+ # Summary statistics
+ print("\n" + "=" * 70)
+ print("SUMMARY")
+ print("=" * 70)
+
+ if chardet and charset_normalizer:
+ total = len([r for r in results if 'error' not in r])
+
+ cd_correct = sum(1 for r in results
+ if 'error' not in r
+ and 'error' not in r['chardet']
+ and r['chardet'].get('text_matches', False))
+ cd_failed = sum(1 for r in results
+ if 'error' not in r
+ and 'error' not in r['chardet']
+ and 'decode_error' in r['chardet'])
+
+ cn_correct = sum(1 for r in results
+ if 'error' not in r
+ and 'error' not in r['normalizer']
+ and r['normalizer'].get('text_matches', False))
+ cn_failed = sum(1 for r in results
+ if 'error' not in r
+ and 'error' not in r['normalizer']
+ and 'decode_error' in r['normalizer'])
+
+ print(f"Total valid tests: {total}")
+ print()
+ print(f"chardet:")
+ print(f" Correct: {cd_correct}/{total} "
+ f"({cd_correct/total*100:.1f}%)")
+ print(f" Decode failed: {cd_failed}")
+ print()
+ print(f"charset-normalizer:")
+ print(f" Correct: {cn_correct}/{total} "
+ f"({cn_correct/total*100:.1f}%)")
+ print(f" Decode failed: {cn_failed}")
+ print()
+
+ if cn_correct > cd_correct:
+ diff = cn_correct - cd_correct
+ print(f"✓ charset-normalizer is more accurate (+{diff} correct)")
+ elif cd_correct > cn_correct:
+ diff = cd_correct - cn_correct
+ print(f"✗ chardet is more accurate (+{diff} correct)")
+ else:
+ print("= Both have equal accuracy")
+
+ print("=" * 70)
+
+
+if __name__ == '__main__':
+ main()
diff --git a/.auxiliary/evaluations/test-normalization-behavior.py b/.auxiliary/evaluations/test-normalization-behavior.py
new file mode 100644
index 0000000..44cabfd
--- /dev/null
+++ b/.auxiliary/evaluations/test-normalization-behavior.py
@@ -0,0 +1,249 @@
+#!/usr/bin/env python3
+# vim: set filetype=python fileencoding=utf-8:
+# -*- coding: utf-8 -*-
+# ruff: noqa
+
+"""
+Test charset normalization behavior.
+
+Specifically evaluates whether charset-normalizer prefers standard/practical
+encodings over obscure ones, compared to chardet.
+
+This addresses the concern: does charset-normalizer actually "normalize" to
+useful encodings like UTF-8, or does it detect rare encodings like MacRoman?
+"""
+
+try:
+ import chardet
+except ImportError:
+ chardet = None
+
+try:
+ import charset_normalizer
+except ImportError:
+ charset_normalizer = None
+
+
+# Standard/preferred encodings (in priority order)
+STANDARD_ENCODINGS = [
+ 'utf-8',
+ 'ascii',
+ 'iso-8859-1', # Latin-1
+ 'windows-1252', # Most common Windows encoding
+ 'iso-8859-2', # Central European
+ 'iso-8859-15', # Latin-9 (Euro sign)
+]
+
+# Obscure/problematic encodings that should be avoided
+OBSCURE_ENCODINGS = [
+ 'MacRoman',
+ 'MacCyrillic',
+ 'TIS-620', # Thai
+ 'IBM855',
+ 'IBM866',
+]
+
+
+def normalize_encoding_name(encoding: str | None) -> str:
+ """Normalize encoding name for comparison."""
+ if not encoding:
+ return ''
+ return encoding.lower().replace('-', '').replace('_', '')
+
+
+def classify_encoding(encoding: str | None) -> str:
+ """Classify encoding as standard, obscure, or unknown."""
+ if not encoding:
+ return 'none'
+
+ normalized = normalize_encoding_name(encoding)
+
+ # Check standard encodings
+ for std in STANDARD_ENCODINGS:
+ if normalize_encoding_name(std) == normalized:
+ return f'standard:{std}'
+
+ # Check obscure encodings
+ for obs in OBSCURE_ENCODINGS:
+ if normalize_encoding_name(obs) == normalized:
+ return f'obscure:{obs}'
+
+ return f'other:{encoding}'
+
+
+def test_pattern(name: str, content: bytes) -> dict:
+ """Test a pattern with both detectors and classify results."""
+ result = {
+ 'name': name,
+ 'content': content[:50],
+ 'length': len(content),
+ }
+
+ # chardet
+ if chardet:
+ detection = chardet.detect(content)
+ result['chardet'] = {
+ 'encoding': detection.get('encoding'),
+ 'confidence': detection.get('confidence'),
+ 'classification': classify_encoding(detection.get('encoding')),
+ }
+ else:
+ result['chardet'] = {'encoding': None, 'error': 'not installed'}
+
+ # charset-normalizer
+ if charset_normalizer:
+ results = charset_normalizer.from_bytes(content)
+ best = results.best()
+ if best:
+ result['normalizer'] = {
+ 'encoding': best.encoding,
+ 'confidence': best.coherence, # 0.0-1.0 coherence score
+ 'classification': classify_encoding(best.encoding),
+ }
+ else:
+ result['normalizer'] = {
+ 'encoding': None,
+ 'confidence': 0.0,
+ 'classification': 'none',
+ }
+ else:
+ result['normalizer'] = {'encoding': None, 'error': 'not installed'}
+
+ return result
+
+
+# Test cases specifically designed to trigger different detections
+NORMALIZATION_TESTS = {
+ # UTF-8 content that might be misdetected
+ 'utf8_short': b'Caf\xc3\xa9',
+ 'utf8_medium': b'Caf\xc3\xa9 \xc3\xa0 Paris avec \xc3\xa9l\xc3\xa9gance',
+ 'utf8_long': (b'The quick brown fox jumps over the lazy dog. '
+ b'Caf\xc3\xa9, na\xc3\xafve, \xc3\xa9l\xc3\xa8ve. ' * 3),
+
+ # ASCII-safe content (should stay ASCII, not escalate to UTF-8)
+ 'pure_ascii': b'Hello world, this is plain ASCII text.',
+ 'ascii_multiline': b'Line 1\nLine 2\nLine 3\nPlain text.',
+
+ # Latin-1 vs UTF-8 ambiguity
+ 'latin1_french': b'Caf\xe9 \xe0 Paris', # Valid Latin-1, invalid UTF-8
+ 'latin1_spanish': b'Ma\xf1ana espa\xf1ol',
+
+ # Windows-1252 specific characters
+ 'cp1252_quotes': b'It\x92s a \x93smart\x94 test',
+ 'cp1252_euro': b'Price: \x80100', # Euro sign in Windows-1252
+
+ # Content that could be MacRoman (test if normalizer avoids it)
+ 'potential_macroman': b'Caf\x8e', # é in MacRoman
+
+ # ISO-8859-2 (Central European)
+ 'latin2_polish': b'\xb3\xf3d\xbc', # Polish: łódź
+
+ # Mixed valid encodings (which is preferred?)
+ 'multi_valid_1': b'test', # Valid in many encodings
+ 'multi_valid_2': b'\xe9\xe8\xe0\xe7', # Valid Latin-1/Win1252
+
+ # Edge case: could be UTF-8 or 8-bit
+ 'ambiguous_high': b'\xc3\xa9\xc3\xa8', # Valid UTF-8 or Latin-1
+
+ # Realistic web content (should prefer UTF-8)
+ 'web_html': b'<p>Caf\xc3\xa9</p>',
+ 'web_json': b'{"name": "Caf\xc3\xa9", "city": "Paris"}',
+
+ # Realistic file content
+ 'text_file': b'# Comment\n\nCaf\xc3\xa9 notes\n\nMore text here.\n',
+}
+
+
+def main():
+ """Run normalization behavior tests."""
+ print("=" * 70)
+ print("Charset Normalization Behavior Test")
+ print("=" * 70)
+
+ if chardet is None:
+ print("\n⚠ WARNING: chardet is not installed\n")
+ if charset_normalizer is None:
+ print("⚠ WARNING: charset-normalizer is not installed\n")
+
+ results = []
+ for name, content in NORMALIZATION_TESTS.items():
+ result = test_pattern(name, content)
+ results.append(result)
+
+ # Print detailed results
+ for result in results:
+ print(f"\n{'=' * 70}")
+ print(f"Test: {result['name']}")
+ print(f"Content: {result['content']!r}" +
+ ('...' if result['length'] > 50 else ''))
+ print(f"Length: {result['length']} bytes")
+ print('-' * 70)
+
+ if 'error' not in result['chardet']:
+ cd = result['chardet']
+ print(f"chardet: {str(cd['encoding']):20} "
+ f"[{cd['confidence']:.2f}] {cd['classification']}")
+
+ if 'error' not in result['normalizer']:
+ cn = result['normalizer']
+ print(f"charset-normalizer: {str(cn['encoding']):20} "
+ f"[{cn['confidence']:.2f}] {cn['classification']}")
+
+ # Analysis
+ if ('error' not in result['chardet'] and
+ 'error' not in result['normalizer']):
+ cd_class = result['chardet']['classification']
+ cn_class = result['normalizer']['classification']
+
+ if cd_class.startswith('obscure') and cn_class.startswith('standard'):
+ print("\n✓ BETTER: normalizer chose standard over obscure")
+ elif cd_class.startswith('standard') and cn_class.startswith('obscure'):
+ print("\n✗ WORSE: normalizer chose obscure over standard")
+ elif cd_class == cn_class:
+ print("\n= SAME: Both chose same classification")
+ else:
+ print(f"\n? DIFFERENT: {cd_class} vs {cn_class}")
+
+ # Summary statistics
+ print("\n" + "=" * 70)
+ print("SUMMARY")
+ print("=" * 70)
+
+ if chardet and charset_normalizer:
+ chardet_standard = sum(1 for r in results
+ if r['chardet']['classification'].startswith('standard'))
+ chardet_obscure = sum(1 for r in results
+ if r['chardet']['classification'].startswith('obscure'))
+
+ norm_standard = sum(1 for r in results
+ if r['normalizer']['classification'].startswith('standard'))
+ norm_obscure = sum(1 for r in results
+ if r['normalizer']['classification'].startswith('obscure'))
+
+ print(f"Total tests: {len(results)}")
+ print()
+ print(f"chardet - Standard encodings: {chardet_standard}")
+ print(f"chardet - Obscure encodings: {chardet_obscure}")
+ print()
+ print(f"normalizer - Standard: {norm_standard}")
+ print(f"normalizer - Obscure: {norm_obscure}")
+ print()
+
+ if norm_standard > chardet_standard:
+ print("✓ charset-normalizer prefers standard encodings more")
+ elif norm_standard < chardet_standard:
+ print("✗ chardet prefers standard encodings more")
+ else:
+ print("= Both prefer standard encodings equally")
+
+ if norm_obscure < chardet_obscure:
+ print("✓ charset-normalizer avoids obscure encodings more")
+ elif norm_obscure > chardet_obscure:
+ print("✗ charset-normalizer uses obscure encodings more")
+ else:
+ print("= Both use obscure encodings equally")
+
+ print("=" * 70)
+
+
+if __name__ == '__main__':
+ main()
diff --git a/.auxiliary/notes/.gitkeep b/.auxiliary/notes/.gitkeep
new file mode 100644
index 0000000..e69de29
diff --git a/.auxiliary/notes/charset-detector-evaluation-results.md b/.auxiliary/notes/charset-detector-evaluation-results.md
new file mode 100644
index 0000000..679d3a1
--- /dev/null
+++ b/.auxiliary/notes/charset-detector-evaluation-results.md
@@ -0,0 +1,191 @@
+# Charset Detector Evaluation Results
+
+**Date**: 2025-11-12
+**Detectors tested**: chardet 5.2.0 vs charset-normalizer 3.4.4
+
+## Executive Summary
+
+Both detectors have strengths and weaknesses:
+- **charset-normalizer** is better at UTF-8 detection (fewer false positives)
+- **chardet** is better at 8-bit encodings (Latin-1, Windows-1252)
+- **Overall accuracy**: Tied at 65% on ground-truth tests
+- **Performance**: chardet is generally faster (19 vs 4 wins in speed tests)
+
+**Recommendation**: Consider using **both** detectors with fallback logic:
+1. Try charset-normalizer first for UTF-8 preference
+2. Fall back to chardet if low confidence or decode fails
+3. Apply `is_permissive_charset()` filtering to both
+
+## Detailed Findings
+
+### 1. UTF-8 Detection Quality
+
+**charset-normalizer wins decisively:**
+
+✓ **Better UTF-8 recognition**:
+- Correctly detected UTF-8 with emoji (chardet→Windows-1254 ✗)
+- Correctly detected UTF-8 in HTML (chardet→ISO-8859-9 ✗)
+- Correctly detected UTF-8 in JSON (chardet→ISO-8859-9 ✗)
+- Correctly detected UTF-8 in CSV (chardet→ISO-8859-9 ✗)
+- Correctly detected UTF-8 with structure (chardet→MacRoman ✗)
+
+✓ **Avoided obscure encodings**:
+- 0 obscure encoding detections vs chardet's 1 (MacRoman)
+
+✗ **But struggles with short UTF-8**:
+- Very short UTF-8 content sometimes misdetected as UTF-16-BE
+
+### 2. 8-bit Encoding Detection
+
+**chardet wins clearly:**
+
+✓ **Better 8-bit accuracy**:
+- Correctly detected Latin-1 French (normalizer→UTF-16-BE ✗)
+- Correctly detected Latin-1 Spanish (normalizer→CP1250 ✗)
+- Correctly detected Latin-1 Ñoño (normalizer→Big5 ✗)
+- Correctly detected Win1252 Euro sign (normalizer→CP1125 ✗)
+- Correctly detected Win1252 em dash (normalizer→UTF-16-BE ✗)
+
+✗ **charset-normalizer struggles with 8-bit**:
+- Often misdetects as UTF-16-BE or obscure Asian encodings
+- Less reliable for Latin-1, Windows-1252 content
+
+### 3. Performance Characteristics
+
+**chardet is faster**:
+- chardet faster: 19 tests
+- normalizer faster: 4 tests
+- Typical chardet time: ~0.1-0.5 ms per test
+- Typical normalizer time: ~0.5-15 ms (especially slow on ambiguous content)
+
+**charset-normalizer's slowness**:
+- Some tests took 13-15 ms (vs chardet's 0.1-0.4 ms)
+- Appears to do more extensive analysis
+
+### 4. "Normalization" Behavior
+
+**Mixed results:**
+
+✓ **charset-normalizer prefers UTF-8**:
+- More likely to detect UTF-8 for modern content
+- Good for web content, JSON, structured text
+
+✓ **Avoids truly obscure encodings**:
+- 0 MacRoman/MacCyrillic detections
+
+✗ **But uses non-standard encodings**:
+- Detected UTF-16-BE for short Latin-1 content (unusual)
+- Detected obscure Asian encodings (Big5, CP949) for ambiguous bytes
+- chardet detected more "standard" encodings overall (10 vs 9)
+
+### 5. Edge Cases
+
+**Empty content**:
+- chardet: None
+- normalizer: utf-8
+- **Winner**: normalizer (reasonable default)
+
+**Binary content**:
+- Both struggle, but chardet slightly better at staying ASCII
+- normalizer sometimes detects UTF-16-BE for binary
+
+**Ambiguous content**:
+- Both have issues with very short content (<10 bytes)
+- chardet tends toward 8-bit encodings
+- normalizer tends toward multi-byte encodings
+
+## Ground Truth Accuracy (20 tests)
+
+| Detector | Correct | Failed | Accuracy |
+|----------|---------|--------|----------|
+| chardet | 13 | 1 decode failure | 65% |
+| charset-normalizer | 13 | 0 decode failures | 65% |
+
+**Breakdown by encoding family**:
+
+**UTF-8 (12 tests)**:
+- chardet: 7/12 correct (58%)
+- normalizer: 11/12 correct (92%) ✓
+
+**Latin-1/Windows-1252 (6 tests)**:
+- chardet: 5/6 correct (83%) ✓
+- normalizer: 1/6 correct (17%)
+
+**ISO-8859-2 (2 tests)**:
+- chardet: 0/2 correct
+- normalizer: 0/2 correct
+- (Both failed - very hard without more context)
+
+## Confidence Scores
+
+**chardet** provides meaningful confidence:
+- 0.0-1.0 range reflects detection quality
+- High confidence (>0.9) is reliable
+- Low confidence (<0.5) signals uncertainty
+
+**charset-normalizer** coherence is problematic:
+- Most results show 0.0 coherence, even for correct detections
+- Coherence ≠ confidence in the traditional sense
+- Coherence measures text "readability", not detection certainty
+- Coherence therefore cannot be used as a confidence threshold
+
+## Recommendation for Detextive
+
+### Proposed Strategy
+
+Use a **hybrid approach** with situational logic:
+
+```python
+def detect_charset_reliable(content, behaviors):
+ """Reliable charset detection using hybrid approach."""
+
+ # 1. Try charset-normalizer first (UTF-8 preference)
+ norm_result = detect_via_charset_normalizer(content)
+
+ # 2. If normalizer detected UTF-8 or other multi-byte, trust it
+ if norm_result.charset and not is_permissive_charset(norm_result.charset):
+ return norm_result
+
+ # 3. For 8-bit or uncertain, try chardet
+ chardet_result = detect_via_chardet(content)
+
+ # 4. Apply logic:
+ # - If chardet detected multi-byte non-8-bit, prefer it
+ # - If chardet detected 8-bit, verify with trial decode
+ # - If both detected 8-bit, treat as uncertain
+
+ if chardet_result.charset and not is_permissive_charset(chardet_result.charset):
+ # chardet found informative charset
+ if chardet_result.confidence >= behaviors.charset_confidence_threshold:
+ return chardet_result
+
+ # 5. Fall back to defaults with trial decode
+ return try_defaults(content, behaviors)
+```
+
+### Why This Works
+
+1. **UTF-8 preference**: normalizer catches modern UTF-8 content that chardet misses
+2. **8-bit accuracy**: chardet catches Latin-1/Win1252 that normalizer mangles
+3. **Safety net**: `is_permissive_charset()` prevents accepting uninformative 8-bit charsets
+4. **Confidence gating**: Only trust chardet when confidence is high
+
+### Alternative: Just Use chardet
+
+If the hybrid approach is too complex, **stick with chardet**:
+- More consistent behavior across encoding types
+- Better confidence scores
+- Faster performance
+- We can compensate for UTF-8 issues with:
+ - Always trying UTF-8 first in trial decode
+ - Using shortest-wins heuristic
+ - Text validation
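
The "always trying UTF-8 first" compensation needs no detector at all; a stdlib-only sketch (function name and fallback choice are illustrative, not part of the current codebase):

```python
def decode_utf8_first(content: bytes, fallback: str = 'windows-1252') -> tuple[str, str]:
    """Decode bytes, preferring strict UTF-8 over an 8-bit fallback."""
    # Valid UTF-8 rarely occurs by accident, and invalid UTF-8 fails
    # fast here instead of silently mangling text.
    try:
        return content.decode('utf-8'), 'utf-8'
    except UnicodeDecodeError:
        pass
    # windows-1252 covers most Latin-1-family content; errors='replace'
    # guards the few byte values cp1252 leaves unmapped.
    return content.decode(fallback, errors='replace'), fallback
```

Combined with text validation on the result, this recovers most of the UTF-8 cases that chardet misdetects.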
+
+## Test Scripts
+
+All test scripts are available in `.auxiliary/evaluations/`:
+- `compare-charset-detectors.py` - General comparison
+- `test-normalization-behavior.py` - Standard vs obscure encodings
+- `test-decode-accuracy.py` - Ground truth accuracy testing
+
+Run with: `hatch --env develop run python .auxiliary/evaluations/