# Quality Scoring Rubric

**Version**: 1.0.0
**Last Updated**: 2026-02-16
**Authority**: Claude Skills Engineering Team

## Overview

This document defines the comprehensive quality scoring methodology used to assess skills within the claude-skills ecosystem. The scoring system evaluates four key dimensions, each weighted equally at 25%, to provide an objective and consistent measure of skill quality.

## Scoring Framework

### Overall Scoring Scale

- **A+ (95-100)**: Exceptional quality, exceeds all standards
- **A (90-94)**: Excellent quality, meets highest standards consistently
- **A- (85-89)**: Very good quality, minor areas for improvement
- **B+ (80-84)**: Good quality, meets most standards well
- **B (75-79)**: Satisfactory quality, meets standards adequately
- **B- (70-74)**: Below average, several areas need improvement
- **C+ (65-69)**: Poor quality, significant improvements needed
- **C (60-64)**: Minimal acceptable quality, major improvements required
- **C- (55-59)**: Unacceptable quality, extensive rework needed
- **D (50-54)**: Very poor quality, fundamental issues present
- **F (0-49)**: Failing quality, does not meet basic standards

### Dimension Weights

Each dimension contributes equally to the overall score:

- **Documentation Quality**: 25%
- **Code Quality**: 25%
- **Completeness**: 25%
- **Usability**: 25%

## Documentation Quality (25% Weight)

### Scoring Components

#### SKILL.md Quality (40% of Documentation Score)

**Component Breakdown:**

- **Length and Depth (25%)**: Line count and content substance
- **Frontmatter Quality (25%)**: Completeness and accuracy of YAML metadata
- **Section Coverage (25%)**: Required and recommended section presence
- **Content Depth (25%)**: Technical detail and comprehensiveness

**Scoring Criteria:**

| Score Range | Length | Frontmatter | Sections | Depth |
|-------------|--------|-------------|----------|-------|
| 90-100 | 400+ lines | All fields complete + extras | All required + 4+ recommended | Rich technical detail, examples |
| 80-89 | 300-399 lines | All required fields complete | All required + 2-3 recommended | Good technical coverage |
| 70-79 | 200-299 lines | Most required fields | All required + 1 recommended | Adequate technical content |
| 60-69 | 150-199 lines | Some required fields | Most required sections | Basic technical information |
| 50-59 | 100-149 lines | Minimal frontmatter | Some required sections | Limited technical detail |
| Below 50 | <100 lines | Missing/invalid frontmatter | Few/no required sections | Insufficient content |

#### README.md Quality (25% of Documentation Score)

**Scoring Criteria:**

- **Excellent (90-100)**: 1000+ chars, comprehensive usage guide, examples, troubleshooting
- **Good (75-89)**: 500-999 chars, clear usage instructions, basic examples
- **Satisfactory (60-74)**: 200-499 chars, minimal usage information
- **Poor (40-59)**: <200 chars or confusing content
- **Failing (0-39)**: Missing or completely inadequate

#### Reference Documentation (20% of Documentation Score)

**Scoring Criteria:**

- **Excellent (90-100)**: Multiple comprehensive reference docs (2000+ chars total)
- **Good (75-89)**: 2-3 reference files with substantial content
- **Satisfactory (60-74)**: 1-2 reference files with adequate content
- **Poor (40-59)**: Minimal reference content or poor quality
- **Failing (0-39)**: No reference documentation

#### Examples and Usage Clarity (15% of Documentation Score)

**Scoring Criteria:**

- **Excellent (90-100)**: 5+ diverse examples, clear usage patterns
- **Good (75-89)**: 3-4 examples covering different scenarios
- **Satisfactory (60-74)**: 2-3 basic examples
- **Poor (40-59)**: 1-2 minimal examples
- **Failing (0-39)**: No examples or unclear usage

## Code Quality (25% Weight)

### Scoring Components

#### Script Complexity and Architecture (25% of Code Score)

**Evaluation Criteria:**

- Lines of code per script relative to tier requirements
- Function and class organization
- Code modularity and reusability
- Algorithm sophistication

**Scoring Matrix:**

| Tier | Excellent (90-100) | Good (75-89) | Satisfactory (60-74) | Poor (Below 60) |
|------|-------------------|--------------|---------------------|-----------------|
| BASIC | 200-300 LOC, well-structured | 150-199 LOC, organized | 100-149 LOC, basic | <100 LOC, minimal |
| STANDARD | 400-500 LOC, modular | 350-399 LOC, structured | 300-349 LOC, adequate | <300 LOC, basic |
| POWERFUL | 600-800 LOC, sophisticated | 550-599 LOC, advanced | 500-549 LOC, solid | <500 LOC, simple |

#### Error Handling Quality (25% of Code Score)

**Scoring Criteria:**

- **Excellent (90-100)**: Comprehensive exception handling, specific error types, recovery mechanisms
- **Good (75-89)**: Good exception handling, meaningful error messages, logging
- **Satisfactory (60-74)**: Basic try/except blocks, simple error messages
- **Poor (40-59)**: Minimal error handling, generic exceptions
- **Failing (0-39)**: No error handling or inappropriate handling

**Error Handling Checklist:**

- [ ] Try/except blocks for risky operations
- [ ] Specific exception types (not just Exception)
- [ ] Meaningful error messages for users
- [ ] Proper error logging or reporting
- [ ] Graceful degradation where possible
- [ ] Input validation and sanitization
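
As a sketch of what this checklist looks like in practice (the `load_config` helper, its file format, and its messages are illustrative, not part of any shipped script):

```python
import json
import logging
import sys

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)


def load_config(path):
    """Load and validate a JSON config, failing with actionable messages."""
    try:
        with open(path, encoding="utf-8") as f:
            data = json.load(f)
    except FileNotFoundError:
        # Specific exception type, meaningful message, proper logging
        logger.error("Config file not found: %s", path)
        sys.exit(1)
    except json.JSONDecodeError as exc:
        logger.error("Invalid JSON in %s (line %d): %s", path, exc.lineno, exc.msg)
        sys.exit(1)
    # Input validation after a successful parse
    if not isinstance(data, dict):
        raise TypeError(f"Expected a JSON object in {path}, got {type(data).__name__}")
    return data
```

Note that `Exception` is never caught bare: each failure mode gets its own handler and message, which is what separates the Excellent and Poor bands above.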

#### Code Structure and Organization (25% of Code Score)

**Evaluation Elements:**

- Function decomposition and single responsibility
- Class design and inheritance patterns
- Import organization and dependency management
- Documentation and comment quality
- Consistent naming conventions
- PEP 8 compliance

**Scoring Guidelines:**

- **Excellent (90-100)**: Exemplary structure, comprehensive docstrings, perfect style
- **Good (75-89)**: Well-organized, good documentation, minor style issues
- **Satisfactory (60-74)**: Adequate structure, basic documentation, some style issues
- **Poor (40-59)**: Poor organization, minimal documentation, style problems
- **Failing (0-39)**: No clear structure, no documentation, major style violations

#### Output Format Support (25% of Code Score)

**Required Capabilities:**

- JSON output format support
- Human-readable output format
- Proper data serialization
- Consistent output structure
- Error output handling

**Scoring Criteria:**

- **Excellent (90-100)**: Dual format + custom formats, perfect serialization
- **Good (75-89)**: Dual format support, good serialization
- **Satisfactory (60-74)**: Single format well-implemented
- **Poor (40-59)**: Basic output, formatting issues
- **Failing (0-39)**: Poor or no structured output
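
A minimal sketch of dual-format support, assuming a flat result dictionary (the `render` helper and its layout are illustrative):

```python
import json


def render(result: dict, as_json: bool = False) -> str:
    """Serialize a result dict as JSON or as a human-readable report."""
    if as_json:
        # Machine-readable: stable key order, proper serialization
        return json.dumps(result, indent=2, sort_keys=True)
    # Human-readable: one aligned "key : value" line per field
    width = max(len(k) for k in result)
    return "\n".join(f"{k.ljust(width)} : {v}" for k, v in result.items())
```

Both branches consume the same data structure, so the two formats cannot drift apart — the consistency the rubric rewards.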

## Completeness (25% Weight)

### Scoring Components

#### Directory Structure Compliance (25% of Completeness Score)

**Required Directories by Tier:**

- **BASIC**: scripts/ (required), assets/ + references/ (recommended)
- **STANDARD**: scripts/ + assets/ + references/ (required), expected_outputs/ (recommended)
- **POWERFUL**: scripts/ + assets/ + references/ + expected_outputs/ (all required)

**Scoring Calculation:**

```
Structure Score = (Required Present / Required Total) * 0.6 +
                  (Recommended Present / Recommended Total) * 0.4
```
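
A possible implementation of this calculation, scaled to the rubric's 0-100 range (the function name and the handling of tiers with no recommended directories are assumptions):

```python
from pathlib import Path


def structure_score(skill_dir, required, recommended):
    """Directory compliance: required dirs weigh 60%, recommended 40%."""
    root = Path(skill_dir)
    req_ratio = sum((root / d).is_dir() for d in required) / len(required)
    # POWERFUL has no recommended dirs; treat an empty list as fully satisfied
    rec_ratio = (sum((root / d).is_dir() for d in recommended) / len(recommended)
                 if recommended else 1.0)
    return (req_ratio * 0.6 + rec_ratio * 0.4) * 100  # scale to 0-100
```

For a BASIC skill with only `scripts/` present, the required term contributes the full 60 points and the recommended term contributes nothing.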

#### Asset Availability and Quality (25% of Completeness Score)

**Scoring Criteria:**

- **Excellent (90-100)**: 5+ diverse assets, multiple file types, realistic data
- **Good (75-89)**: 3-4 assets, some diversity, good quality
- **Satisfactory (60-74)**: 2-3 assets, basic variety
- **Poor (40-59)**: 1-2 minimal assets
- **Failing (0-39)**: No assets or unusable assets

**Asset Quality Factors:**

- File diversity (JSON, CSV, YAML, etc.)
- Data realism and complexity
- Coverage of use cases
- File size appropriateness
- Documentation of asset purpose

#### Expected Output Coverage (25% of Completeness Score)

**Evaluation Criteria:**

- Correspondence with asset files
- Coverage of success and error scenarios
- Output format variety
- Reproducibility and accuracy

**Scoring Matrix:**

- **Excellent (90-100)**: Complete output coverage, all scenarios, verified accuracy
- **Good (75-89)**: Good coverage, most scenarios, mostly accurate
- **Satisfactory (60-74)**: Basic coverage, main scenarios
- **Poor (40-59)**: Minimal coverage, some inaccuracies
- **Failing (0-39)**: No expected outputs or completely inaccurate

#### Test Coverage and Validation (25% of Completeness Score)

**Assessment Areas:**

- Sample data processing capability
- Output verification mechanisms
- Edge case handling
- Error condition testing
- Integration test scenarios

**Scoring Guidelines:**

- **Excellent (90-100)**: Comprehensive test coverage, automated validation
- **Good (75-89)**: Good test coverage, manual validation possible
- **Satisfactory (60-74)**: Basic testing capability
- **Poor (40-59)**: Minimal testing support
- **Failing (0-39)**: No testing or validation capability

## Usability (25% Weight)

### Scoring Components

#### Installation and Setup Simplicity (25% of Usability Score)

**Evaluation Factors:**

- Dependency requirements (Python stdlib preferred)
- Setup complexity
- Environment requirements
- Installation documentation clarity

**Scoring Criteria:**

- **Excellent (90-100)**: Zero external dependencies, single-file execution
- **Good (75-89)**: Minimal dependencies, simple setup
- **Satisfactory (60-74)**: Some dependencies, documented setup
- **Poor (40-59)**: Complex dependencies, unclear setup
- **Failing (0-39)**: Unable to install or excessive complexity

#### Usage Clarity and Help Quality (25% of Usability Score)

**Assessment Elements:**

- Command-line help comprehensiveness
- Usage example clarity
- Parameter documentation quality
- Error message helpfulness

**Help Quality Checklist:**

- [ ] Comprehensive --help output
- [ ] Clear parameter descriptions
- [ ] Usage examples included
- [ ] Error messages are actionable
- [ ] Progress indicators where appropriate
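
A small `argparse` setup that addresses most of these checklist items; the flag names and descriptions are illustrative:

```python
import argparse


def build_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(
        description="Process sample data and report quality metrics.",
        epilog="Example: %(prog)s data.json --format json",  # usage example in --help
    )
    parser.add_argument("input", help="Path to the input data file")
    parser.add_argument(
        "--format", choices=["text", "json"], default="text",
        help="Output format (default: %(default)s)",  # clear parameter description
    )
    parser.add_argument("--verbose", action="store_true",
                        help="Show progress information while processing")
    return parser
```

The `epilog` and `%(default)s` interpolation give reviewers the usage example and default documentation the checklist asks for without any extra code.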

**Scoring Matrix:**

- **Excellent (90-100)**: Exemplary help, multiple examples, perfect error messages
- **Good (75-89)**: Good help quality, clear examples, helpful errors
- **Satisfactory (60-74)**: Adequate help, basic examples
- **Poor (40-59)**: Minimal help, confusing interface
- **Failing (0-39)**: No help or completely unclear interface

#### Documentation Accessibility (25% of Usability Score)

**Evaluation Criteria:**

- README quick start effectiveness
- SKILL.md navigation and structure
- Reference material organization
- Learning curve considerations

**Accessibility Factors:**

- Information hierarchy clarity
- Cross-reference quality
- Beginner-friendly explanations
- Advanced user shortcuts
- Troubleshooting guidance

#### Practical Example Quality (25% of Usability Score)

**Assessment Areas:**

- Example realism and relevance
- Complexity progression (simple to advanced)
- Output demonstration
- Common use case coverage
- Integration scenarios

**Scoring Guidelines:**

- **Excellent (90-100)**: 5+ examples, perfect progression, real-world scenarios
- **Good (75-89)**: 3-4 examples, good variety, practical scenarios
- **Satisfactory (60-74)**: 2-3 examples, adequate coverage
- **Poor (40-59)**: 1-2 examples, limited practical value
- **Failing (0-39)**: No examples or completely impractical

## Scoring Calculations

### Dimension Score Calculation

Each dimension score is calculated as a weighted average of its components:

```python
def calculate_dimension_score(components):
    total_weighted_score = 0
    total_weight = 0

    for component_data in components.values():
        score = component_data['score']
        weight = component_data['weight']
        total_weighted_score += score * weight
        total_weight += weight

    return total_weighted_score / total_weight if total_weight > 0 else 0
```

### Overall Score Calculation

The overall score combines all dimensions with equal weighting:

```python
def calculate_overall_score(dimensions):
    return sum(dimension.score * 0.25 for dimension in dimensions.values())
```

### Letter Grade Assignment

```python
def assign_letter_grade(overall_score):
    if overall_score >= 95:
        return "A+"
    elif overall_score >= 90:
        return "A"
    elif overall_score >= 85:
        return "A-"
    elif overall_score >= 80:
        return "B+"
    elif overall_score >= 75:
        return "B"
    elif overall_score >= 70:
        return "B-"
    elif overall_score >= 65:
        return "C+"
    elif overall_score >= 60:
        return "C"
    elif overall_score >= 55:
        return "C-"
    elif overall_score >= 50:
        return "D"
    else:
        return "F"
```

## Quality Improvement Recommendations

### Score-Based Recommendations

#### For Scores Below 60 (C- or Lower)

**Priority Actions:**

1. Address fundamental structural issues
2. Implement basic error handling
3. Add essential documentation sections
4. Create minimal viable examples
5. Fix critical functionality issues

#### For Scores 60-74 (C to B-)

**Improvement Areas:**

1. Expand documentation comprehensiveness
2. Enhance error handling sophistication
3. Add more diverse examples and use cases
4. Improve code organization and structure
5. Increase test coverage and validation

#### For Scores 75-84 (B to B+)

**Enhancement Opportunities:**

1. Refine documentation for expert-level quality
2. Implement advanced error recovery mechanisms
3. Add comprehensive reference materials
4. Optimize code architecture and performance
5. Develop an extensive example library

#### For Scores 85+ (A- or Higher)

**Excellence Maintenance:**

1. Regular quality audits and updates
2. Community feedback integration
3. Best practice evolution tracking
4. Mentoring lower-quality skills
5. Innovation and cutting-edge feature adoption

### Dimension-Specific Improvement Strategies

#### Low Documentation Scores

- Expand SKILL.md with technical details
- Add comprehensive API reference
- Include architecture diagrams and explanations
- Develop troubleshooting guides
- Create contributor documentation

#### Low Code Quality Scores

- Refactor for better modularity
- Implement comprehensive error handling
- Add extensive code documentation
- Apply advanced design patterns
- Optimize performance and efficiency

#### Low Completeness Scores

- Add missing directories and files
- Develop comprehensive sample datasets
- Create expected output libraries
- Implement automated testing
- Add integration examples

#### Low Usability Scores

- Simplify installation process
- Improve command-line interface design
- Enhance help text and documentation
- Create beginner-friendly tutorials
- Add interactive examples

## Quality Assurance Process

### Automated Scoring

The quality scorer runs automated assessments based on this rubric:

1. File system analysis for structure compliance
2. Content analysis for documentation quality
3. Code analysis for quality metrics
4. Asset inventory and quality assessment

### Manual Review Process

Human reviewers validate automated scores and provide qualitative insights:

1. Content quality assessment beyond automated metrics
2. Usability testing with real-world scenarios
3. Technical accuracy verification
4. Community value assessment

### Continuous Improvement

The scoring rubric evolves based on:

- Community feedback and usage patterns
- Industry best practice changes
- Tool capability enhancements
- Quality trend analysis

This quality scoring rubric ensures consistent, objective, and comprehensive assessment of all skills within the claude-skills ecosystem while providing clear guidance for quality improvement.

# Skill Structure Specification

**Version**: 1.0.0
**Last Updated**: 2026-02-16
**Authority**: Claude Skills Engineering Team

## Overview

This document defines the mandatory and optional components that constitute a well-formed skill within the claude-skills ecosystem. All skills must adhere to these structural requirements to ensure consistency, maintainability, and quality across the repository.

## Directory Structure

### Mandatory Components

```
skill-name/
├── SKILL.md              # Primary skill documentation (REQUIRED)
├── README.md             # Usage instructions and quick start (REQUIRED)
└── scripts/              # Python implementation scripts (REQUIRED)
    └── *.py              # At least one Python script
```
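
A quick structural check for this mandatory layout might look like the following sketch; it is not the official `skill_validator.py`:

```python
from pathlib import Path


def has_mandatory_components(skill_dir) -> bool:
    """True if SKILL.md, README.md, and scripts/*.py are all present."""
    root = Path(skill_dir)
    return (
        (root / "SKILL.md").is_file()
        and (root / "README.md").is_file()
        and any((root / "scripts").glob("*.py"))  # at least one Python script
    )
```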

### Recommended Components

```
skill-name/
├── SKILL.md
├── README.md
├── scripts/
│   └── *.py
├── assets/               # Sample data and input files (RECOMMENDED)
│   ├── samples/
│   ├── examples/
│   └── data/
├── references/           # Reference documentation (RECOMMENDED)
│   ├── api-reference.md
│   ├── specifications.md
│   └── external-links.md
└── expected_outputs/     # Expected results for testing (RECOMMENDED)
    ├── sample_output.json
    ├── example_results.txt
    └── test_cases/
```

### Optional Components

```
skill-name/
├── [mandatory and recommended components]
├── tests/                # Unit tests and validation scripts
├── examples/             # Extended examples and tutorials
├── docs/                 # Additional documentation
├── config/               # Configuration files
└── templates/            # Template files for code generation
```

## File Requirements

### SKILL.md Requirements

The `SKILL.md` file serves as the primary documentation for the skill and must contain:

#### Mandatory YAML Frontmatter

```yaml
---
Name: skill-name
Tier: [BASIC|STANDARD|POWERFUL]
Category: [Category Name]
Dependencies: [None|List of dependencies]
Author: [Author Name]
Version: [Semantic Version]
Last Updated: [YYYY-MM-DD]
---
```
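
A stdlib-only sketch of how this frontmatter could be parsed and checked for required fields (the regex-based approach is an assumption, not the official validator):

```python
import re

FRONTMATTER_RE = re.compile(r"\A---\n(.*?)\n---\n", re.DOTALL)
REQUIRED_FIELDS = {"Name", "Tier", "Category", "Dependencies",
                   "Author", "Version", "Last Updated"}


def parse_frontmatter(text: str) -> dict:
    """Extract 'Key: value' pairs from a SKILL.md frontmatter block."""
    match = FRONTMATTER_RE.match(text)
    if not match:
        raise ValueError("SKILL.md is missing its YAML frontmatter block")
    fields = {}
    for line in match.group(1).splitlines():
        key, _, value = line.partition(":")
        fields[key.strip()] = value.strip()
    missing = REQUIRED_FIELDS - fields.keys()
    if missing:
        raise ValueError(f"Missing frontmatter fields: {sorted(missing)}")
    return fields
```

Because the frontmatter is flat key-value pairs, `str.partition` is sufficient here; nested YAML would require a real parser.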

#### Required Sections

- **Description**: Comprehensive overview of the skill's purpose and capabilities
- **Features**: Detailed list of key features and functionality
- **Usage**: Instructions for using the skill and its components
- **Examples**: Practical usage examples with expected outcomes

#### Recommended Sections

- **Architecture**: Technical architecture and design decisions
- **Installation**: Setup and installation instructions
- **Configuration**: Configuration options and parameters
- **Troubleshooting**: Common issues and solutions
- **Contributing**: Guidelines for contributors
- **Changelog**: Version history and changes

#### Content Requirements by Tier

- **BASIC**: Minimum 100 lines of substantial content
- **STANDARD**: Minimum 200 lines of substantial content
- **POWERFUL**: Minimum 300 lines of substantial content

### README.md Requirements

The `README.md` file provides quick start instructions and must include:

#### Mandatory Content

- Brief description of the skill
- Quick start instructions
- Basic usage examples
- Link to the full SKILL.md documentation

#### Recommended Content

- Installation instructions
- Prerequisites and dependencies
- Command-line usage examples
- Troubleshooting section
- Contributing guidelines

#### Length Requirements

- Minimum 200 characters of substantial content
- Recommended 500+ characters for comprehensive coverage

### Scripts Directory Requirements

The `scripts/` directory contains all Python implementation files:

#### Mandatory Requirements

- At least one Python (.py) file
- All scripts must be executable Python 3.7+
- No external dependencies outside the Python standard library
- Proper file naming conventions (lowercase, hyphens for separation)

#### Script Content Requirements

- **Shebang line**: `#!/usr/bin/env python3`
- **Module docstring**: Comprehensive description of script purpose
- **Argparse implementation**: Command-line argument parsing
- **Main guard**: `if __name__ == "__main__":` protection
- **Error handling**: Appropriate exception handling and user feedback
- **Dual output**: Support for both JSON and human-readable output formats
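
Taken together, a minimal script satisfying these requirements could look like the sketch below (far shorter than the tier LOC minimums; the record-counting task and all names are illustrative):

```python
#!/usr/bin/env python3
"""Example skill script: count records in a JSON array file."""

import argparse
import json
import sys


def count_records(path):
    """Return the number of records in a JSON array file."""
    try:
        with open(path, encoding="utf-8") as f:
            data = json.load(f)
    except (OSError, json.JSONDecodeError) as exc:
        sys.exit(f"Error reading {path}: {exc}")  # actionable error message
    return len(data)


def main(argv=None):
    parser = argparse.ArgumentParser(description=__doc__)
    parser.add_argument("input", nargs="?", help="JSON array file to count")
    parser.add_argument("--json", action="store_true", help="emit JSON output")
    args = parser.parse_args(argv)
    if args.input is None:          # no file given: show usage, don't crash
        parser.print_usage()
        return
    count = count_records(args.input)
    if args.json:                   # dual output: machine-readable
        print(json.dumps({"input": args.input, "count": count}))
    else:                           # dual output: human-readable
        print(f"{args.input}: {count} records")


if __name__ == "__main__":          # main guard
    main()
```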

#### Script Size Requirements by Tier

- **BASIC**: 100-300 lines of code per script
- **STANDARD**: 300-500 lines of code per script
- **POWERFUL**: 500-800 lines of code per script

### Assets Directory Structure

The `assets/` directory contains sample data and supporting files:

```
assets/
├── samples/              # Sample input data
│   ├── simple_example.json
│   ├── complex_dataset.csv
│   └── test_configuration.yaml
├── examples/             # Example files demonstrating usage
│   ├── basic_workflow.py
│   ├── advanced_usage.sh
│   └── integration_example.md
└── data/                 # Static data files
    ├── reference_data.json
    ├── lookup_tables.csv
    └── configuration_templates/
```

#### Content Requirements

- At least 2 sample files demonstrating different use cases
- Files should represent realistic usage scenarios
- Include both simple and complex examples where applicable
- Provide diverse file formats (JSON, CSV, YAML, etc.)

### References Directory Structure

The `references/` directory contains detailed reference documentation:

```
references/
├── api-reference.md      # Complete API documentation
├── specifications.md     # Technical specifications and requirements
├── external-links.md     # Links to related resources
├── algorithms.md         # Algorithm descriptions and implementations
└── best-practices.md     # Usage best practices and patterns
```

#### Content Requirements

- Each file should contain substantial technical content (500+ words)
- Include code examples and technical specifications
- Provide external references and links where appropriate
- Maintain consistent documentation format and style

### Expected Outputs Directory Structure

The `expected_outputs/` directory contains reference outputs for testing:

```
expected_outputs/
├── basic_example_output.json
├── complex_scenario_result.txt
├── error_cases/
│   ├── invalid_input_error.json
│   └── timeout_error.txt
└── test_cases/
    ├── unit_test_outputs/
    └── integration_test_results/
```

#### Content Requirements

- Outputs correspond to sample inputs in the assets/ directory
- Include both successful and error case examples
- Provide outputs in multiple formats (JSON, text, CSV)
- Ensure outputs are reproducible and verifiable

## Naming Conventions

### Directory Names

- Use lowercase letters only
- Use hyphens (-) to separate words
- Keep names concise but descriptive
- Avoid special characters and spaces

Examples: `data-processor`, `api-client`, `ml-trainer`

### File Names

- Use lowercase letters for Python scripts
- Use hyphens (-) to separate words in script names
- Use underscores (_) only when required by Python conventions
- Use descriptive names that indicate purpose

Examples: `data-processor.py`, `api-client.py`, `quality_scorer.py`

### Script Internal Naming

- Use PascalCase for class names
- Use snake_case for function and variable names
- Use UPPER_CASE for constants
- Use descriptive names that indicate purpose
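
Applied together, these conventions look like (all names are illustrative):

```python
MAX_RETRIES = 3  # UPPER_CASE constant


class ReportGenerator:  # PascalCase class name
    def render_summary(self, record_count):  # snake_case function name
        total_lines = record_count + 2       # snake_case variable name
        return f"Report: {record_count} records, {total_lines} lines"
```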

## Quality Standards

### Documentation Standards

- All documentation must be written in clear, professional English
- Use proper Markdown formatting and structure
- Include code examples with syntax highlighting
- Provide comprehensive coverage of all features
- Maintain consistent terminology throughout

### Code Standards

- Follow PEP 8 Python style guidelines
- Include comprehensive docstrings for all functions and classes
- Implement proper error handling with meaningful error messages
- Use type hints where appropriate
- Maintain reasonable code complexity and readability

### Testing Standards

- Provide sample data that exercises all major functionality
- Include expected outputs for verification
- Cover both successful and error scenarios
- Ensure reproducible results across different environments

## Validation Criteria

Skills are validated against the following criteria:

### Structural Validation

- All mandatory files and directories present
- Proper file naming conventions followed
- Directory structure matches specification
- File permissions and accessibility correct

### Content Validation

- SKILL.md meets minimum length and section requirements
- README.md provides adequate quick start information
- Scripts contain required components (argparse, main guard, etc.)
- Sample data and expected outputs are complete and realistic

### Quality Validation

- Documentation is comprehensive and accurate
- Code follows established style and quality guidelines
- Examples are practical and demonstrate real usage
- Error handling is appropriate and user-friendly

## Compliance Levels

### Full Compliance

- All mandatory components present and complete
- All recommended components present with substantial content
- Exceeds minimum quality thresholds for tier
- Demonstrates best practices throughout

### Partial Compliance

- All mandatory components present
- Most recommended components present
- Meets minimum quality thresholds for tier
- Generally follows established patterns

### Non-Compliance

- Missing mandatory components
- Inadequate content quality or length
- Does not meet minimum tier requirements
- Significant deviations from established standards

## Migration and Updates

### Existing Skills

Skills created before this specification should be updated to comply within:

- **POWERFUL tier**: 30 days
- **STANDARD tier**: 60 days
- **BASIC tier**: 90 days

### Specification Updates

- Changes to this specification require team consensus
- Breaking changes must provide a 90-day migration period
- All changes must be documented with rationale and examples
- Automated validation tools must be updated accordingly

## Tools and Automation

### Validation Tools

- `skill_validator.py` - Validates structure and content compliance
- `script_tester.py` - Tests script functionality and quality
- `quality_scorer.py` - Provides comprehensive quality assessment

### Integration Points

- Pre-commit hooks for basic validation
- CI/CD pipeline integration for pull request validation
- Automated quality reporting and tracking
- Integration with code review processes

## Examples and Templates

### Minimal BASIC Tier Example

```
basic-skill/
├── SKILL.md              # 100+ lines
├── README.md             # Basic usage instructions
└── scripts/
    └── main.py           # 100-300 lines with argparse
```

### Complete POWERFUL Tier Example

```
powerful-skill/
├── SKILL.md              # 300+ lines with comprehensive sections
├── README.md             # Detailed usage and setup
├── scripts/              # Multiple sophisticated scripts
│   ├── main_processor.py     # 500-800 lines
│   ├── data_analyzer.py      # 500-800 lines
│   └── report_generator.py   # 500-800 lines
├── assets/               # Diverse sample data
│   ├── samples/
│   ├── examples/
│   └── data/
├── references/           # Comprehensive documentation
│   ├── api-reference.md
│   ├── specifications.md
│   └── best-practices.md
└── expected_outputs/     # Complete test outputs
    ├── json_outputs/
    ├── text_reports/
    └── error_cases/
```

This specification serves as the authoritative guide for skill structure within the claude-skills ecosystem. Adherence to these standards ensures consistency, quality, and maintainability across all skills in the repository.

# Tier Requirements Matrix

**Version**: 1.0.0
**Last Updated**: 2026-02-16
**Authority**: Claude Skills Engineering Team

## Overview

This document provides a comprehensive matrix of requirements for each skill tier within the claude-skills ecosystem. Skills are classified into three tiers based on complexity, functionality, and comprehensiveness: BASIC, STANDARD, and POWERFUL.

## Tier Classification Philosophy

### BASIC Tier

Entry-level skills that provide fundamental functionality with minimal complexity. Suitable for simple automation tasks, basic data processing, or straightforward utilities.

### STANDARD Tier

Intermediate skills that offer enhanced functionality with moderate complexity. Suitable for business processes, advanced data manipulation, or multi-step workflows.

### POWERFUL Tier

Advanced skills that provide comprehensive functionality with sophisticated implementation. Suitable for complex systems, enterprise-grade tools, or mission-critical applications.
## Requirements Matrix

| Component | BASIC | STANDARD | POWERFUL |
|-----------|-------|----------|----------|
| **SKILL.md Lines** | ≥100 | ≥200 | ≥300 |
| **Scripts Count** | ≥1 | ≥1 | ≥2 |
| **Script Size (LOC)** | 100-300 | 300-500 | 500-800 |
| **Required Directories** | scripts | scripts, assets, references | scripts, assets, references, expected_outputs |
| **Argparse Implementation** | Basic | Advanced | Complex with subcommands |
| **Output Formats** | Human-readable | JSON + Human-readable | JSON + Human-readable + Custom |
| **Error Handling** | Basic | Comprehensive | Advanced with recovery |
| **Documentation Depth** | Functional | Comprehensive | Expert-level |
| **Examples Provided** | ≥1 | ≥3 | ≥5 |
| **Test Coverage** | Basic validation | Sample data testing | Comprehensive test suite |
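The matrix above can also be encoded as data so tooling can check a skill's measured metrics against a target tier. A minimal sketch, assuming metric key names like `skill_md_lines` (the actual scorer's schema may differ):

```python
# Quantitative floors from the requirements matrix, keyed by tier.
# Key names are illustrative; the real quality scorer may use a different schema.
TIER_FLOORS = {
    "BASIC": {"skill_md_lines": 100, "script_count": 1, "min_script_loc": 100,
              "required_dirs": {"scripts"}},
    "STANDARD": {"skill_md_lines": 200, "script_count": 1, "min_script_loc": 300,
                 "required_dirs": {"scripts", "assets", "references"}},
    "POWERFUL": {"skill_md_lines": 300, "script_count": 2, "min_script_loc": 500,
                 "required_dirs": {"scripts", "assets", "references", "expected_outputs"}},
}

def meets_tier(metrics, tier):
    """Return True if the measured metrics satisfy every floor for the given tier."""
    floors = TIER_FLOORS[tier]
    return (metrics["skill_md_lines"] >= floors["skill_md_lines"]
            and metrics["script_count"] >= floors["script_count"]
            and metrics["min_script_loc"] >= floors["min_script_loc"]
            and floors["required_dirs"] <= set(metrics["dirs"]))
```

Here `metrics` would come from an automated scan of the skill directory; a call such as `meets_tier(metrics, "POWERFUL")` can gate promotion checks in CI.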
## Detailed Requirements by Tier

### BASIC Tier Requirements

#### Documentation Requirements
- **SKILL.md**: Minimum 100 lines of substantial content
- **Required Sections**: Name, Description, Features, Usage, Examples
- **README.md**: Basic usage instructions (200+ characters)
- **Content Quality**: Clear and functional documentation
- **Examples**: At least 1 practical usage example

#### Code Requirements
- **Scripts**: Minimum 1 Python script (100-300 LOC)
- **Argparse**: Basic command-line argument parsing
- **Main Guard**: `if __name__ == "__main__":` protection
- **Dependencies**: Python standard library only
- **Output**: Human-readable format with clear messaging
- **Error Handling**: Basic exception handling with user-friendly messages

#### Structure Requirements
- **Mandatory Directories**: `scripts/`
- **Recommended Directories**: `assets/`, `references/`
- **File Organization**: Logical file naming and structure
- **Assets**: Optional sample data files

#### Quality Standards
- **Code Style**: Follows basic Python conventions
- **Documentation**: Adequate coverage of functionality
- **Usability**: Clear usage instructions and examples
- **Completeness**: All essential components present
### STANDARD Tier Requirements

#### Documentation Requirements
- **SKILL.md**: Minimum 200 lines with comprehensive coverage
- **Required Sections**: All BASIC sections plus Architecture, Installation
- **README.md**: Detailed usage instructions (500+ characters)
- **References**: Technical documentation in `references/` directory
- **Content Quality**: Professional-grade documentation with technical depth
- **Examples**: At least 3 diverse usage examples

#### Code Requirements
- **Scripts**: 1-2 Python scripts (300-500 LOC each)
- **Argparse**: Advanced argument parsing with subcommands and validation
- **Output Formats**: Both JSON and human-readable output support
- **Error Handling**: Comprehensive exception handling with specific error types
- **Code Structure**: Well-organized classes and functions
- **Documentation**: Comprehensive docstrings for all functions

#### Structure Requirements
- **Mandatory Directories**: `scripts/`, `assets/`, `references/`
- **Recommended Directories**: `expected_outputs/`
- **Assets**: Multiple sample files demonstrating different use cases
- **References**: Technical specifications and API documentation
- **Expected Outputs**: Sample results for validation

#### Quality Standards
- **Code Quality**: Advanced Python patterns and best practices
- **Documentation**: Professional-grade, comprehensive technical documentation
- **Testing**: Sample data processing with validation
- **Integration**: Consideration for CI/CD and automation use
### POWERFUL Tier Requirements

#### Documentation Requirements
- **SKILL.md**: Minimum 300 lines with expert-level comprehensiveness
- **Required Sections**: All STANDARD sections plus Troubleshooting, Contributing, Advanced Usage
- **README.md**: Comprehensive guide with installation and setup (1000+ characters)
- **References**: Multiple technical documents with specifications
- **Content Quality**: Publication-ready documentation with architectural details
- **Examples**: At least 5 examples covering simple to complex scenarios

#### Code Requirements
- **Scripts**: 2-3 Python scripts (500-800 LOC each)
- **Argparse**: Complex argument parsing with multiple modes and configurations
- **Output Formats**: JSON, human-readable, and custom format support
- **Error Handling**: Advanced error handling with recovery mechanisms
- **Code Architecture**: Sophisticated design patterns and modular structure
- **Performance**: Optimized for efficiency and scalability

#### Structure Requirements
- **Mandatory Directories**: `scripts/`, `assets/`, `references/`, `expected_outputs/`
- **Optional Directories**: `tests/`, `examples/`, `docs/`
- **Assets**: Comprehensive sample data covering edge cases
- **References**: Complete technical specification suite
- **Expected Outputs**: Full test result coverage including error cases
- **Testing**: Comprehensive validation and test coverage

#### Quality Standards
- **Enterprise Grade**: Production-ready code with enterprise patterns
- **Documentation**: Comprehensive technical documentation suitable for technical teams
- **Integration**: Full CI/CD integration capabilities
- **Maintainability**: Designed for long-term maintenance and extension
## Tier Assessment Criteria

### Automatic Tier Classification
Skills are automatically classified based on quantitative metrics:

```python
def classify_tier(skill_metrics):
    """Return the highest tier whose quantitative floors the skill meets."""
    # all_required_dirs_present() is assumed to check the listed directories
    # against the skill's actual layout.
    if (skill_metrics['skill_md_lines'] >= 300 and
            skill_metrics['script_count'] >= 2 and
            skill_metrics['min_script_size'] >= 500 and
            all_required_dirs_present(['scripts', 'assets', 'references', 'expected_outputs'])):
        return 'POWERFUL'
    elif (skill_metrics['skill_md_lines'] >= 200 and
            skill_metrics['script_count'] >= 1 and
            skill_metrics['min_script_size'] >= 300 and
            all_required_dirs_present(['scripts', 'assets', 'references'])):
        return 'STANDARD'
    else:
        return 'BASIC'
```
### Manual Tier Override
Manual tier assignment may be considered when:
- The skill provides exceptional value despite not meeting all quantitative requirements
- The skill addresses critical infrastructure or security needs
- The skill demonstrates innovative approaches or cutting-edge techniques
- The skill provides essential integration or compatibility functions

### Tier Promotion Criteria
Skills may be promoted to a higher tier when:
- All quantitative requirements for the higher tier are met
- Quality assessment scores exceed the tier's thresholds
- Community usage and feedback indicate higher value
- Continuous integration and maintenance demonstrate reliability

### Tier Demotion Criteria
Skills may be demoted to a lower tier when:
- Quality degrades below tier standards
- The skill lacks maintenance or updates
- Compatibility issues or security vulnerabilities emerge
- Community feedback indicates reduced value
## Implementation Guidelines by Tier

### BASIC Tier Implementation
```python
# Example argparse implementation for the BASIC tier.
# process_input() stands in for the skill's own processing function.
import argparse
import sys

parser = argparse.ArgumentParser(description="Basic skill functionality")
parser.add_argument("input", help="Input file or parameter")
parser.add_argument("--output", help="Output destination")
parser.add_argument("--verbose", action="store_true", help="Verbose output")
args = parser.parse_args()

# Basic error handling
try:
    result = process_input(args.input)
    print(f"Processing completed: {result}")
except FileNotFoundError:
    print("Error: Input file not found")
    sys.exit(1)
except Exception as e:
    print(f"Error: {e}")
    sys.exit(1)
```
### STANDARD Tier Implementation
```python
# Example argparse implementation for the STANDARD tier.
# batch_process(), single_process(), and print_human_readable() stand in
# for the skill's own functions.
import argparse
import json
import logging
import sys

parser = argparse.ArgumentParser(
    description="Standard skill with advanced functionality",
    formatter_class=argparse.RawDescriptionHelpFormatter,
    epilog="Examples:\n  python script.py input.json --format json\n  python script.py data/ --batch --output results/",
)
parser.add_argument("input", help="Input file or directory")
parser.add_argument("--format", choices=["json", "text"], default="json", help="Output format")
parser.add_argument("--batch", action="store_true", help="Process multiple files")
parser.add_argument("--output", help="Output destination")
args = parser.parse_args()

# Comprehensive error handling with specific exception types
try:
    if args.batch:
        results = batch_process(args.input)
    else:
        results = single_process(args.input)

    if args.format == "json":
        print(json.dumps(results, indent=2))
    else:
        print_human_readable(results)
except FileNotFoundError as e:
    logging.error(f"File not found: {e}")
    sys.exit(1)
except ValueError as e:
    logging.error(f"Invalid input: {e}")
    sys.exit(2)
except Exception as e:
    logging.error(f"Unexpected error: {e}")
    sys.exit(1)
```
### POWERFUL Tier Implementation
```python
# Example argparse implementation for the POWERFUL tier.
# ProcessingError, ValidationError, OutputFormatter, process_with_recovery(),
# and batch_process_with_monitoring() are defined by the skill itself.
import argparse
import logging
import sys

parser = argparse.ArgumentParser(
    description="Powerful skill with comprehensive functionality",
    formatter_class=argparse.RawDescriptionHelpFormatter,
    epilog="""
Examples:
  Basic usage:
    python script.py process input.json --output results/

  Advanced batch processing:
    python script.py batch data/ --format json --parallel 4 --filter "*.csv"

  Custom configuration:
    python script.py process input.json --config custom.yaml --dry-run
""",
)

# Options shared by every subcommand (so --format/--output work after the
# subcommand name, as in the examples above)
common = argparse.ArgumentParser(add_help=False)
common.add_argument("--format", choices=["json", "text"], default="json", help="Output format")
common.add_argument("--output", help="Output destination")

subparsers = parser.add_subparsers(dest="command", help="Available commands")

# Process subcommand
process_parser = subparsers.add_parser("process", parents=[common], help="Process single file")
process_parser.add_argument("input", help="Input file path")
process_parser.add_argument("--config", help="Configuration file")
process_parser.add_argument("--dry-run", action="store_true", help="Show what would be done")

# Batch subcommand
batch_parser = subparsers.add_parser("batch", parents=[common], help="Process multiple files")
batch_parser.add_argument("directory", help="Input directory")
batch_parser.add_argument("--parallel", type=int, default=1, help="Number of parallel processes")
batch_parser.add_argument("--filter", help="File filter pattern")

args = parser.parse_args()

# Comprehensive error handling with recovery
try:
    if args.command == "process":
        result = process_with_recovery(args.input, args.config, args.dry_run)
    elif args.command == "batch":
        result = batch_process_with_monitoring(args.directory, args.parallel, args.filter)
    else:
        parser.print_help()
        sys.exit(1)

    # Multiple output format support
    output_formatter = OutputFormatter(args.format)
    output_formatter.write(result, args.output)
except KeyboardInterrupt:
    logging.info("Processing interrupted by user")
    sys.exit(130)
except ProcessingError as e:
    logging.error(f"Processing failed: {e}")
    if e.recoverable:
        logging.info("Attempting recovery...")
        # Recovery logic here
    sys.exit(1)
except ValidationError as e:
    logging.error(f"Validation failed: {e}")
    logging.info("Check input format and try again")
    sys.exit(2)
except Exception as e:
    logging.critical(f"Critical error: {e}")
    logging.info("Please report this issue")
    sys.exit(1)
```
## Quality Scoring by Tier

### Scoring Thresholds
- **POWERFUL Tier**: Overall score ≥80, all dimensions ≥75
- **STANDARD Tier**: Overall score ≥70, 3+ dimensions ≥65
- **BASIC Tier**: Overall score ≥60, meets minimum requirements

### Dimension Weights (All Tiers)
- **Documentation**: 25%
- **Code Quality**: 25%
- **Completeness**: 25%
- **Usability**: 25%
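Because all four dimensions carry equal weight, the overall score is a weighted sum that reduces to a plain average. A sketch of the computation and the POWERFUL threshold check (function and key names are illustrative, not the scorer's actual API):

```python
# Equal 25% weights for the four scoring dimensions.
WEIGHTS = {"documentation": 0.25, "code_quality": 0.25,
           "completeness": 0.25, "usability": 0.25}

def overall_score(dimension_scores):
    """Weighted overall score on the 0-100 scale."""
    return sum(WEIGHTS[d] * dimension_scores[d] for d in WEIGHTS)

def meets_powerful_threshold(dimension_scores):
    """POWERFUL tier gate: overall >= 80 and every dimension >= 75."""
    return (overall_score(dimension_scores) >= 80
            and all(s >= 75 for s in dimension_scores.values()))
```

For example, dimension scores of 85/80/78/82 average to 81.25, which clears the POWERFUL gate; dropping any single dimension below 75 fails it regardless of the average.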
### Tier-Specific Quality Expectations

#### BASIC Tier Quality Profile
- Documentation: Functional and clear (60+ points expected)
- Code Quality: Clean and maintainable (60+ points expected)
- Completeness: Essential components present (60+ points expected)
- Usability: Easy to understand and use (60+ points expected)

#### STANDARD Tier Quality Profile
- Documentation: Professional and comprehensive (70+ points expected)
- Code Quality: Advanced patterns and best practices (70+ points expected)
- Completeness: All recommended components (70+ points expected)
- Usability: Well-designed user experience (70+ points expected)

#### POWERFUL Tier Quality Profile
- Documentation: Expert-level and publication-ready (80+ points expected)
- Code Quality: Enterprise-grade implementation (80+ points expected)
- Completeness: Comprehensive test and validation coverage (80+ points expected)
- Usability: Exceptional user experience with extensive help (80+ points expected)
## Tier Migration Process

### Promotion Process
1. **Assessment**: Quality scorer evaluates skill against higher tier requirements
2. **Review**: Engineering team reviews assessment and implementation
3. **Testing**: Comprehensive testing against higher tier standards
4. **Approval**: Team consensus on tier promotion
5. **Update**: Skill metadata and documentation updated to reflect new tier

### Demotion Process
1. **Issue Identification**: Quality degradation or standards violation identified
2. **Assessment**: Current quality evaluated against tier requirements
3. **Notice**: Skill maintainer notified of potential demotion
4. **Grace Period**: 30-day period for remediation
5. **Final Review**: Re-assessment after grace period
6. **Action**: Tier adjustment or removal if standards not met

### Tier Change Communication
- All tier changes logged in skill CHANGELOG.md
- Repository-level tier change notifications
- Integration with CI/CD systems for automated handling
- Community notifications for significant changes
## Compliance Monitoring

### Automated Monitoring
- Daily quality assessment scans
- Tier compliance validation in CI/CD
- Automated reporting of tier violations
- Integration with code review processes

### Manual Review Process
- Quarterly tier review cycles
- Community feedback integration
- Expert panel reviews for complex cases
- Appeals process for tier disputes

### Enforcement Actions
- **Warning**: First violation or minor issues
- **Probation**: Repeated violations or moderate issues
- **Demotion**: Serious violations or quality degradation
- **Removal**: Critical violations or abandonment

This tier requirements matrix serves as the definitive guide for skill classification and quality standards within the claude-skills ecosystem. Regular updates ensure alignment with evolving best practices and community needs.