# Source Tiering & Authority Anchoring — Reference

## Source Tiers

| Tier | Source Type | Purpose | Credibility |
|------|-------------|---------|-------------|
| L1 | Official docs, papers, specs, RFCs | Definitions, mechanisms, verifiable facts | High |
| L2 | Official blogs, tech talks, white papers | Design intent, architectural thinking | High |
| L3 | Authoritative media, expert commentary, tutorials | Supplementary intuition, case studies | Medium |
| L4 | Community discussions, personal blogs, forums | Discover blind spots, validate understanding | Low |
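
To make the tiering machine-checkable, a minimal sketch could encode the table and the L1/L2 traceability rule; the `Tier` enum and helper names are illustrative assumptions, not part of any existing harness:

```python
# Illustrative encoding of the tier table; names are hypothetical.
from enum import Enum

class Tier(str, Enum):
    L1 = "L1"  # official docs, papers, specs, RFCs
    L2 = "L2"  # official blogs, tech talks, white papers
    L3 = "L3"  # authoritative media, expert commentary, tutorials
    L4 = "L4"  # community discussions, personal blogs, forums

CREDIBILITY = {Tier.L1: "high", Tier.L2: "high",
               Tier.L3: "medium", Tier.L4: "low"}

def can_anchor_conclusion(tier: Tier) -> bool:
    """Only L1/L2 sources may anchor a conclusion; L3/L4 supplement."""
    return tier in (Tier.L1, Tier.L2)
```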

## L4 Community Source Specifics (mandatory for product comparison research)

| Source Type | Access Method | Value |
|-------------|---------------|-------|
| GitHub Issues | Visit `github.com/<org>/<repo>/issues` | Real user pain points, feature requests, bug reports |
| GitHub Discussions | Visit `github.com/<org>/<repo>/discussions` | Feature discussions, usage insights, community consensus |
| Reddit | Search `site:reddit.com "<product_name>"` | Authentic user reviews, comparison discussions |
| Hacker News | Search `site:news.ycombinator.com "<product_name>"` | In-depth technical community discussions |
| Discord/Telegram | Product's official community channels | Active user feedback (must annotate [limited source]) |
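
The access methods above are mechanical enough to script; as a sketch, a single helper (the function name is hypothetical) can materialize the queries and URLs from the table:

```python
# Hypothetical helper that builds the L4 access points from the table.
def l4_access_points(org: str, repo: str, product: str) -> dict[str, str]:
    return {
        "github_issues": f"https://github.com/{org}/{repo}/issues",
        "github_discussions": f"https://github.com/{org}/{repo}/discussions",
        "reddit_query": f'site:reddit.com "{product}"',
        "hn_query": f'site:news.ycombinator.com "{product}"',
        # Discord/Telegram channels vary per product; findings from them
        # must be annotated [limited source].
    }
```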

## Principles

- Conclusions must be traceable to L1/L2
- L3/L4 serve only as supplementary context and validation
- L4 community discussions are used to discover "what users truly care about"
- Record all information sources
- Search broadly before searching deeply — cast a wide net with multiple query variants before diving deep into any single source
- Cross-domain search — when direct results are sparse, search adjacent fields, analogous problems, and related industries
- Never rely on a single search — each sub-question requires multiple searches from different angles (synonyms, negations, practitioner language, academic language)
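
As a sketch of the "never rely on a single search" principle, one sub-question can be fanned out into several angles before any deep dive; the angle phrasings below are illustrative assumptions:

```python
# Illustrative fan-out: one sub-question, several search angles.
def query_variants(sub_question: str) -> list[str]:
    return [
        sub_question,                                   # literal phrasing
        f"{sub_question} best practices production",    # practitioner language
        f"{sub_question} limitations OR drawbacks",     # negation angle
        f"{sub_question} survey evaluation benchmark",  # academic language
    ]
```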

## Timeliness Filtering Rules (execute based on Step 0.5 sensitivity level)

| Sensitivity Level | Source Filtering Rule | Suggested Search Parameters |
|-------------------|-----------------------|-----------------------------|
| Critical | Only accept sources within 6 months as factual evidence | `time_range: "month"` or `start_date` set to last 3 months |
| High | Prefer sources within 1 year; annotate if older than 1 year | `time_range: "year"` |
| Medium | Sources within 2 years used normally; older ones need a validity check | Default search |
| Low | No time limit | Default search |
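
A sketch of the table as a lookup, assuming a Tavily-style search API that accepts `time_range` and `start_date` parameters (the exact parameter names depend on the search tool in use):

```python
# Map a Step 0.5 sensitivity level to suggested search parameters.
from datetime import date, timedelta

def timeliness_params(sensitivity: str) -> dict:
    if sensitivity == "critical":
        # Stricter than the 6-month evidence rule: search the last 3 months.
        return {"start_date": (date.today() - timedelta(days=90)).isoformat()}
    if sensitivity == "high":
        return {"time_range": "year"}
    # medium/low: default search; medium sources older than 2 years still
    # need a validity check after retrieval.
    return {}
```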

## High-Sensitivity Domain Search Strategy

1. Round 1: Targeted official source search
   - Use include_domains to restrict to official domains
   - Example: include_domains: ["anthropic.com", "openai.com", "docs.xxx.com"]

2. Round 2: Official download/release page direct verification (BLOCKING)
   - Directly visit official download pages; don't rely on search caches
   - Use tavily-extract or WebFetch to extract page content
   - Verify: platform support, current version number, release date

3. Round 3: Product-specific protocol/feature search (BLOCKING)
   - Search protocol names the product supports (MCP, ACP, LSP, etc.)
   - Format: "<product_name> <protocol_name>" site:official_domain

4. Round 4: Time-limited broad search
   - time_range: "month" or start_date set to recent
   - Exclude obviously outdated sources

5. Round 5: Version verification
   - Cross-validate version numbers from search results
   - If inconsistency found, immediately consult official Changelog

6. Round 6: Community voice mining (BLOCKING - mandatory for product comparison research)
   - Visit the product's GitHub Issues page, review popular/pinned issues
   - Search Issues for key feature terms (e.g., "MCP", "plugin", "integration")
   - Review discussion trends from the last 3-6 months
   - Identify the feature points and differentiating characteristics users care most about
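
The six rounds could be orchestrated roughly as below. `search` and `extract` are stand-ins for whatever tools the harness actually exposes (e.g. tavily-search, tavily-extract, WebFetch); they are stubbed here so the control flow is self-contained, and the URL pattern in Round 2 is a placeholder:

```python
def search(query: str, **params) -> list[dict]:
    """Stub for the real search tool (e.g. tavily-search)."""
    print(f"search({query!r}, {params})")
    return []

def extract(url: str) -> str:
    """Stub for the real page extractor (tavily-extract / WebFetch)."""
    print(f"extract({url!r})")
    return ""

def high_sensitivity_search(product: str, official_domains: list[str]) -> None:
    # Round 1: targeted official-source search
    search(product, include_domains=official_domains)
    # Round 2 (BLOCKING): visit the live download/release page directly,
    # not search caches; the URL pattern here is a placeholder.
    extract(f"https://{official_domains[0]}/download")
    # Round 3 (BLOCKING): product-specific protocol/feature searches
    for proto in ("MCP", "ACP", "LSP"):
        search(f'"{product}" "{proto}" site:{official_domains[0]}')
    # Round 4: time-limited broad search to exclude stale sources
    search(product, time_range="month")
    # Round 5: cross-validate version numbers across the results above;
    # on any inconsistency, consult the official Changelog immediately.
    # Round 6 (BLOCKING): community voice mining via GitHub Issues;
    # see the detailed steps (and API sketch) in the next section.
```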

## Community Voice Mining Detailed Steps

GitHub Issues Mining Steps:
1. Visit github.com/<org>/<repo>/issues
2. Sort by "Most commented" to view popular discussions
3. Search keywords:
   - Feature-related: feature request, enhancement, MCP, plugin, API
   - Comparison-related: vs, compared to, alternative, migrate from
4. Review issue labels: enhancement, feature, discussion
5. Record frequently occurring feature demands and user pain points
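
Step 2 can also be done against the GitHub REST API instead of the web UI; a sketch using the third-party `requests` package (unauthenticated calls are rate-limited, so a token may be needed for heavy use):

```python
# Equivalent of sorting the Issues page by "Most commented" (step 2).
import requests

def most_commented_issues(org: str, repo: str, n: int = 20) -> list[dict]:
    resp = requests.get(
        f"https://api.github.com/repos/{org}/{repo}/issues",
        params={"sort": "comments", "direction": "desc",
                "state": "all", "per_page": n},
        headers={"Accept": "application/vnd.github+json"},
        timeout=30,
    )
    resp.raise_for_status()
    # This endpoint also returns pull requests; keep plain issues only.
    return [i for i in resp.json() if "pull_request" not in i]
```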

Value Translation:
- Frequently discussed features -> likely differentiating highlights
- User complaints/requests -> likely product weaknesses
- Comparison discussions -> directly obtain user-perspective difference analysis

## Source Registry Entry Template

For each source consulted, immediately append an entry to `01_source_registry.md`:

## Source #[number]
- **Title**: [source title]
- **Link**: [URL]
- **Tier**: L1/L2/L3/L4
- **Publication Date**: [YYYY-MM-DD]
- **Timeliness Status**: Currently valid / Needs verification / Outdated (reference only)
- **Version Info**: [if the source covers a specific version, annotate it]
- **Target Audience**: [Explicitly annotate the group/geography/level this source targets]
- **Research Boundary Match**: Full match / Partial overlap / Reference only
- **Summary**: [1-2 sentence key content]
- **Related Sub-question**: [which sub-question this corresponds to]
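
A sketch of the append step; the `SourceEntry` dataclass is an illustrative assumption, and only a subset of the template fields is shown for brevity:

```python
# Append a new entry (subset of the template fields) to the registry file.
from dataclasses import dataclass

@dataclass
class SourceEntry:
    number: int
    title: str
    link: str
    tier: str       # "L1" .. "L4"
    pub_date: str   # YYYY-MM-DD
    summary: str
    sub_question: str

def append_entry(e: SourceEntry, path: str = "01_source_registry.md") -> None:
    lines = [
        f"\n## Source #{e.number}",
        f"- **Title**: {e.title}",
        f"- **Link**: {e.link}",
        f"- **Tier**: {e.tier}",
        f"- **Publication Date**: {e.pub_date}",
        f"- **Summary**: {e.summary}",
        f"- **Related Sub-question**: {e.sub_question}",
    ]
    with open(path, "a", encoding="utf-8") as f:
        f.write("\n".join(lines) + "\n")
```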

## Target Audience Verification (BLOCKING)

Before including each source, verify that its target audience matches the research boundary:

| Source Type | Target Audience to Verify | Verification Method |
|-------------|---------------------------|---------------------|
| Policy/Regulation | Who is it for? (K-12/university/all) | Check document title, scope clauses |
| Academic Research | Who are the subjects? (vocational/undergraduate/graduate) | Check methodology/sample description sections |
| Statistical Data | Which population is measured? | Check data source description |
| Case Reports | What type of institution is involved? | Confirm institution type |

Handling mismatched sources:

- Target audience completely mismatched -> do not include
- Partially overlapping -> include but annotate applicable scope
- Usable only as an analogous reference -> include but explicitly annotate "reference only"
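
The three outcomes reduce to a small decision table; a sketch in which the match labels are hypothetical shorthands for the cases above:

```python
# Map an audience-match assessment to the inclusion decision above.
def audience_decision(match: str) -> str:
    decisions = {
        "mismatch": "do not include",
        "partial_overlap": "include, annotate applicable scope",
        "analogy_only": 'include, explicitly annotate "reference only"',
    }
    return decisions.get(match, "verify target audience before including")
```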