# Research Skill — Design Spec
**Date:** 2026-03-13 · **Status:** Approved · **Version:** 1.4 (Claude-only; multi-AI synthesis deferred to v2)
## Overview
A `/research` skill that produces comprehensive, cited research reports from a structured markdown research task file. The user fills out a template in Obsidian, invokes the skill pointing to that file, and receives a finished report appended to the same file. All findings are grounded in sources the agent actually accessed during the session — never from training memory alone.
- Systematize and accelerate ad-hoc research sessions into a repeatable, archivable workflow
- Fit naturally into the existing KB task management system (task file + output in one file, stored in `_WorkingOn/Research/`, archived to `09_Logs/` when done)
- Minimize friction: defaults pre-selected, zero confirmation gates after invocation, then autonomous execution
- Produce high-quality output: every factual claim supported by a verified citation (when sources enabled), minimum 600-word report body (measured across all adaptive sections + Executive Summary + Next Steps, excluding Sources), adaptive structure, executive summary + actionable next steps
- Never speculate or hallucinate: findings are limited to what was verifiably retrieved in this session; confidence is always visible
## Invocation
`/research /mnt/d/FSS/KB/Business/_WorkingOn/Research/2026-03-13_Topic.md`

If no path is provided, the skill auto-detects the most recently modified `.md` file in the top level of the Research folder only — the scan uses a non-recursive glob (`Research/*.md`) and never descends into subdirectories. Files whose names begin with `_` are excluded. Files without a `YYYY-MM-DD_` prefix are eligible provided they don’t start with `_`. If two files share the same modification timestamp, the tiebreaker is alphabetical filename order (descending — the filename that sorts last wins, which for `YYYY-MM-DD_` files selects the most recent date). If no eligible file is found at the top level, the skill asks the user to provide a path.
`/research`

## File & Folder Structure

```text
D:\FSS\KB\Business\_WorkingOn\Research\
├── _Research-Template.md        ← master template (never edit; _ prefix excludes from auto-detect)
├── 2026-03-13_Topic-Name.md     ← active research task file → becomes full report
└── ...
```

When research is complete, the user manually moves the file to `09_Logs\` or another archive location. The research task file and report travel together as one self-contained artifact.
## The Research Template
Stored at `_WorkingOn/Research/_Research-Template.md`. Users copy this file, rename it `YYYY-MM-DD_Topic.md`, and fill in the Topic and Context fields. All parameters have sensible defaults pre-selected.
The Output Path field is omitted in v1 — output always appends to the research task file itself.
```markdown
# Research Task

## Topic
<!-- Describe what you want researched. Be as specific or broad as you like. -->

## Parameters

**Geography**
- [x] Global
- [ ] Specify: ___________

**Depth**
- [ ] Moderate (fast scan, key facts, 1–2 search passes)
- [x] Deep (comprehensive, parallel searches, cited sources)

**Goal** (choose one or more)
- [x] Understand the landscape
- [ ] Assess risks
- [ ] Identify opportunities
- [ ] Decision support

**Sources**
- [x] Cited in report
- [ ] No citations needed

## Additional Context
<!-- Describe your role, a decision you're making, prior knowledge, specific angles to cover or avoid. Leave blank to let the skill determine the best angles. -->
```

## Skill Workflow
1. Read the research task file.
2. Check for existing report: search the full file content for the exact byte sequence `\n\n---\n\n# Research Report` (two newlines, `---`, two newlines, then the heading). This is the precise string the skill inserts in step 7 — it cannot appear in user-authored content without deliberate effort. A `# Research Report` heading elsewhere in the file (e.g., in the user's topic), or a `---` separator in user content, does NOT trigger this guard unless followed immediately by the rest of the exact sequence. → If found: stop and output "Report already exists in [filename]. Delete the '# Research Report' section (everything from --- onward) to re-run, or create a new research task file."
3. Parse all parameters (topic, geography, depth, goal, sources, context).
4. Check for ambiguity in this priority order:
   a. Topic empty → ask: "What topic should I research?" (wait for reply before continuing)
   b. No Goal checked → default silently to "Understand the landscape"
   c. No Depth checked → default silently to "Deep"
   All other missing/blank fields use defaults — no questions asked.
5. Display status line — shown immediately before dispatching the agent (i.e., after step 4a if the topic was empty, or immediately after step 4 if it was not). The status line is never shown before the topic is known. Informational only; the skill proceeds immediately with no further user action required: "Researching: [topic] · [geography] · [depth] · Goal: [goal(s)] → appending to [filename]"
6. Dispatch the research agent.
7. Append separator + report unconditionally: always append `\n\n---\n\n` followed by the report body. The research template does not end with `---`; the skill inserts the separator on every run. (The template's last content is the ## Additional Context section — no trailing separator.) The re-run guard in step 2 ensures this path is only reached once per research task file.
8. Confirm in chat: "Done. Report appended to [path]"

## Accuracy & Credibility Rules (Never Guess)
These rules are non-negotiable and apply to every research agent invocation. They exist because AI systems can produce plausible-sounding but fabricated content — particularly when summarizing books, papers, or other works they have not actually read.
### The Never Guess Rule
- **Only cite sources retrieved in this session.** Every citation must come from a URL found via WebSearch or content fetched via WebFetch during this run. Never cite from training memory alone.
- **Never summarize what you haven’t read.** If a source (book, paper, proprietary document) cannot be fetched, report only what the search snippet or abstract says — and label it `[Partial]`. Do not reconstruct content from training knowledge.
- **State inaccessibility explicitly.** If asked about a specific work that cannot be fetched: “Full text not accessible — summary omitted to avoid speculation.” This is the correct response. Neither silence nor fabrication is an acceptable alternative.
- **Training knowledge may frame context and generate search queries** — it may not serve as a citable research finding.
- **When in doubt, omit.** An incomplete report with honest gaps is more valuable than a complete report with fabricated content.
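The first rule lends itself to a mechanical check: track every URL actually accessed during the session and flag any citation outside that set. A minimal sketch (class and method names are illustrative, not part of the spec):

```python
class SessionSources:
    """Registry of URLs actually accessed during this research run."""

    def __init__(self) -> None:
        self._accessed: set[str] = set()

    def record(self, url: str) -> None:
        # Call after each WebSearch result used or each successful WebFetch.
        self._accessed.add(url)

    def violations(self, cited_urls: list[str]) -> list[str]:
        """Cited URLs never accessed this session; each one must be dropped
        (or the claim it supports omitted) before the report is emitted."""
        return [u for u in cited_urls if u not in self._accessed]
```

Any URL the check returns violates the Never Guess rule by definition.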
### Source Labels
Every entry in the Sources section carries a label indicating how the agent accessed it:
| Label | Meaning |
|---|---|
| `[Primary]` | Official document, legislation, regulation, peer-reviewed paper — full text fetched |
| `[Secondary]` | Established news outlet, professional publication, industry report — full text fetched |
| `[Partial]` | Only search snippet or abstract accessed — full text not retrieved |
| `[Unverified]` | Source located but could not be fetched; cited for reference only |
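Label assignment follows from how the source was accessed and what kind of source it is. A hedged sketch (the boolean parameters are an assumed simplification of the agent's real assessment):

```python
def source_label(full_text: bool, snippet_or_abstract: bool,
                 official_or_peer_reviewed: bool) -> str:
    """Map access outcome + source type to a label from the table above."""
    if full_text:
        # Full text fetched: Primary vs Secondary depends on source type.
        return "[Primary]" if official_or_peer_reviewed else "[Secondary]"
    if snippet_or_abstract:
        return "[Partial]"      # only the search snippet or abstract was accessed
    return "[Unverified]"       # located, but nothing could be fetched
```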
### Section Confidence Ratings
Confidence ratings are active only when “Cited in report” is selected. When “No citations needed” is selected, confidence rating lines are suppressed from all sections.
Each adaptive section opens with a one-line confidence rating assigned by the following deterministic rules (applied to the sources used for that specific section, not the report overall). For threshold purposes, [Unverified] is treated identically to [Partial] — both are non-full-text sources.
| Rating | Assignment rule |
|---|---|
| Confidence: High | All sources are `[Primary]` or `[Secondary]` (full text fetched); zero `[Partial]`/`[Unverified]` |
| Confidence: Medium | At least one `[Primary]` or `[Secondary]` source with full text fetched; `[Partial]`/`[Unverified]` are not a strict majority (i.e., ≤50% of sources). In a tie (e.g., 2 full-text and 2 Partial/Unverified), assign Medium. |
| Confidence: Low | At least one full-text source, but a strict majority of sources (>50%) are `[Partial]` or `[Unverified]` |
| ⚠ Speculative | Zero full-text sources — every source used is `[Partial]` or `[Unverified]` |
The Executive Summary carries the lowest confidence rating of any adaptive section in the report. Derivation: take the minimum (worst) rating across all adaptive sections and apply it to the Executive Summary. If all adaptive sections are High, the Executive Summary is High; if any section is Speculative, the Executive Summary is Speculative.
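Because the assignment rules are deterministic, they can be expressed directly in code. A sketch (label strings as defined under Source Labels; the Low rule is read as "at least one full-text source with a strict partial majority" so that every combination maps to exactly one rating):

```python
FULL_TEXT = {"[Primary]", "[Secondary]"}

def section_confidence(labels: list[str]) -> str:
    """Deterministic per-section rating from that section's source labels.
    [Unverified] counts the same as [Partial]: both are non-full-text."""
    full = sum(1 for label in labels if label in FULL_TEXT)
    partial = len(labels) - full
    if full == 0:
        return "⚠ Speculative"          # zero full-text sources
    if partial == 0:
        return "Confidence: High"       # all sources full-text
    if partial * 2 <= len(labels):
        return "Confidence: Medium"     # partial not a strict majority (ties → Medium)
    return "Confidence: Low"            # strict majority partial, some full-text

RANK = {"Confidence: High": 3, "Confidence: Medium": 2,
        "Confidence: Low": 1, "⚠ Speculative": 0}

def executive_summary_confidence(section_ratings: list[str]) -> str:
    """The Executive Summary carries the worst rating of any adaptive section."""
    return min(section_ratings, key=RANK.__getitem__)
```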
## Research Agent Design
The skill dispatches a single well-prompted research agent using the parallel web search pattern:
- Decompose — identify 4–6 sub-topics or angles from the brief
- Search — run parallel WebSearch queries across all sub-topics simultaneously
- Fetch — follow the most authoritative sources with WebFetch for depth
- Assess — label each source and determine section confidence ratings
- Synthesize — assemble findings into the adaptive report structure, applying all Never Guess rules. The report body (Executive Summary + all adaptive sections + Next Steps) must reach a minimum of 600 words. If the agent’s initial synthesis falls short, it must expand existing sections with additional verified detail — never pad with filler or fabricate content to meet the target. Never Guess takes precedence over the 600-word target: if 600 words cannot be reached from retrieved sources alone, the agent writes what it can and notes the gap explicitly rather than speculating.
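The 600-word floor can be checked mechanically by counting words in every `##` section except Sources. A sketch (the exact tokenization is an assumption; the spec does not prescribe one):

```python
import re

def report_body_word_count(report_md: str) -> int:
    """Words counted toward the 600-word minimum: every '## ' section except
    Sources. The '# Research Report' header block before the first section
    is excluded, as are the heading lines themselves."""
    sections = re.split(r"(?m)^## ", report_md)[1:]
    total = 0
    for section in sections:
        title, _, body = section.partition("\n")
        if title.strip().startswith("Sources"):
            continue
        total += len(body.split())
    return total
```

If the count falls short, the agent expands sections with verified detail, or notes the gap, per the synthesis rule above.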
Depth setting affects agent behavior:
| Setting | Sub-topics | Search passes | WebFetch | Citations |
|---|---|---|---|---|
| Moderate | 2–3 | 1 per topic | Light (top 2 sources) | If selected |
| Deep | 4–6 | Parallel, simultaneous | Full (all key sources) | If selected |
## Report Structure
The report is appended to the research task file after a single `---` horizontal rule (the only `---` in the file — used as the task/report boundary). Within the report, sections are separated by `##` headings only — no additional horizontal rules — to keep the boundary unambiguous and support clean Obsidian navigation.
### Fixed sections (always present, in this order)
- **Executive Summary** — always first; 3–5 sentences; carries the report’s lowest confidence rating. When “Cited in report” is selected, opens with *Confidence: [rating]*; when “No citations needed” is selected, the confidence rating line is omitted here as in all other sections.
- **[Adaptive sections]** — middle; each opens with a *Confidence: [rating]* line (omitted when “No citations needed” is selected)
- **Next Steps** — always last before Sources; actionable, specific; no confidence rating line
- **Sources** — present only if Cited is checked; each entry labeled with source type
### Adaptive sections by goal
| Goal | Sections (in order) |
|---|---|
| Understand the landscape | Landscape Overview · Key Players & Entities · How It Works · Trends & Developments |
| Assess risks | Risk Landscape · Regulatory & Legal · Financial · Operational · Mitigation Options |
| Identify opportunities | Market Gaps · Entry Points · Competitive Dynamics · Strategic Options |
| Decision support | Context & Background · Options Analysis · Trade-offs · Recommendation |
| Multiple goals | Emit all sections for each selected goal in the fixed cross-goal order: Understand → Risks → Opportunities → Decision. Within each selected goal, sections follow that goal’s defined intra-goal order. Goals that are not selected contribute no sections. Example: if only “Assess risks” and “Identify opportunities” are selected, the output contains the 5 Risk sections followed by the 4 Opportunity sections — no Understand or Decision sections appear. No deduplication: all sections from each selected goal are emitted even if conceptually similar to sections from another goal. |
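The multiple-goals rule reduces to iterating the goal map in its fixed cross-goal order and keeping only selected goals. A sketch (dictionary insertion order carries the cross-goal order):

```python
GOAL_SECTIONS = {  # insertion order = fixed cross-goal order
    "Understand the landscape": ["Landscape Overview", "Key Players & Entities",
                                 "How It Works", "Trends & Developments"],
    "Assess risks": ["Risk Landscape", "Regulatory & Legal", "Financial",
                     "Operational", "Mitigation Options"],
    "Identify opportunities": ["Market Gaps", "Entry Points",
                               "Competitive Dynamics", "Strategic Options"],
    "Decision support": ["Context & Background", "Options Analysis",
                         "Trade-offs", "Recommendation"],
}

def adaptive_sections(selected_goals: set[str]) -> list[str]:
    """All sections of every selected goal, cross-goal order preserved,
    intra-goal order preserved, no deduplication."""
    return [section
            for goal, sections in GOAL_SECTIONS.items()
            for section in sections
            if goal in selected_goals]
```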
### Citation format
When Sources are enabled:
- **Inline:** Place `[n]` immediately after the sentence or claim being cited, e.g. “The OSC requires leverage disclosure in all suitability reviews [1].” Citation numbers are global across the entire report — numbering starts at [1] and increments with each new source in order of first appearance; numbers never reset between sections.
- **Sources section:** Numbered list matching the inline references, in order of first appearance across the full report, each with a source label:
```markdown
## Sources
- [1] [Title of Page or Document](https://full-url.com) — Publisher/Domain [Primary]
- [2] [Title](https://url) — Publisher/Domain [Secondary]
- [3] [Title](https://url) — Publisher/Domain [Partial]
```

### Full file layout after research completes
```markdown
# Research Task

## Topic
[user's topic]

## Parameters
[checkboxes as filled]

## Additional Context
[user's context]

---

# Research Report
*Generated: YYYY-MM-DD · [model name — use the model currently running, e.g. Claude Sonnet 4.6]*

## Executive Summary
*Confidence: [rating]*

[3–5 sentences]

## [Adaptive Section 1]
*Confidence: [rating]*

[findings with inline citations]

## [Adaptive Section N]
*Confidence: [rating]*

[findings with inline citations]

## Next Steps
[actionable items]

## Sources            ← only present if "Cited in report" is selected
- [1] [Title](https://url) — Publisher [Primary]
- [2] [Title](https://url) — Publisher [Secondary]
- [3] [Title](https://url) — Publisher [Partial]
```

## Deferred to v2
- Multi-AI dispatch (ChatGPT, Grok, Gemini) with synthesis pass
- Custom output path / Google Doc export
- “Scout + deep-dive” two-phase staged research
- Parallel specialist agents by research angle
- Numeric confidence score (0–100) in addition to qualitative rating
## Implementation Notes
- WSL path for Research folder: `/mnt/d/FSS/KB/Business/_WorkingOn/Research/`
- Windows path: `D:\FSS\KB\Business\_WorkingOn\Research\`
- Template filename: `_Research-Template.md` (underscore prefix: sorted to top in Obsidian + excluded from auto-detect)
- Research task file naming convention: `YYYY-MM-DD_Kebab-Case-Topic.md`
- The skill file will live in `~/.claude/skills/research/SKILL.md`
- The template file will be written to the Research folder as part of skill setup
- Re-run detection: exact byte-sequence search for `\n\n---\n\n# Research Report` — robust to `---` or `# Research Report` appearing separately in user-authored fields
- Separator insertion: unconditional `\n\n---\n\n` prepended to report on every run (template does not end with `---`)
- Auto-detect glob: non-recursive `Research/*.md` — subdirectories never scanned
- Confidence ratings: suppressed when “No citations needed” is selected
- Confidence thresholds: deterministic per-section rules (all Primary/Secondary = High; mixed = Medium; majority Partial = Low; no full-text sources = Speculative)
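The re-run guard and separator insertion described in these notes can be combined into one append step. A sketch (function name and error handling are illustrative):

```python
REPORT_MARKER = "\n\n---\n\n# Research Report"

def append_report(task_file_text: str, report_body: str) -> str:
    """Append the report with the unconditional separator, guarded against
    re-runs. report_body is expected to start with '# Research Report'."""
    if REPORT_MARKER in task_file_text:
        # Exact byte-sequence match: a lone '---' or '# Research Report' in
        # user-authored content does not trigger this.
        raise RuntimeError("Report already exists: delete the "
                           "'# Research Report' section to re-run.")
    # Separator is always inserted; the template never ends with '---'.
    return task_file_text + "\n\n---\n\n" + report_body
```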
## AI Response — 2026-03-13
v1 of the `/research` skill is complete and smoke-tested (`~/.claude/skills/research/SKILL.md`). The v2 implementation plan has been written and is ready for your review and approval.
### Artifacts
| Artifact | Windows Path |
|---|---|
| v1 Skill file | (WSL only — ~/.claude/skills/research/SKILL.md) |
| v1 Research template | D:\FSS\KB\Business\_WorkingOn\Research\_Research-Template.md |
| v2 Implementation Plan | D:\FSS\KB\Business\_WorkingOn\Tasks\2026-03-13_Research-Skill-v2-Plan.md |
### v2 Plan Summary
The plan covers 5 self-contained chunks, each buildable and testable independently:
| Chunk | Feature | Key Change |
|---|---|---|
| 1 | Scout + Deep-Dive | New **Search Strategy** template param (default: [x] Scout first); skill adds lightweight scout phase before deep research |
| 2 | Parallel Specialist Agents | Deep-dive becomes N parallel agents (one per angle) + one synthesis agent; globally sequential citations |
| 3 | Numeric Confidence Scores | Weighted formula (Primary=100, Secondary=80, Partial=40, Unverified=10); display: *Confidence: High (87)* |
| 4 | Multi-AI Dispatch | Optional ChatGPT/Gemini/Grok via curl; API key check; ## AI Source Analysis section in report |
| 5 | Google Doc Export | Python inline script via Bash; service account credentials; shareable link in confirmation |
Backward compatible throughout: v1 task files (missing new params) fall back to defaults silently.
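For reference, Chunk 3's weighted formula could look like the following; the aggregation (a plain mean, rounded) is an assumption, since the plan only fixes the per-label weights:

```python
WEIGHTS = {"[Primary]": 100, "[Secondary]": 80, "[Partial]": 40, "[Unverified]": 10}

def numeric_confidence(labels: list[str]) -> int:
    """Weighted numeric score for a section's sources (mean is assumed)."""
    return round(sum(WEIGHTS[label] for label in labels) / len(labels))
```

A section sourced from one `[Primary]` and one `[Secondary]` source would display as *Confidence: High (90)* under this sketch.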
### Next Steps for Human
- Review v2 plan: `D:\FSS\KB\Business\_WorkingOn\Tasks\2026-03-13_Research-Skill-v2-Plan.md`
- Approve plan in Claude Code to begin implementation (or request changes first)
- Optionally: adjust chunk scope (e.g. defer Chunk 4 or 5 if not needed immediately)
## Human Response
- The plan for version 2 looks good. Let’s test the multi-agent capacity using my API key for Google AI Studio. Can I use an existing API key for other AI-related tasks, or should I create a new separate one for this skill?
## AI Response — 2026-03-13
### API Key Question

Use your existing Google AI Studio key — no need to create a new one.
Google AI Studio API keys are not scoped to a single application; they’re tied to your account/project and work fine across multiple uses. For a personal skill running locally, there’s no security or isolation benefit to separating it. The key will be stored as `GEMINI_API_KEY` in your WSL environment variables (never hardcoded), so it’s easy to revoke or rotate independently if ever needed.
The only reason to create a dedicated key is if you want separate usage/quota tracking in the AI Studio dashboard — which is optional and easily done later.
### Implementation Status
v2 implementation is beginning now. Chunks execute in order; Chunk 4 (Multi-AI Dispatch including Gemini) will need `GEMINI_API_KEY` set in your WSL environment before testing that chunk.
### Summary
- Plan approved — proceeding with v2 implementation chunk by chunk
- API key question answered
### Next Steps for Human
- Before Chunk 4 testing: run `export GEMINI_API_KEY=your_key` in WSL (or add it to `~/.bashrc` for persistence)
- Monitor implementation progress in chat — I will report after each chunk
## Human Response
- We need a mechanism to handle AI agents that pause based on usage limits.
- Most models allow a limited amount of free daily usage, even in deep-thinking modes, which are suitable for high-quality research.
- Claude Code has session and weekly limits that can pause usage even with paid subscriptions.
- We need to specify “Deep Thinking” modes in all AI tools when the Depth input from the template is set to “Deep”.