
# /research Skill — v2 Implementation Plan


v1 (APPROVED, smoke-tested) delivers a single-Claude, single-agent research workflow. v2 upgrades to: a Scout+deep-dive two-phase approach (parametrised, default on), parallel specialist agents per research angle, numeric confidence scores, multi-AI dispatch (ChatGPT/Gemini/Grok), and Google Doc export (implemented last). The Never Guess rules and existing report format are immutable.

Target files:

  • ~/.claude/skills/research/SKILL.md — primary; receives all changes
  • /mnt/d/FSS/KB/Business/_WorkingOn/Research/_Research-Template.md — 3 new parameter blocks

Implementation order: Each chunk is self-contained; implement and test before starting the next. After Chunk 2 the system is fully usable at higher quality than v1. Chunks 3–5 are purely additive.


## Chunk 1: Scout + Deep-Dive (Two-Phase Approach)


### Template — add after **Depth** block, before **Goal**

**Search Strategy**
- [x] Scout first (quick angle scan → focused deep dive — recommended)
- [ ] Direct dive (single-pass research, v1 behavior)

Step 3 — Parse Parameters: add row:

| Parameter | How to parse | Default if absent |
| --- | --- | --- |
| Search Strategy | [x] under Search Strategy; if both or neither → Scout first | Scout first |

Step 5 — Status line: extend to: Researching: [topic] · [geography] · [depth] · Goal: [goal(s)] · Strategy: [Scout first | Direct dive] → appending to [filename]

Step 6 — replace “Dispatch Research Agent” with:

## Step 6: Strategy Branch
If Search Strategy = Scout first → execute Step 6a then Step 6b
If Search Strategy = Direct dive → skip to Step 6b (topic serves as decomposition input)
## Step 6a: Scout Phase (only when Scout first)
Dispatch a lightweight scout agent:
PROMPT:
You are a scout research agent. Identify research angles — do NOT write a report.
TOPIC: [topic]
GEOGRAPHY: [geography]
DEPTH: [depth]
GOAL(S): [goals]
CONTEXT: [context, or "(none)"]
Workflow:
1. Run 2–3 WebSearch queries in parallel (multiple WebSearch calls in one message).
Do NOT fetch any pages — search snippets only.
2. From results, identify 4–6 distinct research angles that together cover the topic
for the stated goal(s), appropriate for [Deep | Moderate] depth.
Return ONLY:
ANGLES:
- [Angle 1 title]: [one sentence]
- [Angle 2 title]: [one sentence]
...
TOP_SOURCES:
- [URL or domain]: [why promising — one phrase]
...
No prose, no preamble, nothing else.
After scout returns:
- Parse ANGLES: list.
- If fewer than 2 angles returned: fall back to Direct dive decomposition.
- Emit: "Scout complete: [N] angles → [Angle 1], [Angle 2], … — launching deep dive."
- Proceed to Step 6b with angle list in hand.
## Step 6b: Deep-Dive Phase
[Carry existing v1 agent prompt here, with one modification to Phase 1 / Decompose:]
### Phase 1 — Decompose
IF SCOUT ANGLES PROVIDED:
Use these pre-identified angles as your decomposition basis (do not re-derive):
Angles: [list from Step 6a]
IF DIRECT DIVE:
Identify [2–3 if Moderate | 4–6 if Deep] distinct sub-topics or angles...
[rest of v1 decompose instruction unchanged]
[Phases 2–5 and all Never Guess rules: unchanged from v1]

v1 task files (no Search Strategy) default to Scout first silently.
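The ANGLES contract above is deliberately machine-parseable. A minimal sketch of the parsing and the fewer-than-2-angles fallback check (the `parse_scout_output` helper name is illustrative, not part of the skill):

```python
import re

def parse_scout_output(text: str) -> list[tuple[str, str]]:
    """Parse the scout agent's ANGLES block into (title, summary) pairs."""
    angles = []
    in_angles = False
    for line in text.splitlines():
        stripped = line.strip()
        if stripped == "ANGLES:":
            in_angles = True
            continue
        if stripped == "TOP_SOURCES:":
            break
        if in_angles and stripped.startswith("- "):
            # Each entry looks like "- [Angle title]: one sentence"
            # (brackets around the title are optional).
            m = re.match(r"- \[?(.+?)\]?:\s*(.+)", stripped)
            if m:
                angles.append((m.group(1), m.group(2)))
    return angles

scout_reply = """ANGLES:
- Market size: Current estimates and growth projections.
- Key players: Dominant vendors and their positioning.
TOP_SOURCES:
- example.com: industry reports
"""
angles = parse_scout_output(scout_reply)
# Fewer than 2 parsed angles triggers the Direct-dive fallback described above.
use_scout = len(angles) >= 2
```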

Acceptance tests:

  • Scout first: run /research → verify scout status line appears, then report appended
  • Direct dive: switch checkbox, delete report, re-run → verify no scout status line
  • v1 file: run on old task file → defaults to Scout first, scout line appears, report correct

## Chunk 2: Parallel Specialist Agents + Synthesis

Template changes: None (Depth still controls agent count: Moderate=2–3, Deep=4–6).

Replace Step 6b’s monolithic agent with three substeps:

## Step 6b.1 — Determine Angle List
- From scout (Step 6a): use parsed angles.
- From direct dive: decompose topic here (not delegated) into [2–3 Moderate | 4–6 Deep] angles.
List the angles before dispatching.
## Step 6b.2 — Dispatch Specialist Agents (all in parallel — one Agent tool call per specialist)
Each specialist receives:
PROMPT:
You are a specialist research agent, one of [N] parallel specialists.
Research ONLY your assigned angle. Do not attempt to cover the full topic.
TOPIC (overall): [topic]
YOUR ANGLE: [angle title — one sentence]
GEOGRAPHY: [geography]
DEPTH: [depth]
GOAL(S): [goals]
SOURCES MODE: [Cited in report | No citations needed]
CONTEXT: [context, or "(none)"]
Workflow:
Phase 1 — Search: WebSearch queries focused on your angle.
Deep: 3–4 queries in parallel. Moderate: 2 queries in parallel.
Phase 2 — Fetch:
Deep: fetch all key sources. Moderate: top 2 per query only.
Never hallucinate URLs. Only fetch URLs from WebSearch results.
Phase 3 — Assess Sources: label each source [Primary]/[Secondary]/[Partial]/[Unverified]
[Include full source label table from v1]
Phase 4 — Write Your Section(s):
Use goal-appropriate section headings:
[Include full Adaptive Sections by Goal table from v1]
Select only headings your angle's evidence supports.
Open each section: *Confidence: [qualitative rating]* ← qualitative only; synthesis adds numeric
Inline citations: [n] starting at [1] within your output only (synthesis renumbers globally).
End with:
LOCAL SOURCES:
- [1] [Title](URL) — Publisher [Label]
- [2] ...
Never Guess Rules (Non-Negotiable): [all 6 from v1 verbatim]
Output: ONLY your markdown section(s) + LOCAL SOURCES block. No preamble.
## Step 6b.3 — Synthesis Agent
Dispatch ONE synthesis agent after all specialists complete:
PROMPT:
You are a synthesis agent. Assemble [N] specialist outputs into one coherent report.
TOPIC: [topic] | GEOGRAPHY: [geography] | DEPTH: [depth] | GOAL(S): [goals]
SOURCES MODE: [Cited in report | No citations needed]
CONTEXT: [context, or "(none)"]
Specialist Outputs:
=== SPECIALIST: [Angle 1 title] ===
[full text of specialist 1 output]
=== END SPECIALIST ===
[... repeat for all N specialists ...]
Synthesis Workflow:
Step A — Deduplicate Sources
From all LOCAL SOURCES blocks: if same URL appears in multiple specialists,
merge into one entry (most informative title/label wins).
Assign new global citation numbers [1], [2], … in reading order of final assembled report.
Create local→global mapping per specialist.
Step B — Assemble Sections
Assemble all specialist sections in goal-based order:
[Include full Adaptive Sections by Goal table + multi-goal ordering rule from v1]
If two specialists wrote content for the same heading: merge prose into one section.
Update all inline citation references using global mapping from Step A.
Step C — Section Confidence (qualitative)
For each assembled section, apply v1 qualitative confidence rules using merged source labels:
[Include full v1 confidence rating table]
Treat [Unverified] as [Partial] in all threshold calculations.
Open each section: *Confidence: [qualitative]* (numeric added in Chunk 3).
Executive Summary confidence = worst rating across all sections.
Step D — Executive Summary
3–5 sentences summarizing most important findings across all sections.
If Cited: open with *Confidence: [rating]*.
Step E — Final Report Assembly
Output in order: # Research Report → *Generated: …* → ## Executive Summary →
[adaptive sections] → ## Next Steps → ## Sources (if Cited)
[Include v1 citation format and Output Instructions verbatim]
Never Guess Rules: [all 6 from v1 verbatim]
Output: ONLY the complete report starting with # Research Report.
## Step 6b.4 — Hand Off to Step 7
Pass synthesis agent output to Step 7 (unchanged: append to file).
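Step A's dedup-and-renumber pass amounts to a small amount of bookkeeping. A sketch under stated assumptions (the synthesis agent does this reasoning in prose, not by running a script; real global numbers follow reading order of the assembled report rather than specialist order, and the "most informative entry wins" merge is elided here):

```python
def renumber_sources(specialist_sources: list[list[tuple[str, str]]]):
    """Merge per-specialist (url, entry) LOCAL SOURCES lists into one global list.

    Returns the deduplicated global list plus a local->global citation map,
    keyed by (specialist_index, local_number).
    """
    global_sources: list[tuple[str, str]] = []
    url_to_global: dict[str, int] = {}
    mapping: dict[tuple[int, int], int] = {}
    for spec_idx, sources in enumerate(specialist_sources):
        for local_num, (url, entry) in enumerate(sources, start=1):
            if url not in url_to_global:
                global_sources.append((url, entry))
                url_to_global[url] = len(global_sources)  # 1-based global number
            mapping[(spec_idx, local_num)] = url_to_global[url]
    return global_sources, mapping

# Two specialists citing one shared URL: the duplicate collapses to one
# global entry, and both specialists' local [1] map to global [1].
specs = [
    [("https://a.example", "Report A")],
    [("https://a.example", "Report A"), ("https://b.example", "Report B")],
]
merged, cite_map = renumber_sources(specs)
```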
Acceptance tests:

  • Deep task: confirm multiple agent dispatches visible before report completion
  • Verify citation numbers are globally sequential (no resets, no gaps)
  • Verify same URL appears only once in Sources
  • Verify section order matches goal

## Chunk 3: Numeric Confidence Scores

Template changes: None.

1. Specialist agent prompt — end of Phase 4:

After LOCAL SOURCES, add:
SOURCE WEIGHTS:
- [1]: [Primary|Secondary|Partial|Unverified]
- [2]: ...
(one entry per source in LOCAL SOURCES)

2. Synthesis agent Step C — replace qualitative-only with qualitative + numeric:

Step C — Section Confidence (Qualitative + Numeric)
For each assembled section:
Numeric score:
Collect all source labels cited in the section (map local→global; use SOURCE WEIGHTS blocks).
Weight: Primary=100, Secondary=80, Partial=40, Unverified=10.
Section score = round(sum of weights / count of cited sources).
Example: 2 Primary + 1 Partial = round((100+100+40)/3) = 80.
Qualitative from score:
80–100 → High | 55–79 → Medium | 25–54 → Low | 0–24 → Speculative
Override rule: if numeric and qualitative-rules conflict, apply the more conservative rating.
Confidence line format (SOURCES MODE = Cited):
*Confidence: High (87)* ← qualitative + numeric in parentheses
Confidence line format (SOURCES MODE = No citations):
(no confidence line — omit entirely, same as v1)
Executive Summary score = minimum section score (same "worst section" rule).
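The scoring rule above is plain arithmetic. A sketch with the weights and bands copied from the table (`section_confidence` is an illustrative name):

```python
# Source-label weights, as specified in Step C.
WEIGHTS = {"Primary": 100, "Secondary": 80, "Partial": 40, "Unverified": 10}

def section_confidence(labels: list[str]) -> tuple[str, int]:
    """Numeric score = rounded mean of cited source-label weights, then banded."""
    score = round(sum(WEIGHTS[label] for label in labels) / len(labels))
    if score >= 80:
        qual = "High"
    elif score >= 55:
        qual = "Medium"
    elif score >= 25:
        qual = "Low"
    else:
        qual = "Speculative"
    return qual, score

# Worked example from the plan: 2 Primary + 1 Partial.
qual, score = section_confidence(["Primary", "Primary", "Partial"])
# score == 80, which lands in the High band; the Executive Summary
# then takes the minimum score across all sections.
```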
Acceptance tests:

  • Run Cited task, verify *Confidence: High (87)* format on every section + Executive Summary
  • Manual check: count source labels for one section, apply formula, confirm number matches
  • No citations task: verify zero confidence lines

## Chunk 4: Multi-AI Dispatch (ChatGPT / Gemini / Grok)

### Template — add after **Sources**, before ## Additional Context

**AI Sources**
- [x] Claude (always active)
- [ ] ChatGPT (requires OPENAI_API_KEY)
- [ ] Gemini (requires GEMINI_API_KEY)
- [ ] Grok (requires GROK_API_KEY)
<!-- Claude cannot be deselected. Additional AIs send parallel API queries merged into report. -->

Step 3 — Parse Parameters: add row:

| Parameter | How to parse | Default if absent |
| --- | --- | --- |
| AI Sources | All [x] under AI Sources; Claude always included | Claude only |

New Step 3a — API Key Check (after Step 3):

For each non-Claude AI selected:
Run: printenv [ENV_VAR] (OPENAI_API_KEY / GEMINI_API_KEY / GROK_API_KEY)
If missing: remove from active list; store warning "[AI] selected but [ENV_VAR] not found — skipped."
If all non-Claude AIs skipped → proceed as Claude-only (no warning yet; show in Step 8).
Store active external AI list.

Step 5 — Status line: when external AIs active, append · AI Sources: Claude, ChatGPT (etc.).

Step 6b.2 — extend parallel dispatch:

When external AIs active, run curl commands alongside specialist agents:

ChatGPT:
curl -s https://api.openai.com/v1/chat/completions \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-H "Content-Type: application/json" \
-d '{"model":"gpt-4o","messages":[{"role":"user","content":"[AI_QUERY_PROMPT]"}],"max_tokens":4096}'
Extract: .choices[0].message.content
Gemini:
curl -s "https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-pro:generateContent?key=$GEMINI_API_KEY" \
-H "Content-Type: application/json" \
-d '{"contents":[{"parts":[{"text":"[AI_QUERY_PROMPT]"}]}]}'
Extract: .candidates[0].content.parts[0].text
Grok:
curl -s https://api.x.ai/v1/chat/completions \
-H "Authorization: Bearer $GROK_API_KEY" \
-H "Content-Type: application/json" \
-d '{"model":"grok-beta","messages":[{"role":"user","content":"[AI_QUERY_PROMPT]"}],"max_tokens":4096}'
Extract: .choices[0].message.content

AI_QUERY_PROMPT sent to external AIs:

Research: [topic]
Geography: [geography]
Goal(s): [goals]
Context: [context]
Provide key findings, important sources (with URLs if possible), and the most important angles.
Structure with markdown headings. Be precise and factual. No padding.

External AI outputs are labeled [AI-Generated] — not independently web-fetched by Claude.
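The three calls share enough shape that one builder can produce them. A sketch that only constructs the request payloads, with endpoints, models, env var names, and extraction paths copied from the curl commands above (the actual parallel dispatch, e.g. backgrounded curl or a thread pool, is left to the implementation; no network call is made here):

```python
import json
import os

def build_request(provider: str, prompt: str) -> dict:
    """Build url/headers/body for one external AI call.

    ChatGPT and Grok share the OpenAI chat-completions shape; Gemini uses
    generateContent. API keys are read from the same env vars as Step 3a
    (empty string if unset, so building stays side-effect free).
    """
    if provider in ("chatgpt", "grok"):
        chatgpt = provider == "chatgpt"
        return {
            "url": ("https://api.openai.com/v1/chat/completions" if chatgpt
                    else "https://api.x.ai/v1/chat/completions"),
            "headers": {
                "Authorization": "Bearer " + os.environ.get(
                    "OPENAI_API_KEY" if chatgpt else "GROK_API_KEY", ""),
                "Content-Type": "application/json",
            },
            "body": json.dumps({
                "model": "gpt-4o" if chatgpt else "grok-beta",
                "messages": [{"role": "user", "content": prompt}],
                "max_tokens": 4096,
            }),
            # Response extraction path: .choices[0].message.content
            "extract": lambda r: r["choices"][0]["message"]["content"],
        }
    if provider == "gemini":
        return {
            "url": ("https://generativelanguage.googleapis.com/v1beta/models/"
                    "gemini-1.5-pro:generateContent?key="
                    + os.environ.get("GEMINI_API_KEY", "")),
            "headers": {"Content-Type": "application/json"},
            "body": json.dumps({"contents": [{"parts": [{"text": prompt}]}]}),
            # Response extraction path: .candidates[0].content.parts[0].text
            "extract": lambda r: r["candidates"][0]["content"]["parts"][0]["text"],
        }
    raise ValueError(f"unknown provider: {provider}")

req = build_request("chatgpt", "Research: example topic")
```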

Step 6b.3 — synthesis agent: extend specialist block + add Step 3.5:

Add to specialist block:

=== EXTERNAL AI: ChatGPT (gpt-4o) ===
[parsed response text]
=== END EXTERNAL AI ===
[repeat for each active external AI]

Add new synthesis Step 3.5 (after Step C, before Step D):

Step 3.5 — AI Source Analysis (only when external AIs active)
Write ## AI Source Analysis section:
- List contributing AIs (Claude specialists + external)
- For major findings: note agreement (appears in multiple AI outputs) or divergence (one AI only)
- If external AI contradicts Claude's web-researched finding: note the contradiction; favor Claude
- Keep to 3–6 bullet points; no inline citations in this section
Position in final report: immediately after ## Executive Summary, before adaptive sections.
Omit section entirely if only Claude is active.

Add [AI-Cited] to source label table: “URL provided by external AI — not independently fetched in this session.”

Step 8 — extend confirm message:

Done. Report appended to [full path]
[if API keys skipped] ⚠ Skipped: ChatGPT (OPENAI_API_KEY not found)
Acceptance tests:

  • With OPENAI_API_KEY set, check [x] ChatGPT: verify ## AI Source Analysis section in report
  • Unset OPENAI_API_KEY: verify graceful skip warning in Step 8, Claude-only report
  • v1 task file: verify Claude-only, no AI Source Analysis section

## Chunk 5: Google Doc Export

### Template — add after **AI Sources**, before ## Additional Context

**Output**
- [x] Append to task file (always)
- [ ] Also export to Google Doc

Step 3 — Parse Parameters: add row:

| Parameter | How to parse | Default if absent |
| --- | --- | --- |
| Output | [x] items under Output; “Append to task file” always active | Append to file only |

Step 5 — Status line: when Google Doc active, append → … + Google Doc.

New Step 7b (after Step 7, before Step 8) — Google Doc Export:

## Step 7b: Google Doc Export (only when "Also export to Google Doc" is checked)
7b.1 — Check credentials
Run: printenv GOOGLE_SERVICE_ACCOUNT_JSON
Run: printenv GOOGLE_APPLICATION_CREDENTIALS
If neither set: store warning "Google Doc export skipped — credentials not found." → skip to Step 8.
7b.2 — Check library
Run: python3 -c "import googleapiclient; import google.oauth2" 2>&1
If ImportError: store warning "Google Doc export skipped — install: pip install google-api-python-client google-auth" → skip.
7b.3 — Export via Python inline script (python3 -c "...")
Auth: parse GOOGLE_SERVICE_ACCOUNT_JSON env var (JSON string) or load from GOOGLE_APPLICATION_CREDENTIALS path.
Scopes: ["https://www.googleapis.com/auth/documents", "https://www.googleapis.com/auth/drive"]
Create doc titled: "[topic] — Research Report [YYYY-MM-DD]"
Convert markdown to Docs format (priority: headings H1/H2/H3, bold, italic, links, bullets; plain text fallback).
Insert content via docs.batchUpdate.
Set sharing: drive.permissions().create(fileId=doc_id, body={"role":"reader","type":"anyone"})
Print: https://docs.google.com/document/d/[doc_id]/edit
7b.4 — Store result
On success: store Google Doc URL.
On failure (non-zero exit): store warning "Google Doc export failed: [stderr]" — do not fail skill.
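One possible shape for the 7b.3 inline script, assuming `google-api-python-client` and `google-auth`; function names are illustrative, and the markdown conversion shown is only the plain-text fallback (the heading/bold/link conversion would add further batchUpdate requests):

```python
import datetime
import json
import os

def build_doc_requests(markdown: str) -> list[dict]:
    """Plain-text fallback: one insertText request placing the whole
    report at index 1 of the fresh document body."""
    return [{"insertText": {"location": {"index": 1}, "text": markdown}}]

def export_to_google_doc(topic: str, markdown: str) -> str:
    """Create, fill, and share a Google Doc; return the edit URL.

    Requires the credential env vars from Step 7b.1. Library imports are
    deferred so the pure helper above stays usable without them installed.
    """
    from google.oauth2 import service_account
    from googleapiclient.discovery import build

    scopes = ["https://www.googleapis.com/auth/documents",
              "https://www.googleapis.com/auth/drive"]
    raw = os.environ.get("GOOGLE_SERVICE_ACCOUNT_JSON")
    if raw:  # JSON string in the env var itself
        creds = service_account.Credentials.from_service_account_info(
            json.loads(raw), scopes=scopes)
    else:    # otherwise a path to a key file
        creds = service_account.Credentials.from_service_account_file(
            os.environ["GOOGLE_APPLICATION_CREDENTIALS"], scopes=scopes)

    docs = build("docs", "v1", credentials=creds)
    title = f"{topic} — Research Report {datetime.date.today():%Y-%m-%d}"
    doc_id = docs.documents().create(body={"title": title}).execute()["documentId"]
    docs.documents().batchUpdate(
        documentId=doc_id, body={"requests": build_doc_requests(markdown)}).execute()
    # Sharing: anyone with the link can read.
    build("drive", "v3", credentials=creds).permissions().create(
        fileId=doc_id, body={"role": "reader", "type": "anyone"}).execute()
    return f"https://docs.google.com/document/d/{doc_id}/edit"
```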

Step 8 — extend confirm:

Done. Report appended to [full path]
[if Google Doc URL obtained] Google Doc: https://docs.google.com/document/d/[doc_id]/edit
[if Google Doc skipped/failed] ⚠ [warning message]
[if API keys skipped] ⚠ Skipped: [AI name] ([ENV_VAR] not found)
Acceptance tests:

  • With credentials set, check [x] Also export to Google Doc: verify URL in confirmation, Doc accessible, sharing = Anyone with link
  • Without credentials: verify graceful skip warning
  • v1 task file: verify no Doc attempt, no warning

## Final End-to-End Verification

After all 5 chunks are implemented:

  1. Create a new task file from the updated template — verify all 6 parameter sections present with correct defaults
  2. Run /research (Scout first, Deep, all goals, Cited, Claude only): verify scout status line → parallel agent dispatches → numeric confidence scores → report appended
  3. Run again: verify re-run guard fires
  4. Run /research (Direct dive): verify no scout status line
  5. v1 compatibility: run old task file (missing new params) → verify defaults apply, correct report produced
  6. (Chunk 4 prereq) Multi-AI: verify with one external AI key set
  7. (Chunk 5 prereq) Google Doc: verify with credentials set

Suggested commit messages, one per chunk:

feat(research): v2 chunk 1 — scout phase
feat(research): v2 chunk 2 — parallel specialist agents + synthesis
feat(research): v2 chunk 3 — numeric confidence scores
feat(research): v2 chunk 4 — multi-AI dispatch
feat(research): v2 chunk 5 — Google Doc export

Note: SKILL.md and _Research-Template.md live outside the monorepo (~/.claude/skills/ and /mnt/d/FSS/KB/). Commits would be to the monorepo plan/spec docs only; skill files are not version-controlled.