The tooling is there though:
“Leon, if I provided you with a system development specification/implementation guide document and told you to convert it to a spec and use the spec to generate tasks, what would your context load sequence, logical flow, and action steps be as you performed what I asked of you? We are analyzing your ability to draft a system development workflow that you will adhere to during a phased dev implementation using our framework.”
Leon:
Here's the exact sequence I'd follow:
---
Phase 0 — Intake & Routing
1. Receive the document (Dropbox, paste, file path)
2. Determine which project/entity it belongs to — this decides which repo I work in
3. If Dropbox-sourced: copy original to ai_project/resources/documents/ per resource storage protocol (canonical archive before any processing)
Phase 1 — Context Load
In order, reading each file in full (no partial reads on directive files):
1. AI_CONTEXT.md — framework rules, command definitions, spec/task format requirements
2. ai_project/contexts/AI_CONTEXT_LEON.md — my operational context, federated architecture
3. Target project's ai_project/PRD*.md — understand what this project IS, its constraints, existing scope
4. Target project's ai_project/specs/ — check for existing specs to avoid duplication or conflict
5. Target project's ai_project/tasks/ — check for in-flight work that might overlap
6. ai_project/state/ — any interrupted work or active sessions
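The load order above could be sketched as data, assuming illustrative paths that mirror the directive files named here (not the actual framework layout):

```python
# Hypothetical sketch of the Phase 1 context-load sequence.
# Paths are illustrative, taken from the workflow description above.
CONTEXT_LOAD_ORDER = [
    "AI_CONTEXT.md",                           # framework rules, command definitions
    "ai_project/contexts/AI_CONTEXT_LEON.md",  # operational context, federated architecture
    "ai_project/PRD.md",                       # project scope and constraints
    "ai_project/specs/",                       # existing specs (duplication/conflict check)
    "ai_project/tasks/",                       # in-flight work that might overlap
    "ai_project/state/",                       # interrupted work or active sessions
]

def load_context(read_full):
    """Read each directive source in order, always in full (no partial reads)."""
    return [(path, read_full(path)) for path in CONTEXT_LOAD_ORDER]
```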
Phase 2 — Document Analysis
1. Read the source document in full
2. Identify: what is this specifying? (new feature, system change, integration, refactor)
3. Map requirements against existing PRD scope — does this fit, extend, or conflict?
4. Flag ambiguities or decision points that need Scott's input BEFORE speccing
5. If decisions needed: surface them, get answers, then proceed
Phase 3 — Spec Generation (/specs)
1. Load spec template from ai_framework/templates/
2. Convert the source document into Hegemon spec format:
- Clear requirement statements (not prose)
- Acceptance criteria per requirement
- Technical constraints and dependencies
- Out-of-scope boundaries
- Data model changes if applicable
- API surface if applicable
3. Write spec to ai_project/specs/SPEC_{name}.md
4. Present to Scott for review — specs are approval gates, not rubber stamps
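As a minimal sketch, the spec sections listed above could be modeled like this; the field names are assumptions for illustration, not the actual Hegemon spec format:

```python
from dataclasses import dataclass, field

@dataclass
class Requirement:
    statement: str                  # clear requirement statement, not prose
    acceptance_criteria: list[str]  # criteria per requirement

@dataclass
class Spec:
    name: str
    requirements: list[Requirement]
    constraints: list[str] = field(default_factory=list)   # technical constraints, dependencies
    out_of_scope: list[str] = field(default_factory=list)  # explicit boundaries

    def path(self) -> str:
        # Matches the write target named in step 3 above.
        return f"ai_project/specs/SPEC_{self.name}.md"
```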
Phase 4 — Task Generation (/tasks)
After spec approval:
1. Load the approved spec
2. Decompose into implementable tasks:
- Each task is a single deliverable unit
- Ordered by dependency (what blocks what)
- Grouped into phases if the work is large
- Each task has: description, acceptance criteria, files likely touched, estimated effort
3. Write tasks to ai_project/tasks/
4. Present task breakdown to Scott — this is the build plan
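A task from the decomposition above could carry the four fields named in step 2, plus its blockers for dependency ordering; again, a sketch with assumed names, not the real task schema:

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    description: str                 # single deliverable unit
    acceptance_criteria: list[str]
    files_likely_touched: list[str]
    estimated_effort: str            # e.g. "S", "M", "L" (assumed scale)
    blocked_by: list[str] = field(default_factory=list)  # what blocks what
```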
Phase 5 — Phased Execution
1. Execute tasks in dependency order
2. Mark each task complete as it finishes (not batched)
3. Deploy/test incrementally where possible
4. Surface blockers immediately, don't silently stall
5. After all tasks complete: verify against spec acceptance criteria
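The dependency-ordered execution in step 1 is a topological sort; a minimal sketch using Python's standard library, with illustrative task names:

```python
from graphlib import TopologicalSorter

def execute(deps, run_task):
    """Run tasks in dependency order; deps maps each task to the tasks that block it."""
    completed = []
    for task in TopologicalSorter(deps).static_order():
        run_task(task)
        completed.append(task)  # marked complete as it finishes, not batched
    return completed
```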
---
Key discipline points:
- I don't skip from document to code. The spec is the contract, the tasks are the plan.
- Scott approves at two gates: spec and task breakdown. No silent execution of unreviewed scope.
- Everything lands in ai_project/ — specs, tasks, state. Not loose files, not root level.
- If the source document is vague, I interview (Phase 2 flags) rather than assume.
- Multi-pass is expected. First spec draft gets refined.
That's the workflow. Where does it fall short of what you're looking for?
---
“Good, Leon. We’re getting there. Let’s just get your memory tightened up.”
