Hey everyone,
I’ve been working on something called "ASES", and I think it’s finally at a point where it needs real-world testing instead of just me building in a vacuum.
This is not just another “AI tool” or prompt setup.
It’s basically my attempt at:
"Turning LLMs into a structured, end-to-end software development system"
---
## What ASES actually is
ASES is a "schema-driven Scrum workflow for AI-assisted development".
Instead of:
* random prompts
* messy context
* inconsistent outputs
It gives you:
* a "full project lifecycle"
* "structured artifacts (PRD, HLD, LLD, tasks, tests, etc.)"
* and a system where models operate inside that structure
---
## What makes it different
### 1. Everything is structured (schemas + templates)
Almost everything in ASES is backed by schemas:
* PRD → requirements
* HLD / LLD → architecture
* Tasks → execution
* Decisions → tracked explicitly
* Test suites + reports → validation
* Sprint summaries + audits → closure
So instead of “ask the model and hope for the best”,
you get:
"consistent, repeatable, schema-validated outputs"
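To make the schema idea concrete, here's a minimal sketch of validating a model-produced task artifact before it enters the project structure. All the field names here are my own illustration, not ASES's actual schemas:

```python
# Hypothetical sketch: check a model-produced task artifact against a
# minimal schema before accepting it. Field names are illustrative only,
# not ASES's real schemas.

TASK_SCHEMA = {
    "id": str,           # stable task identifier
    "title": str,        # one-line summary
    "sprint": int,       # sprint the task belongs to
    "acceptance": list,  # acceptance criteria strings
}

def validate_task(artifact: dict) -> list[str]:
    """Return a list of schema violations (empty means the artifact passes)."""
    errors = []
    for field, expected_type in TASK_SCHEMA.items():
        if field not in artifact:
            errors.append(f"missing field: {field}")
        elif not isinstance(artifact[field], expected_type):
            errors.append(f"{field}: expected {expected_type.__name__}")
    return errors

# A well-formed artifact passes; a malformed one is rejected with reasons.
good = {"id": "T-1", "title": "Add login", "sprint": 1, "acceptance": ["..."]}
bad = {"id": "T-2", "title": "Broken"}
print(validate_task(good))  # []
print(validate_task(bad))   # ['missing field: sprint', 'missing field: acceptance']
```

The point is that a bad output gets rejected with concrete reasons instead of silently drifting into the project.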
---
### 2. Full Scrum lifecycle (not just tasks)
This isn’t just a task runner.
It actually maps a full flow:
* Planning → PRD / roadmap
* Design → HLD / LLD
* Execution → sprint-based tasks + snapshots
* Testing → structured validation
* Closure → audit + summaries
Everything lives in a "project structure", not chat history.
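As a rough sketch, the phase → artifact mapping looks something like this (the file names and layout here are my guesses for illustration, not the repo's actual structure):

```python
# Hypothetical sketch of the lifecycle-phase → artifact mapping.
# File names and layout are illustrative, not the repo's real structure.
LIFECYCLE = [
    ("planning",  ["prd.md", "roadmap.md"]),
    ("design",    ["hld.md", "lld.md"]),
    ("execution", ["tasks/", "snapshots/"]),
    ("testing",   ["test_suites/", "test_reports/"]),
    ("closure",   ["audit.md", "sprint_summary.md"]),
]

def artifacts_for(phase: str) -> list[str]:
    """Look up which artifacts a given lifecycle phase produces."""
    for name, files in LIFECYCLE:
        if name == phase:
            return files
    raise ValueError(f"unknown phase: {phase}")

print(artifacts_for("design"))  # ['hld.md', 'lld.md']
```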
---
### 3. Runtime context control (this is the core)
Instead of dumping context into every prompt, ASES:
* Injects context "just-in-time (per action)"
* Uses "layered + scoped context" (global / sprint / execution)
* Adjusts how much context to include based on state
So the model only sees:
> what it actually needs *right now*
This is where a lot of the "token efficiency + consistency" comes from.
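A minimal sketch of what layered, just-in-time selection means in practice. The layer names (global / sprint / execution) come from the post; the scoping policy and everything else below is my own illustration:

```python
# Hypothetical sketch of layered, just-in-time context selection.
# Layer names come from the post; the policy table is illustrative.

CONTEXT_LAYERS = {
    "global":    "project goals, conventions, architecture summary",
    "sprint":    "current sprint plan, open decisions",
    "execution": "the single task being worked on right now",
}

# Which layers each action actually needs (illustrative policy, per action).
ACTION_SCOPE = {
    "plan_sprint":    ["global", "sprint"],
    "implement_task": ["global", "sprint", "execution"],
    "write_tests":    ["sprint", "execution"],
}

def build_context(action: str) -> str:
    """Assemble only the layers the current action needs, nothing more."""
    layers = ACTION_SCOPE.get(action, ["global"])
    return "\n".join(f"[{name}] {CONTEXT_LAYERS[name]}" for name in layers)

print(build_context("write_tests"))
```

Because each action declares its scope up front, the prompt never carries the full project history, which is where the token savings come from.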
---
### 4. Model orchestration (multi-model workflows)
ASES is designed for using multiple models with "clear roles", not one model doing everything.
For example:
* Claude Opus → deeper reasoning / architecture
* Claude Sonnet → implementation / iteration
* Gemini → alternative exploration / validation
Each step in the lifecycle can use a different model depending on the job.
It’s loosely coupled, so you can swap things based on your setup.
---
### 5. Command Center (important)
There’s a "Command Center in the repo" that explains:
* how context is selected
* when it’s injected
* how the lifecycle flows
* how decisions and artifacts connect
So you can actually understand what the system is doing, not just use it blindly.
---
## Why I built this
While working with models (especially Claude), I kept running into:
* context overload
* token waste
* lack of structure in long projects
* no clear separation of responsibilities
ASES is basically an attempt to solve all of that *together* by making the workflow itself structured.
---
## What I’m looking for
If you try it out, I’d really appreciate honest feedback:
* Does the structured workflow actually help?
* Does it feel too rigid or just right?
* Does the context system improve outputs or get in the way?
* How does it perform in real projects (not toy examples)?
* Did it actually help with token usage?
Even simple feedback is super helpful.
---
## Heads up
This is not a polished framework.
It’s more like:
* a working system
* with strong ideas
* that needs real-world stress testing
So feel free to:
* break it
* modify it
* suggest better approaches
---
I’ll drop the GitHub link below.
Everything’s there, including the Command Center and full project structure.
Would really appreciate anyone willing to try this in an actual project.