When I work with the AI Unified Process (AIUP), the focus is always on system use cases. They are the core artifact that connects requirements, existing systems, and generated implementation. This is not big upfront design. It is iterative and incremental. One use case at a time.
System Use Cases Are the Single Source of Truth
System use cases describe the system's externally observable behavior in a precise, testable way. They are the foundation for everything that follows.
There are two ways system use cases typically come into being:
- In new development, they are derived from requirements. Requirements tell us intent; system use cases turn that intent into concrete, observable behavior.
- In modernization projects, system use cases are often reverse engineered from the running system. The existing behavior becomes the specification.
In both cases, the goal is the same: make behavior explicit, precise, and measurable.
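AIUP does not prescribe one notation for this, but to make "explicit, precise, and measurable" concrete, here is a minimal sketch of how a system use case could be captured as structured data. The SystemUseCase fields and the "Withdraw Cash" example are my illustration, not a fixed AIUP template.

```python
# Hypothetical sketch of a system use case captured as data.
# Field names and the "Withdraw Cash" example are illustrative only.
from dataclasses import dataclass


@dataclass
class SystemUseCase:
    name: str
    actor: str
    preconditions: list[str]
    main_flow: list[str]       # numbered, observable steps
    postconditions: list[str]  # measurable outcomes


withdraw_cash = SystemUseCase(
    name="Withdraw Cash",
    actor="Account Holder",
    preconditions=["Card is valid", "Account balance >= requested amount"],
    main_flow=[
        "1. Actor inserts card and enters PIN",
        "2. System validates the PIN against the card (max 3 attempts)",
        "3. Actor enters an amount in multiples of 10",
        "4. System debits the account and dispenses cash",
    ],
    postconditions=["Account balance reduced by the amount", "Transaction is logged"],
)
```

Whether such a spec lives in Markdown, Gherkin, or code is secondary; what matters is that every step and every postcondition is observable and checkable.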
Business Use Cases vs System Use Cases
A lot of teams blur the meaning of "use case". In AIUP I make a clear distinction:
Business use cases describe what the business wants to achieve. They are about goals and value.
System use cases describe how the system behaves. They are detailed, unambiguous, and written in a form that both AI and humans can understand.
Business use cases give direction. System use cases give precision.
Only system use cases drive code and test generation.
If you want to understand why user stories and other business-level artifacts are often a poor fit for spec-driven development, see: Why User Stories Are a Poor Fit for Spec-Driven Development
No Big Upfront Design – Just Iterative Incremental Work
AIUP is not about big upfront design. It is about iterative, incremental progress. One system use case at a time. A small use case is specified, implemented, and verified. Then the next. This keeps feedback tight and reduces risk.
This approach works for new development and for complex modernization projects.
Using AI Agents to Generate Code and Tests
Once system use cases are defined, I use AI agents like Claude Code to generate both production code and tests. This is real code and real tests, not stubs or placeholders. Tests are generated together with the code. That means behavior is specified and verified from two sides.
Generated output is always reviewed by a developer.
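To illustrate what "tests generated together with the code" can look like, here is a hedged sketch based on the hypothetical "Withdraw Cash" use case above. Both the function and the tests are illustrative; they are not output from an actual Claude Code run.

```python
# Hypothetical sketch: the kind of production code and tests an agent
# might generate from the "Withdraw Cash" use case above.
import pytest


class InsufficientFunds(Exception):
    """Raised when a withdrawal exceeds the available balance."""


def withdraw(balance: int, amount: int) -> int:
    """Return the new balance after withdrawing amount (step 4 of the main flow)."""
    if amount <= 0 or amount % 10 != 0:
        raise ValueError("Amount must be a positive multiple of 10")
    if amount > balance:
        raise InsufficientFunds("Balance too low")
    return balance - amount


def test_withdraw_reduces_balance_by_amount():
    # Postcondition: account balance reduced by the amount.
    assert withdraw(balance=100, amount=30) == 70


def test_withdraw_rejects_amount_exceeding_balance():
    # Precondition violated: the balance must cover the requested amount.
    with pytest.raises(InsufficientFunds):
        withdraw(balance=20, amount=50)
```

The point is that the test names and assertions map directly back to the preconditions and postconditions of the use case, so a reviewer can check both sides against the same spec.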
How Problems Are Handled
When generated code does not meet expectations, the fix depends on the root cause:
- If the problem comes from an unclear spec, the answer is to improve the system use case, not to tweak the code (see the sketch after this list).
- If the problem comes from using libraries or frameworks incorrectly, then guidelines need improvement so the agent uses the stack correctly.
- For small issues, a manual fix in code is acceptable. Not everything must be regenerated. AIUP is practical, not dogmatic.
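As a small, hypothetical illustration of the first case: when a generated implementation guesses wrong because a step was vague, the fix is to sharpen that step in the system use case and regenerate, rather than patching the code.

```python
# Hypothetical before/after of a single main-flow step; the wording is mine,
# not a prescribed AIUP format.

# Before: vague, so the agent has to guess the failure behavior.
step_before = "2. System validates PIN"

# After: precise and testable, so code and tests can pin the behavior down.
step_after = (
    "2. System validates the PIN against the card; "
    "after 3 failed attempts the card is retained and the session ends"
)
```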
Why This Works
System use cases create a strong contract between intent and implementation:
- They work for greenfield and brownfield projects.
- They allow AI to generate meaningful, executable code.
- They make reviews focused and efficient.
- They turn problems into improvements of specs and guidelines.
AI becomes a reliable engineering tool, not a random generator.


