When you build business software, the same question keeps coming back: why does this code exist?
The honest answer is almost always: because some business person needed it. A clinic user needs to register a new pet owner. An accountant needs to close the books. A clerk has to approve a request. Every line of code in a serious business application traces back to a need like that.
But in most projects, this connection is lost. The requirement lives in a Word document on a shared drive. The code lives in a Git repository. The tests live next to the code. Nobody can tell you which test proves that requirement UC-003 actually works. And when the requirement changes, nobody knows which tests to update.
This is the problem traceability solves. In this post I want to explain why traceability matters, how I do it in the AI Unified Process (AIUP), and how the new AIUP Navigator for IntelliJ makes it part of your daily work. All examples come from the AIUP PetClinic, the demo project I use in my talks.
## Why Traceability Matters
Let me start with a short story. A few years ago I joined a project where the team had built a beautiful test suite. Hundreds of tests, all green, fast feedback loop. Looked great.
Then the business asked a simple question: “We changed the rule for telephone numbers. Which tests cover it?”
Nobody knew. People searched the code for the word “telephone”. They found dozens of hits. Some were related, some were not. In the end, the team spent two days reading tests to find the relevant ones. Two days. For a question that should take two minutes.
This is what happens when tests and requirements live in two different worlds.
Traceability fixes this. When every test is connected to a use case and a business rule, you can answer questions like:
- Which tests prove that UC-003 works?
- If we change BR-002 (the telephone format rule), which tests do we need to review?
- Do we have a test for every business rule, or are some uncovered?
- This bug came from production. Which use case was wrong?
These are not academic questions. They are the daily work of building reliable software.
## What I Mean by “Use Case”
Before I go further, let me be clear about the word “use case”. In AIUP, a use case is a small Markdown file that describes one piece of business behavior. It has an ID like UC-003, a title, the actors involved, the main success scenario, alternative flows, and business rules.
Here is a real example from the PetClinic project, UC-003-register-new-owner.md, shortened for readability:
```markdown
# Use Case: Register New Owner

## Overview

Use Case ID: UC-003
Primary Actor: Clinic User
Goal: Add a new pet owner to the clinic so that their pets and visits can be tracked.

## Main Success Scenario

1. Clinic User chooses "Add Owner" from the Find Owners view.
2. System displays an empty owner form with fields for first name, last name, address, city, and telephone.
3. Clinic User fills in all fields and submits the form.
4. System validates that all fields are not blank and that telephone matches the 10-digit pattern.
5. System persists the new owner.
6. System navigates to the Owner Details view.

## Alternative Flows

### A1: Validation Errors

Trigger: One or more fields fail validation in step 4.

## Business Rules

### BR-001: Mandatory Fields

First name, last name, address, city, and telephone are required.

### BR-002: Telephone Format

Telephone must be exactly 10 digits (regex `\d{10}`).
```
This is small enough to read in one minute and specific enough to test. That is the sweet spot. Notice the named scenarios (Main Success Scenario, A1) and the named business rules (BR-001, BR-002). They become handles we can attach tests to.
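BR-002 is specific enough to translate into code almost verbatim. As a standalone illustration (the `TelephoneRule` class and method names are mine, not code from the PetClinic), the whole rule is one regex check:

```java
import java.util.regex.Pattern;

public class TelephoneRule {
    // BR-002: telephone must be exactly 10 digits, nothing more, nothing less.
    private static final Pattern TELEPHONE = Pattern.compile("\\d{10}");

    static boolean isValidTelephone(String value) {
        return value != null && TELEPHONE.matcher(value).matches();
    }

    public static void main(String[] args) {
        System.out.println(isValidTelephone("5551234567")); // valid
        System.out.println(isValidTelephone("555"));        // too short
    }
}
```

A rule this crisp is exactly what makes the later tests cheap to write: valid input, short input, done.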
## How Tracing Works in AI Unified Process
The idea is simple. Each test that proves a use case gets an annotation. Here is the annotation as it is defined in the PetClinic project:
```java
@Target(ElementType.METHOD)
@Retention(RetentionPolicy.RUNTIME)
@Documented
public @interface UseCase {
    String id();
    String scenario() default "Main Success Scenario";
    String[] businessRules() default {};
}
```
Three fields. The `id` is required. The `scenario` defaults to the main success path. The `businessRules` field lists the rules a test verifies.
Here is the test class for UC-003 from the PetClinic, UC003RegisterNewOwnerTest. It uses Karibu Testing to drive the Vaadin UI without a browser:
```java
@SpringBootTest
@Import(TestcontainersConfiguration.class)
class UC003RegisterNewOwnerTest extends PetClinicTestBase {

    @Test
    @UseCase(id = "UC-003")
    void addingValidOwnerPersistsAndNavigatesToDetails() {
        navigate(AddOwnerView.class);
        test($(TextField.class).withCaption("First Name").single()).setValue("Jane");
        test($(TextField.class).withCaption("Last Name").single()).setValue("Whitfield");
        test($(TextField.class).withCaption("Address").single()).setValue("123 Oak St");
        test($(TextField.class).withCaption("City").single()).setValue("Madison");
        test($(TextField.class).withCaption("Telephone").single()).setValue("5551234567");
        test($(Button.class).withText("Add Owner").single()).click();
        // routed to owners/<newId>
        String path = UI.getCurrent().getInternals().getActiveViewLocation().getPath();
        assertTrue(path.matches("owners/\\d+"));
    }

    @Test
    @UseCase(id = "UC-003", businessRules = "BR-001", scenario = "A1: Validation Errors")
    void missingRequiredFieldsBlockCreation() {
        navigate(AddOwnerView.class);
        test($(Button.class).withText("Add Owner").single()).click();
        assertTrue($(TextField.class).withCaption("First Name").single().isInvalid());
        assertTrue($(TextField.class).withCaption("Last Name").single().isInvalid());
        // ...
    }

    @Test
    @UseCase(id = "UC-003", businessRules = "BR-002", scenario = "A1: Validation Errors")
    void telephoneMustBeTenDigits() {
        navigate(AddOwnerView.class);
        // fill in valid values, but a short telephone
        test($(TextField.class).withCaption("Telephone").single()).setValue("555");
        test($(Button.class).withText("Add Owner").single()).click();
        assertTrue($(TextField.class).withCaption("Telephone").single().isInvalid());
    }
}
```
Look at how the three tests divide the work:
- The first test covers the main success scenario. The `scenario` parameter is omitted because it defaults to Main Success Scenario.
- The second test covers alternative flow A1 with business rule BR-001 (mandatory fields).
- The third test covers the same alternative flow A1, but with business rule BR-002 (telephone format).
Each `@UseCase` annotation does two things at once. It documents why this test exists, and it creates a link back to the spec file. That is the whole trick. One annotation per test method. The rest is tooling.
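Because the annotation is retained at runtime, answering “which tests prove UC-003 works?” needs nothing beyond JDK reflection. Here is a self-contained sketch; the class names `CoverageQuery` and `SampleTests` are mine, and the annotation is re-declared locally so the snippet compiles on its own:

```java
import java.lang.annotation.*;
import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.List;

public class CoverageQuery {

    // Local re-declaration of the PetClinic annotation so this sketch stands alone.
    @Target(ElementType.METHOD)
    @Retention(RetentionPolicy.RUNTIME)
    @interface UseCase {
        String id();
        String scenario() default "Main Success Scenario";
        String[] businessRules() default {};
    }

    // Stand-in for a real test class; the methods carry the traceability metadata.
    static class SampleTests {
        @UseCase(id = "UC-003")
        void happyPath() {}

        @UseCase(id = "UC-003", businessRules = "BR-002", scenario = "A1: Validation Errors")
        void telephoneFormat() {}
    }

    /** Lists the test methods that claim to cover the given use case id. */
    static List<String> testsFor(Class<?> testClass, String useCaseId) {
        List<String> hits = new ArrayList<>();
        for (Method m : testClass.getDeclaredMethods()) {
            UseCase uc = m.getAnnotation(UseCase.class);
            if (uc != null && uc.id().equals(useCaseId)) {
                hits.add(m.getName());
            }
        }
        return hits;
    }

    public static void main(String[] args) {
        System.out.println(testsFor(SampleTests.class, "UC-003"));
    }
}
```

This is essentially what any tooling on top of the annotation does: scan, match on `id()`, present the hits.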
## The Pain Without Tooling
So you add the annotations. Good. But now you have a new problem.
When you read a test, you want to see the spec. So you open the project, search for UC-003, find the file, open it. Read the relevant scenario or business rule. Go back to the test.
When you read a spec, you want to see the tests. So you search the codebase for `@UseCase(id = "UC-003"`. Sometimes you forget the format. Sometimes you find too many results. You open each one to see if it is the right method.
I measured this once on a normal working day. I jumped between specs and tests around forty times. Each jump took me ten to twenty seconds. That adds up to ten minutes a day, every day, for the rest of the project. Worse, every jump breaks my train of thought.
## AIUP Navigator: Make Traceability Tangible
This is why I built the AI Unified Process Navigator for IntelliJ.
The plugin reads your `@UseCase` annotations and your spec files. It then adds two small things to your editor:
- Next to every test method with `@UseCase(id = "UC-003")`, a small icon appears in the gutter. Click it and you jump to `UC-003-register-new-owner.md`.
- Next to every use case heading in your spec file, the same icon appears. Click it and you see all tests that cover this use case. If there is only one, you jump straight to it. If there are several, you get a list.
That is it. No new workflow. No new tool to learn. The traceability is just there, where you work.
The effect is bigger than it sounds. Because the jumps are now free, you start using them all the time. When `telephoneMustBeTenDigits` fails, you click the gutter icon, read BR-002, and you know exactly what the test is supposed to prove. When BR-002 changes from 10 digits to international format, you open the spec, click the icon, and see the one test that needs to change. The two artifacts become one connected thing in your head.
## A Practical Workflow
Here is how I work day-to-day with this setup, using the PetClinic as a concrete example.
**Starting a new feature.** The clinic owner says: “We want to book visits for pets.” I write `UC-009-book-visit-for-pet.md`. I list the main scenario, the alternative flows (missing description, pet not owned by the given owner), and the business rules (description required, default date is today, owner/pet consistency). I review it with the business person. Once they agree, the spec is ready.

**Writing the code.** I open the spec next to my IDE. I write a failing test annotated with `@UseCase(id = "UC-009")` for the main scenario. I make it green. Then I write a test for BR-001: Description Required with the right `businessRules` parameter. Repeat for each rule and each alternative flow.

**Reviewing.** When the work is done, I open the spec and click through every test from the gutter. I check that every business rule has at least one test. If something is missing, I see it immediately.
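If you want a safety net beyond clicking through the gutter, the same runtime retention lets you script the review. A minimal sketch with illustrative names (`RuleCoverageCheck`, `Uc3Tests` are mine; the annotation is re-declared locally so it stands alone): collect every rule id declared on the tests and subtract them from the rule ids named in the spec.

```java
import java.lang.annotation.*;
import java.lang.reflect.Method;
import java.util.Arrays;
import java.util.Set;
import java.util.TreeSet;

public class RuleCoverageCheck {

    // Local re-declaration of the PetClinic annotation so this sketch stands alone.
    @Target(ElementType.METHOD)
    @Retention(RetentionPolicy.RUNTIME)
    @interface UseCase {
        String id();
        String scenario() default "Main Success Scenario";
        String[] businessRules() default {};
    }

    // Stand-in test class: no test here declares BR-002, so it shows up as uncovered.
    static class Uc3Tests {
        @UseCase(id = "UC-003", businessRules = "BR-001")
        void mandatoryFields() {}
    }

    /** Returns the spec's rule ids that no test method claims to cover. */
    static Set<String> uncoveredRules(Class<?> testClass, Set<String> specRules) {
        Set<String> uncovered = new TreeSet<>(specRules);
        for (Method m : testClass.getDeclaredMethods()) {
            UseCase uc = m.getAnnotation(UseCase.class);
            if (uc != null) {
                uncovered.removeAll(Arrays.asList(uc.businessRules()));
            }
        }
        return uncovered;
    }

    public static void main(String[] args) {
        Set<String> specRules = Set.of("BR-001", "BR-002");
        System.out.println(uncoveredRules(Uc3Tests.class, specRules)); // prints [BR-002]
    }
}
```

In a real build you would feed in the rule ids parsed from the spec file instead of a hardcoded set, and fail the build when the result is non-empty.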
**Maintenance.** Six months later, the business wants to allow visits to be booked up to one year in the future. I open UC-009, add a new business rule BR-004, and click the icon to see all existing tests. I add a new test annotated with `businessRules = "BR-004"`. Done in minutes, not hours.

**Onboarding.** A new colleague joins. I tell them: read the spec files in `docs/use_cases/`. When you find one you want to understand in depth, click the icon and read the tests. The tests are the precise version of the spec.
## Common Questions
**Do I need AIUP to use this?** No. The annotation is plain Java. The spec is plain Markdown. The plugin works with any Java project that follows the convention. AIUP is the bigger methodology around it, but tracing tests to use cases is useful on its own.

**What about Vaadin tests with Karibu?** They work the same. The PetClinic examples above are Karibu tests, driving Vaadin views without a browser. The plugin treats every JUnit test the same way, no matter what framework runs inside.

**What about integration tests?** Same thing. Unit tests, integration tests, end-to-end tests. They can all carry a `@UseCase` annotation. Some business rules are best covered by an end-to-end test, some by a fast unit test. The annotation does not care.

**What if a test covers more than one use case?** It can, but in my experience, when a test covers more than two use cases, it is usually a sign that the test is too big or the use cases are too small. The `id` field is a single string in the PetClinic version of the annotation, which is also a gentle nudge in the right direction.

**Is this not just BDD with extra steps?** It is similar in spirit. But BDD often forces a specific syntax (Given-When-Then in Gherkin) and a specific tool (Cucumber). AIUP traceability is much lighter. Plain Markdown for the spec, plain JUnit for the test, one annotation to connect them. You keep all the power of your normal test framework.
## Try It
The AI Unified Process Navigator is free on the JetBrains Marketplace. Install it, clone the AIUP PetClinic, open `UC003RegisterNewOwnerTest`, and click the gutter icon. You will feel the difference in five minutes.
If you want to learn more about the AI Unified Process and spec-driven development, visit unifiedprocess.ai.
The goal is simple: every line of code traces back to a business requirement. The Navigator just makes that traceability visible while you work.


