When people hear about AI-assisted software development, they often imagine code generation first.

I think that is the wrong place to start.

Code is the easy part for AI. The harder part is knowing what to build, how the system should behave, which business rules matter, and how all of that stays consistent as the application grows. That is exactly why I created my most recent demo project: a small feedback application for our local Java User Group. It was not built as just another CRUD sample. It was created as a showcase for the AI Unified Process and for spec-driven development in practice, and the full source lives in the feedback app repository on GitHub.

The project itself is intentionally simple. It is a feedback collection application built with Java 25, Spring Boot 4.0.3, Vaadin 25, jOOQ, PostgreSQL 17, Flyway, and Spring Security with passwordless email-based one-time-token authentication. That makes it modern enough to be realistic, but still small enough to understand end to end, as described in the project README of the feedback application.

What made this project interesting was not the size of the application. What made it interesting was the way it was developed.

The repository states this very clearly: the project was fully developed using the AI Unified Process, a spec-driven methodology that guides the lifecycle from requirements to deployment. And that is exactly what I wanted to demonstrate. AI Unified Process is not about asking an LLM to generate a pile of code from a vague prompt. It is about creating the right artifacts first, so AI has something precise and useful to work with, which is visible in the overview of the feedback app project.

Why a feedback app?

For a demo project, a feedback app is a very good candidate.

It has enough real functionality to matter. Users need to authenticate. They need to create forms, edit them, publish them, share them, collect anonymous responses, view results, and export them. At the same time, the domain is easy to understand. You do not need hours of business onboarding before you can discuss the behavior of the system. That is important when the real goal is to demonstrate a development process, not just an application. You can see this scope in the documented use cases of the feedback application.

In this case, the app was created as a showcase for a feedback solution that could be used in the context of our local Java User Group. That gave the project a concrete purpose. It was not a synthetic toy example. It had a simple but believable use case: create a form for a talk or event, share it, gather feedback, and analyze the results.

Starting with specifications, not code

What I like most about this project is that the repository makes the process visible.

The application is not only code. It also contains an entity model and a full set of documented use cases in docs/use_cases, including flows such as login, create form, edit form, publish form, submit feedback, view results, share form, generate QR codes, save templates, reopen or unpublish forms, and export results. In total, the project documents 16 use cases, and the test suite mirrors that structure with 16 use case tests from UC-01 to UC-16. This structure is visible in the use case documentation folder of the feedback app.

That is the point.

In many projects, documentation is either missing or written afterwards. It quickly becomes stale because the code evolves faster than the documents. In a spec-driven workflow, specifications are part of the development process itself. They are not decoration. They define the behavior that the implementation must follow.

You can see that very clearly in the use case UC-02 Create Form. It defines the goal, the actor, the preconditions, the main success scenario, the alternative flow for validation failure, and the postconditions. It even captures the business rule that every form receives a unique public token for access. That is much more useful than a ticket saying “build form creation dialog.”
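
To make that business rule concrete: the unique public token from UC-02 can be sketched in plain Java as a URL-safe random identifier. This is my illustrative sketch, not the project's actual implementation; the class and method names are hypothetical.

```java
import java.security.SecureRandom;
import java.util.Base64;

// Hypothetical sketch of the UC-02 business rule: every form receives a
// unique public token for access. The real project may generate tokens
// differently; this just shows one reasonable approach.
class PublicTokenGenerator {

    private static final SecureRandom RANDOM = new SecureRandom();

    /** Returns a URL-safe random token suitable for embedding in a share link. */
    static String newToken() {
        byte[] bytes = new byte[16]; // 128 bits of randomness
        RANDOM.nextBytes(bytes);
        // Base64 URL encoding without padding yields a 22-character token
        return Base64.getUrlEncoder().withoutPadding().encodeToString(bytes);
    }

    public static void main(String[] args) {
        System.out.println(newToken());
    }
}
```

Because the token is random rather than sequential, it doubles as a weak access secret: knowing one form's link tells you nothing about any other form.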

The same is true for the use case UC-05 Submit Feedback. It specifies that the form is accessed through a public token, that rating questions use a 1 to 5 scale, that text answers are optional, and that duplicate submissions are softly prevented through a browser cookie. That is precisely the kind of behavioral clarity AI needs if you want generated code and tests to be reliable.
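
The "soft" duplicate prevention from UC-05 is simple enough to sketch as well. Assuming a per-form submission cookie (the cookie prefix and method names below are my invention, not the project's API), the check reduces to a name lookup:

```java
import java.util.Set;

// Hypothetical sketch of UC-05's soft duplicate-submission check: after a
// successful submission, the browser receives a cookie keyed by the form's
// public token; a later visit with that cookie is treated as a duplicate.
class DuplicateSubmissionGuard {

    private static final String COOKIE_PREFIX = "feedback-submitted-";

    /** True if the browser already carries a submission cookie for this form token. */
    static boolean alreadySubmitted(Set<String> cookieNames, String formToken) {
        return cookieNames.contains(COOKIE_PREFIX + formToken);
    }

    /** Name of the cookie to set after a successful submission. */
    static String submissionCookieName(String formToken) {
        return COOKIE_PREFIX + formToken;
    }
}
```

The "softly" in the spec matters: a cookie can be cleared, so this is a convenience guard against accidental double submissions, not a hard guarantee, and the use case documents it as exactly that.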

The entity model as a shared reference

Another thing I wanted to show with this demo is that good specifications are not limited to user flows.

The project also contains an entity model of the feedback application that describes the core concepts of the system: feedback forms, questions, responses, answers, shares, templates, template questions, and access tokens. The relationships are explicit. A feedback form contains questions and receives responses. A response contains answers. Templates contain reusable questions. Access tokens support passwordless login.
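
Those relationships can be condensed into a few Java records. This is a deliberately simplified sketch of the documented model; the field names and types are illustrative, and the real schema lives in the Flyway migrations and the jOOQ-generated classes.

```java
import java.util.List;

// Hypothetical condensed sketch of the documented entity model:
// a feedback form contains questions and receives responses,
// and a response contains answers.
class EntityModelSketch {

    record Question(long id, String text, boolean ratingQuestion) {}

    record Answer(long questionId, Integer rating, String text) {}

    record Response(long id, List<Answer> answers) {}

    record FeedbackForm(long id, String title, String publicToken,
                        List<Question> questions, List<Response> responses) {}
}
```

Even in this toy form, the model answers the questions that usually cause confusion: an answer belongs to a response, not directly to a form, and the public token is a property of the form itself.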

This matters because in real projects, misunderstandings often start at the data level. People use the same words for different things, or different words for the same thing. By writing down the entity model early, you create a shared language between business and development. That language then flows into the code, the database schema, the tests, and the prompts used with AI tools. You can see that shared vocabulary in the entity model documentation for the feedback app.

Traceability all the way into tests

One of the strongest aspects of this project is the traceability from specs to implementation.

According to the project README of the feedback application, the tests use Karibu Testing for server-side Vaadin UI testing, and there are 16 use case tests covering the full application workflow. Those tests are annotated with @UseCase so they can be traced back to the specifications in docs/use_cases.
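
A marker annotation like that is tiny to build. The sketch below shows how such a @UseCase annotation could look; the project's actual annotation may differ in detail, and the test class name here is hypothetical.

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Hypothetical sketch of a @UseCase marker annotation that links a test
// class back to a spec in docs/use_cases. Retained at runtime so tooling
// can read the mapping via reflection.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
@interface UseCase {
    /** Identifier of the documented use case, e.g. "UC-02". */
    String value();
}

// Illustrative usage: the Karibu-based UI test for form creation would
// carry the identifier of the spec it validates.
@UseCase("UC-02")
class CreateFormUseCaseTest {
    // server-side Vaadin UI test body omitted
}
```

Because the annotation is retained at runtime, a small reflection-based check or report can enumerate all test classes and verify that every documented use case has at least one test, which is the traceability the README describes.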

This is exactly the kind of discipline I believe teams need in the AI era.

If an LLM generates code but you cannot relate that code to a documented behavior, you may get something that looks impressive in a demo but becomes hard to maintain in a real system. By contrast, when use cases drive the tests and the tests validate the behavior, AI becomes much more useful. It stops being a random code generator and starts becoming an accelerator inside a controlled engineering process.

A realistic technical stack

The technical choices in this project were also deliberate.

I used Spring Boot, Vaadin, and jOOQ because that stack is very well suited for business applications and for a spec-driven workflow. Vaadin makes UI behavior explicit in Java. jOOQ keeps database access close to the actual relational model and gives you type-safe SQL. Flyway provides a clear schema evolution path. PostgreSQL gives a solid database foundation. Spring Security handles authentication, and in this case the app uses passwordless one-time-token login by email. For local development and integration testing, Testcontainers starts PostgreSQL and Mailpit automatically. These choices are described in the technical overview in the feedback app repository.
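
For readers curious what the passwordless setup looks like: Spring Security ships built-in one-time-token login support, and a configuration along these lines would enable it. This is a hedged sketch, not the project's actual security configuration; in particular, a OneTimeTokenGenerationSuccessHandler bean that emails the generated token (the part Mailpit would capture locally) must be provided separately.

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.Customizer;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.web.SecurityFilterChain;

// Hypothetical sketch of passwordless one-time-token login with Spring
// Security. The defaults expect a OneTimeTokenGenerationSuccessHandler
// bean elsewhere that delivers the token to the user, e.g. by email.
@Configuration
class SecurityConfigSketch {

    @Bean
    SecurityFilterChain filterChain(HttpSecurity http) throws Exception {
        http
            .authorizeHttpRequests(auth -> auth.anyRequest().authenticated())
            // Enables the token-request and token-submit endpoints with defaults
            .oneTimeTokenLogin(Customizer.withDefaults());
        return http.build();
    }
}
```

The appeal for a demo app is that there is no password storage at all: the email inbox is the credential, which is both user-friendly for event feedback and easy to verify end to end with a captured-mail tool like Mailpit.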

That last part is also worth noting. The development setup uses a dedicated test entry point and containerized infrastructure so the project is easy to run and test in a realistic environment. Mailpit captures outgoing emails, which is especially useful for verifying the passwordless login flow. This keeps the project practical while still being demo-friendly, as explained in the development and testing setup of the feedback app.

Small project, big lesson

This feedback app is not meant to impress through domain complexity. It is meant to make a point.

The point is that AI Unified Process works best when it is visible in a small, complete, understandable project. You can open the repository and see the use cases. You can inspect the entity model. You can map the behavior to the tests. You can understand how the application is structured. And because the domain is simple, you can focus on the method instead of getting lost in business complexity. All of this is visible in the feedback app repository and its documentation.

That is exactly why showcase projects matter. They give teams something concrete. Instead of talking abstractly about “better prompts” or “AI-powered development,” you can show what disciplined AI-assisted development actually looks like.

What this project demonstrates about AI Unified Process

For me, this repository demonstrates a few important things.

First, AI Unified Process is not theory. It can be applied to a complete working application, even a small one, in a way that is visible and inspectable. The repository itself says the project was fully developed using AI Unified Process, and the structure supports that claim through its documented use cases, entity model, traceable tests, and deployment-ready setup, all of which can be explored in the feedback app repository on GitHub.

Second, specifications are not overhead. In a project like this, they are the backbone. They make the intended behavior explicit before code is written. They reduce ambiguity. They create better input for AI. And they make the resulting system easier to validate and evolve. The use case UC-02 Create Form and the use case UC-05 Submit Feedback show that clearly.

Third, even a small app benefits from process discipline. That may sound obvious, but many teams still assume that structure only matters in large enterprise systems. I think the opposite is true. Small projects are often the best place to prove that a method works, because they remove excuses and make the essentials visible.

Final thoughts

I created this project as a showcase for a feedback app for our local Java User Group, but for me it is really a showcase for something bigger.

It shows that the future of AI-assisted software development is not only about generating code faster. It is about creating better specifications, clearer models, stronger traceability, and more reliable workflows. That is what AI Unified Process is trying to enable.

If AI makes implementation cheaper, then clarity becomes more valuable. And that is exactly what this small feedback app was built to demonstrate.