How you integrate an LLM into your application fundamentally depends on who has control: does your code decide the flow while the LLM just responds, or does the LLM decide what to do next? This distinction leads you to two very different architectural patterns.

Pattern 1: LLM as Infrastructure

When the interaction is simple—you send a prompt, you receive a response—the LLM is infrastructure. It’s just another external service, like a translation or geocoding service. ...
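The "LLM as infrastructure" idea can be sketched as a thin port that the application depends on, just like any other external service. This is an illustrative sketch, not code from the post: the names (`SummarizerPort`, `summarize_order_notes`) and the use case are assumptions.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical sketch: the LLM sits behind the same kind of narrow port
# we would use for a translation or geocoding service. Application code
# depends on this interface, not on a specific LLM SDK.
@dataclass
class SummarizerPort:
    complete: Callable[[str], str]  # send a prompt, receive a response

def summarize_order_notes(port: SummarizerPort, notes: str) -> str:
    # The application decides the flow; the LLM only responds.
    prompt = f"Summarize the following notes in one sentence:\n{notes}"
    return port.complete(prompt)

# In tests (or here, as a stand-in) the port can be a trivial fake:
fake = SummarizerPort(complete=lambda prompt: "stubbed summary")
print(summarize_order_notes(fake, "customer asked for gift wrapping"))
```

Because the LLM is just infrastructure here, swapping providers or faking it in tests is a one-line change at the port.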
Domain Instrumentation: Keeping Use Cases Expressive
Instrumentation is essential in any application: logs for debugging, metrics for monitoring, and traces to understand execution flow. But when this instrumentation is mixed directly into use cases, the code becomes difficult to read, maintain, and, above all, test. In this post, I’ll explore a pattern I regularly apply: abstracting instrumentation behind domain-specific interfaces, keeping use cases expressive and focused on business logic. The pattern also lets me postpone infrastructure decisions and dramatically improves test quality. ...
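A minimal sketch of that pattern, with illustrative names (`PlaceOrderInstrumentation`, `PlaceOrder`) that are assumptions, not taken from the post: the use case talks to a domain-specific interface, and loggers or metrics clients live behind it.

```python
from abc import ABC, abstractmethod

# Hypothetical sketch: instrumentation expressed in domain terms,
# not as log lines or metric counters scattered through the use case.
class PlaceOrderInstrumentation(ABC):
    @abstractmethod
    def order_placed(self, order_id: str) -> None: ...
    @abstractmethod
    def order_rejected(self, order_id: str, reason: str) -> None: ...

class PlaceOrder:
    def __init__(self, instrumentation: PlaceOrderInstrumentation):
        self._instrumentation = instrumentation

    def execute(self, order_id: str, total: float) -> bool:
        if total <= 0:
            self._instrumentation.order_rejected(order_id, "non-positive total")
            return False
        self._instrumentation.order_placed(order_id)
        return True

# In tests, a simple spy replaces the whole logging/metrics stack:
class SpyInstrumentation(PlaceOrderInstrumentation):
    def __init__(self):
        self.events = []
    def order_placed(self, order_id):
        self.events.append(("placed", order_id))
    def order_rejected(self, order_id, reason):
        self.events.append(("rejected", order_id, reason))

spy = SpyInstrumentation()
PlaceOrder(spy).execute("order-1", 42.0)
print(spy.events)
```

The production implementation of the interface is where log formats, metric names, and tracing libraries get decided — which is exactly what allows postponing those infrastructure choices.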
Domain Events vs Integration Events
In event-driven architectures, it’s useful to distinguish between events based on their purpose: communication within the domain versus integration between bounded contexts. This distinction helps better manage coupling and system evolution.

The Context

When everything is modeled as “an event,” tensions emerge. A single message attempts to serve two different purposes:

- Communicate domain facts to coordinate reactions within the bounded context.
- Expose state changes as an integration contract with other bounded contexts.

For example, when a product is put on sale in an e-commerce system: ...
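The tension can be sketched by modeling the same fact twice, once per purpose. This is an illustrative sketch built on the post's e-commerce example; the event and field names are assumptions.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ProductPutOnSale:
    """Domain event: rich and internal, free to change with the model."""
    product_id: str
    previous_price: float
    sale_price: float
    campaign_id: str  # internal detail other contexts should not depend on

@dataclass(frozen=True)
class ProductPriceChanged:
    """Integration event: a stable, minimal contract for other contexts."""
    product_id: str
    price: float

def translate(event: ProductPutOnSale) -> ProductPriceChanged:
    # Published at the boundary; internal fields are deliberately dropped.
    return ProductPriceChanged(product_id=event.product_id, price=event.sale_price)

internal = ProductPutOnSale("sku-1", 100.0, 80.0, "black-friday")
print(translate(internal))
```

Keeping the two types separate lets the domain event evolve freely while the integration event stays a deliberate, versioned contract.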
Maintaining Discipline in AI-Assisted TDD
After several months experimenting with AI-assisted TDD, I’ve refined a workflow that allows me to combine the strict discipline of traditional TDD with the capabilities of AI assistants. It’s not about accelerating the process at any cost, but about maintaining quality and control while the agent handles the more mechanical tasks. The challenge of staying on track One of the main problems when working with AI assistants in TDD is maintaining process discipline. The temptation is real: letting the agent generate both tests and implementation in one go seems efficient. But by doing so, we lose the fundamental benefits of TDD: the emergent design guided by tests, the minimal necessary implementation, and the confidence the process gives you when you need to refactor. ...
The Test List as a Guide in AI-Assisted TDD
The practice of AI-assisted TDD (Test-Driven Development supported by artificial intelligence) combines the core principles of TDD with the capabilities of large language models (LLMs). In this approach, the developer collaborates with an AI assistant during the development cycle, leveraging its ability to generate code and maintain the iterative rhythm characteristic of TDD. One of the main challenges when working with AI assistants is preserving consistency and direction throughout the process. In this context, it becomes especially relevant to revisit a concept introduced by Kent Beck more than two decades ago: the test list as a guide for development. ...