Leveraging LLMs with Architect
Castlecraft Architect was not only built with the assistance of Large Language Models (LLMs) but is also designed to empower developers to leverage LLMs effectively in their own software development lifecycle. This document explores how Architect facilitates this synergy, its current capabilities, and future potential.
Architect's Genesis: A Testament to LLM Collaboration
The development of Architect itself serves as a case study in LLM-assisted software engineering. By providing LLMs (like Gemini) with a structured context, including:
- Component Definitions: Detailed JSONSchemas and examples for each component type Architect manages.
- Revision System: Clear explanations of how component operations translate into revisions.
The LLM could understand how to generate code, suggest architectural patterns, and even create new revisions because the problem space was well-defined. As Architect evolved, its own generated state and component definitions became part of the context fed back to the LLM, creating a virtuous cycle of improvement. This iterative process, where developers act as reviewers and refiners of LLM-generated code, was fundamental.
Current Capabilities: Using LLMs with Your Architect Project
Even with the current tooling, which centers on IDE plugins (such as Gemini Code Assist) and Architect's CLI, there is substantial potential to accelerate development:
Step 0: Use Case Definition (LLM-Assisted)
Before diving into Architect, teams can leverage LLMs to refine their initial ideas:
- Input: Raw wishlists, jumbled requirements, or high-level goals.
- LLM Task: Generate a structured use case document.
- Human Role: Review the draft, clarify ambiguities, answer LLM-posed questions, and validate assumptions.
- Outcome: A finalized use case document detailing scope, complexity, and depth, ready for architectural planning.
Step 1: Component Brainstorming & Initial Design
- Architect Task: Generate the current project state context using `architect state sync-from-code` and `architect context combine-for-llm`.
- LLM Task: Based on the finalized use case (from Step 0) and the project context, ask the LLM to suggest a list of the necessary architectural components (Aggregates, DTOs, Services, Events, etc.) that Architect can understand.
- Human Role: Review the LLM's component suggestions. Iterate on the design, confirming choices like ReadModels, Event Sourcing, Sagas, and other patterns. Validate these decisions with domain experts.
- Outcome: A refined list of components to be implemented.
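As a rough illustration of this step, the prompt can be assembled programmatically from the combined context. This is a sketch under stated assumptions: the prompt wording and the placeholder inputs are hypothetical, and in practice `project_context` would be the text produced by `architect context combine-for-llm`.

```python
def build_component_prompt(project_context: str, use_case: str) -> str:
    """Combine Architect's project context and a use case into one LLM prompt.

    Both inputs are plain text: `project_context` would typically be the
    output of `architect context combine-for-llm`, and `use_case` the
    finalized document from Step 0.
    """
    return (
        "You are assisting with a Castlecraft Architect project.\n\n"
        "Project context:\n" + project_context + "\n\n"
        "Use case:\n" + use_case + "\n\n"
        "Suggest the architectural components (Aggregates, DTOs, Services, "
        "Events, etc.) needed to implement this use case."
    )

# Example with placeholder inputs; in practice, read the real context file.
prompt = build_component_prompt("## Components\n(none yet)",
                                "Order management for a small shop.")
print(prompt.splitlines()[0])
```

Keeping the prompt construction in one small function makes it easy to rerun the same step whenever the project context is regenerated.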
Step 2: Implementation Prioritization
- LLM Task: Ask the LLM to prioritize the implementation order of the finalized components to enable progressive project development.
- Human Role: Review and adjust the prioritization based on team capacity, dependencies, and business value.
- Outcome: A high-level implementation roadmap or checklist, suitable for project tracking.
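One simple way to sanity-check an LLM-proposed order is a topological sort over known component dependencies, so that every component comes after the components it depends on. The component names and the dependency map below are hypothetical:

```python
from graphlib import TopologicalSorter

# Hypothetical dependency map: each component lists the components it
# depends on, so dependencies should be implemented first.
deps = {
    "CreateOrderDto": [],
    "OrderCreatedEvent": [],
    "OrderAggregate": ["CreateOrderDto", "OrderCreatedEvent"],
    "OrderService": ["OrderAggregate"],
}

# static_order() yields a valid implementation order (dependencies first).
order = list(TopologicalSorter(deps).static_order())
print(order)
```

The exact order among independent components (here, the DTO and the Event) may vary; what matters is that no component appears before its dependencies.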
Step 3: Generating Revision Drafts
- LLM Task: Ask the LLM to generate a `revisions.json` file, an array of `ComponentOperation` objects, based on a subset of the prioritized components.
- Architect Task:
- Use the Architect UI or CLI to create a new "Revision Draft" from this JSON.
- Visually inspect the proposed components, their relationships, generated imports, and naming conventions.
- Human Role: Manually refine the revision draft in the UI or by editing the JSON as needed.
- Outcome: A validated revision ready for application (either directly in local mode or via CI/CD in collaboration mode).
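A minimal sketch of drafting and sanity-checking such a file is shown below. The field names (`operation`, `componentType`, `name`) are assumptions for illustration; the authoritative shape of a `ComponentOperation` is defined by Architect's own JSONSchemas and may differ.

```python
import json

# Hypothetical ComponentOperation entries; the real schema is defined by
# Architect's component JSONSchemas and may differ from these field names.
revisions = [
    {"operation": "create", "componentType": "Aggregate", "name": "OrderAggregate"},
    {"operation": "create", "componentType": "DTO", "name": "CreateOrderDto"},
]

# Minimal sanity check before loading the draft into the Architect UI or CLI.
for op in revisions:
    assert {"operation", "componentType", "name"} <= op.keys()

draft = json.dumps(revisions, indent=2)
print(draft)
```

Even a basic structural check like this catches malformed LLM output before it reaches the revision-draft stage, where visual inspection takes over.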
Step 4: Code Implementation & Testing
- Architect Task: Apply the revision to scaffold the component boilerplate, then regenerate the project state context with `architect context combine-for-llm`. This context now includes the newly scaffolded components and potentially an updated OpenAPI specification (due to new DTOs and Routers).
- LLM Task: With the enriched context, ask the LLM to:
- Suggest implementations for the business logic within the scaffolded components (e.g., method bodies in services, aggregates).
- Generate unit and integration tests for these components.
- Human Role: Review, refine, and integrate the LLM-suggested code and tests.
This iterative cycle (Design -> Revise -> Scaffold -> Implement -> Test) can be repeated for different sets of components or features.
The Synergy of DDD and LLMs
While LLMs are versatile and can generate code for various patterns, the structured approach of Domain-Driven Design (DDD), which Architect promotes, offers significant advantages in an LLM-assisted workflow:
- Clarity for Humans and LLMs: DDD's emphasis on Bounded Contexts, ubiquitous language, and clear component roles (Aggregates, Value Objects, Domain Services, etc.) makes the codebase more understandable for both human developers and LLMs.
- Improved LLM Guidance: When developers understand the DDD patterns in the existing code, they can provide more precise and effective prompts to the LLM, leading to higher-quality suggestions.
- Maintainability at Scale: As projects grow, a well-defined DDD structure helps manage complexity. This organized structure also makes it easier for LLMs to understand the existing codebase and suggest coherent changes or new features.
- Ubiquitous Language: Using a consistent vocabulary among domain experts and developers, and even in prompts to LLMs, reduces misunderstandings and improves the accuracy of LLM outputs.
Indeed, you can even ask an LLM to summarize the context generated by `architect state export-json` to get a quick overview of your project's current architectural state!
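Even without an LLM, a small script can produce such an overview. The JSON layout below is an assumption standing in for the output of `architect state export-json`; the actual export format may differ.

```python
import json
from collections import Counter

# Placeholder standing in for the output of `architect state export-json`;
# the actual layout of the exported state may differ.
exported = json.loads("""
{"components": [
  {"type": "Aggregate", "name": "OrderAggregate"},
  {"type": "DTO", "name": "CreateOrderDto"},
  {"type": "DTO", "name": "OrderSummaryDto"}
]}
""")

# Tally components by type for a quick architectural snapshot.
counts = Counter(c["type"] for c in exported["components"])
print(dict(counts))  # -> {'Aggregate': 1, 'DTO': 2}
```

The same tally, fed back into a prompt, gives the LLM a compact summary of the project's current shape.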
Broader LLM Applications
Beyond direct code and component generation, LLMs can assist in various related tasks throughout the project lifecycle:
- Drafting project proposals based on use cases and suggested architecture.
- Generating presentation outlines and discussion points for stakeholder meetings.
- Suggesting alternative technology stacks based on scale, complexity, and specific non-functional requirements.
- Advising on choices for databases (transactional, analytical, ETL) or specialized authorization engines.
Future Vision: UI Integration and Beyond
The Architect team is actively working on enhancing the UI to more seamlessly integrate these LLM-assisted workflows. The goal is to move beyond IDE plugins and provide these capabilities directly within the Architect UI, making them more accessible to a broader range of team members, including those less familiar with direct LLM prompting.
Community contributions in this area, especially regarding UI development and novel LLM integrations, are highly encouraged and welcomed. The collaboration between human expertise and LLM capabilities holds the key to significantly boosting the efficiency and quality of complex application development.