Introduction
In fast-moving software projects, documentation often struggles to keep up. Folder structures evolve, testing practices improve, coding standards shift, and tools are upgraded—usually without corresponding updates to the docs. Traditionally seen as a secondary task to be completed after coding, documentation often ends up outdated, incomplete, or inconsistent, failing to reflect the current state of the codebase. Yet documentation is essential: it standardizes practices across the team and plays a vital role in onboarding new developers.
In the AI era, documentation has become even more critical. Good documentation provides context that AI tools can leverage to understand your project’s structure, patterns, and goals. This improves the quality of code generation, refactoring suggestions, and overall AI alignment with your team’s practices. Today, AI makes it possible to shift documentation writing left—moving it earlier in the development cycle, alongside coding, testing, and design.
By integrating AI-assisted documentation into day-to-day development, teams can create a continuous feedback loop where docs evolve in sync with the code. This shift turns documentation from a static artifact into a living, reliable part of the project—valuable not only for human developers but also for enhancing AI’s effectiveness.
This article explores how AI can be used to automate and streamline documentation, transforming it from a post-development chore into an integrated part of the software lifecycle. By generating documentation as code is written, teams can ensure that docs stay current and relevant.
Markdown Configuration (MDC)
Markdown Configuration (MDC) refers to the use of structured Markdown files—not just for traditional documentation, but as machine-readable configuration that’s easy for both humans and tools to understand. In modern development workflows, MDCs define project-specific settings, coding standards, architecture overviews, component specs, and even build or deployment instructions.
Think of them as a blend of documentation and lightweight configuration files—simple to write, easy to version-control.
When structured consistently and enriched with annotations, these files become powerful resources—not just for onboarding developers, but also for AI tools that use project context to assist with code generation, refactoring, and documentation.
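To make the format concrete, here is a minimal, hypothetical MDC file. The frontmatter keys (`description`, `globs`, `alwaysApply`) match those used in the template example later in this article:
```markdown
---
description: Coding standards for API route handlers
globs: src/routes/**/*
alwaysApply: false
---

# Route Handler Standards

- Validate request bodies before touching the database.
- Return errors as JSON with an appropriate HTTP status code.
```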
Technical Documentation as a Living Asset
Technical documentation is essential for onboarding, maintainability, and ensuring team-wide consistency. A well-documented project helps new team members understand system architecture and adopt consistent development practices from day one.
The problem? Documentation often becomes stale. It’s written once, then forgotten—missing updates, ignoring new features, and failing to reflect architectural changes. This leads to confusion, wasted time, and growing technical debt.
This is where Markdown Configuration files shine. These structured docs serve a dual purpose: they’re readable by developers and interpretable by machines. When maintained properly, they provide a centralized, version-controlled source of truth for your engineering practices.
By integrating these documents into the development lifecycle—and using AI to help keep them current—they become a living knowledge base. AI can analyze changes in the codebase and suggest updates, or even auto-generate new documentation, ensuring everything stays in sync.
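As a rough sketch of what that automation can look like, the snippet below feeds a recent diff plus the current document to a language model and writes back the suggested revision. Everything here is illustrative: `callModel` stands in for whatever LLM client your team uses, and the paths are assumptions.
```typescript
// updateDocs.ts: illustrative sketch; `callModel` is a hypothetical
// wrapper around your team's LLM client.
import { execSync } from "node:child_process";
import { readFileSync, writeFileSync } from "node:fs";

export async function suggestDocUpdate(
  mdcPath: string,
  callModel: (prompt: string) => Promise<string>,
): Promise<void> {
  // Gather the latest code changes and the documentation as it stands today.
  const diff = execSync("git diff HEAD~1 -- src/").toString();
  const currentDoc = readFileSync(mdcPath, "utf8");

  // Ask the model to reconcile the document with the diff,
  // touching only the sections the changes actually affect.
  const revised = await callModel(
    `Current ${mdcPath}:\n${currentDoc}\n\nLatest diff:\n${diff}\n\n` +
      "Return the revised document, changing only sections the diff affects.",
  );

  writeFileSync(mdcPath, revised);
}
```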
Project Structure
One essential Markdown Configuration (MDC) file is `project-structure.mdc`. This file outlines the global folder layout of the project, along with naming conventions for directories and files. A well-documented structure helps developers quickly navigate the codebase, locate relevant components, and create new files that follow established patterns.
Clear structure isn’t just helpful for humans—it’s also a major advantage for AI tools. With a documented layout, AI doesn’t have to scan the entire project blindly. Instead, it can use folder conventions and naming schemes to find files more efficiently and generate suggestions that align with the project’s organization.
A consistent project structure also reduces the risk of files being added in the wrong place or following inconsistent naming. Over time, this helps maintain a clean, scalable architecture as the project grows and evolves.
# Project Structure
## Main Structure
- **Modular Structure:** Organize your application into logical modules based on functionality (e.g., `controllers`, `models`, `routes`, `middleware`, `services`).
- **Configuration:** Separate configuration files for different environments (development, production, testing).
- **Public Assets:** Keep static assets (CSS, JavaScript, images) in a dedicated `public` directory.
- **Views:** Store template files in a `views` directory. Use a template engine like EJS or Pug.
- **Example Structure:**
```
my-express-app/
├── controllers/
│   ├── userController.js
│   └── productController.js
├── models/
│   ├── user.js
│   └── product.js
├── routes/
│   ├── userRoutes.js
│   └── productRoutes.js
├── middleware/
│   ├── authMiddleware.js
│   └── errorMiddleware.js
├── services/
│   ├── userService.js
│   └── productService.js
├── config/
│   ├── config.js
│   └── db.js
├── views/
│   ├── index.ejs
│   └── user.ejs
├── public/
│   ├── css/
│   │   └── style.css
│   ├── js/
│   │   └── script.js
│   └── images/
├── app.js           # Main application file
├── package.json
└── .env             # Environment variables
```
## File Naming Conventions
- **Descriptive Names:** Use clear and descriptive names for files and directories.
- **Case Convention:** Use camelCase for JavaScript files and directories; consider PascalCase for component files.
- **Route Files:** Name route files according to the resource they handle (e.g., `userRoutes.js`, `productRoutes.js`).
- **Controller Files:** Name controller files according to the resource they handle (e.g., `userController.js`, `productController.js`).
- **Model Files:** Name model files after the data model they represent (e.g., `user.js`, `product.js`).
## Module Organization
- **ES Modules:** Use ES modules (`import`/`export`) for modularity.
- **Single Responsibility Principle:** Each module should have a single, well-defined responsibility.
- **Loose Coupling:** Minimize dependencies between modules to improve reusability and testability.
## Component Architecture
- **Reusable Components:** Break down the application into reusable components (e.g., UI components, service components).
- **Separation of Concerns:** Separate presentation logic (views) from business logic (controllers/services).
- **Component Composition:** Compose complex components from simpler ones.
### Code Splitting Strategies
- **Route-Based Splitting:** Split code based on application routes to reduce initial load time. Use dynamic imports (`import()`) to load modules on demand.
- **Component-Based Splitting:** Split code based on components, loading components only when they are needed.
Beyond the folder layout, an Express project might also document utilities, packages, or scripts, as well as tools like Turborepo used in monorepo setups.
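The code-splitting guidance in the structure above is easy to picture. A minimal sketch of route-based splitting in an Express app, where the route module path and its default export (an Express router) are assumptions:
```typescript
// app.ts: load admin routes on demand instead of at startup
import express from "express";

const app = express();

app.use("/admin", async (req, res, next) => {
  // import() is resolved once and cached by the module loader,
  // so only the first /admin request pays the loading cost.
  // Assumes ./routes/adminRoutes.js default-exports an Express router.
  const { default: adminRoutes } = await import("./routes/adminRoutes.js");
  adminRoutes(req, res, next);
});

app.listen(3000);
```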
Testing
Another valuable MDC file is `testing.mdc`, which outlines your project’s testing stack—technologies used, file structures, naming conventions, test types (unit, integration, end-to-end), mocking strategies, and test commands.
Testing is a core part of the development lifecycle. It prevents regressions, builds developer confidence, and ensures consistent quality. Standardizing testing practices across projects makes it easier for engineers to work across multiple codebases.
For AI-assisted development, this documentation is especially useful. When the AI understands your test setup, it can write, update, and execute tests more effectively. A powerful approach is guiding the AI toward Test-Driven Development (TDD)—writing tests first, then generating code to satisfy those tests.
The AI can even run the test suite, analyze failures, and suggest (or implement) fixes. Feeding it test coverage reports allows it to identify gaps and propose new tests to improve overall reliability.
# Testing Guidelines
This document outlines the project's testing standards and practices using Playwright.
## Test Structure
1. All tests must be placed in the `tests` directory
2. Tests should be organized by feature or component
3. File naming convention: `featureName.test.ts` or `componentName.test.ts`
## Test Execution
- Tests are run using Playwright through Bun
- Execute tests with `bun run test`
- Tests are configured to run in parallel by default
```bash
# Run all tests
bun run test
```
## Data Management
- Test data must be created programmatically before tests
- All test data must be cleaned up after tests complete
- Use test hooks for setup and teardown:
```typescript
// Example test with data setup and cleanup
import { test, expect } from '@playwright/test';

test.describe('Recipe feature', () => {
  let testRecipeId: string;

  test.beforeEach(async ({ request }) => {
    // Create test data
    const response = await request.post('/api/recipes', {
      data: {
        title: 'Test Recipe',
        ingredients: ['Ingredient 1', 'Ingredient 2'],
        instructions: ['Step 1', 'Step 2']
      }
    });
    const data = await response.json();
    testRecipeId = data.id;
  });

  test.afterEach(async ({ request }) => {
    // Clean up test data
    await request.delete(`/api/recipes/${testRecipeId}`);
  });

  test('should display recipe details', async ({ page }) => {
    // Test implementation
  });
});
```
## Feature Coverage
- Tests must cover all requirements specified in feature files
- Reference feature requirements by adding a comment with the feature name:
```typescript
// Covers @login feature: User can sign in with email and password
test('user can sign in with valid credentials', async ({ page }) => {
  // Test implementation
});
```
- Ensure all scenarios from feature files have corresponding tests
- Mark tests that cover specific feature scenarios with appropriate tags
## Prisma Mock
```ts
// `vi`, `describe`, and `it` must be imported (or enabled via Vitest globals).
import { beforeEach, describe, it, vi } from "vitest";
// Import the mocked client directly; this mock module is typically created
// with `mockDeep<PrismaClient>()` from vitest-mock-extended.
import prisma from "@/utils/__mocks__/prisma";

// Redirect every import of the real client to the __mocks__ version above.
vi.mock("@/utils/prisma");

describe("example", () => {
  beforeEach(() => {
    vi.clearAllMocks();
  });

  it("test", async () => {
    // Stub the query so code under test receives an empty result set.
    prisma.group.findMany.mockResolvedValue([]);
  });
});
```
## Best Practices
1. Isolate tests to prevent interdependencies
2. Use descriptive test names that explain the behavior being tested
3. Favor page object models for complex UI interactions (see the sketch after this list)
4. Minimize test flakiness by using robust selectors
5. Include appropriate assertions to verify expected behaviors
6. Use test.only() during development but remove before committing
7. Keep tests focused on a single behavior or functionality
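To ground best practice 3, here is a minimal page object for the recipe feature used above; the selectors are assumptions about the UI:
```typescript
// recipePage.ts: minimal page object model (selectors are assumptions)
import { expect, type Page } from "@playwright/test";

export class RecipePage {
  constructor(private readonly page: Page) {}

  async goto(recipeId: string) {
    await this.page.goto(`/recipes/${recipeId}`);
  }

  async expectTitle(title: string) {
    // Encapsulating selectors here keeps tests readable and
    // gives UI changes a single place to be absorbed.
    await expect(this.page.getByRole("heading", { name: title })).toBeVisible();
  }
}
```
A test can then call `await new RecipePage(page).goto(testRecipeId)` instead of repeating raw selectors in every spec.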
Stack
The `stack.mdc` file defines your project’s approved technologies, libraries, and tools—along with usage guidelines and best practices. This is especially important in collaborative or AI-assisted environments.
Without clear definitions, AI tools might introduce redundant dependencies, inconsistent patterns, or unfamiliar libraries. By clearly documenting the stack, you set boundaries and equip both your team and AI with a shared toolkit. The result: cleaner, more consistent, higher-quality code.
# Tech Stack Overview
## Frontend
- React (18.x) – Component-based UI development
- Tailwind CSS (3.x) – Utility-first CSS framework
- Vite – Fast bundler and dev server
## Backend
- Node.js (18.x) – Runtime environment
- Express – REST API framework
- Mongoose – MongoDB ODM
## Testing
- Jest – Unit and integration testing
- Supertest – HTTP assertions for Express apps
## Tooling
- ESLint – Code linting
- Prettier – Code formatting
- Husky + lint-staged – Pre-commit hooks
- Turborepo – Monorepo orchestration
The `stack.mdc` file lists the primary technologies and dependencies used in the project. Use smaller rules to detail installation steps, usage examples, and implementation patterns specific to each tool or dependency.
# UI Components and Styling
## UI Framework
- Use Shadcn UI and Tailwind for components and styling
- Implement responsive design with Tailwind CSS using a mobile-first approach
## Install new Shadcn components
```sh
npx shadcn@latest add COMPONENT
```
Example:
```sh
npx shadcn@latest add progress
```
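Once installed, the component is imported from the local components directory. A usage sketch, assuming shadcn’s default `@/components` alias:
```tsx
// UploadStatus.tsx: hypothetical component using the installed Progress
import { Progress } from "@/components/ui/progress";

export function UploadStatus({ percent }: { percent: number }) {
  // Render the shadcn Progress bar at the given completion percentage.
  return <Progress value={percent} className="w-full" />;
}
```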
Features Documentation
Feature documentation is often overlooked, but it’s essential for collaboration, maintainability, and aligning engineering with product goals.
A well-structured `feature-name.mdc` file acts like a functional ledger for your product. It bridges the gap between product managers, engineers, and QA, and provides essential context for AI-assisted workflows.
Each feature document should include:
- **Title:** A clear, concise name for the feature
- **Description:** A short explanation of the feature’s purpose and behavior
- **Gherkin Scenarios:** BDD-style specs describing user interactions and outcomes
- **Technical Design & Decisions:** Key architectural choices and trade-offs
- **Implementation Status:** Which scenarios have been implemented so far
- **User Flow Diagram:** A visual of how users interact with the feature
Gherkin scenarios are particularly valuable for AI. They provide structured, real-world behavior expectations—so the AI generates relevant tests instead of generic or inaccurate ones. Grounding AI-generated tests in Gherkin ensures they reflect real user flows and acceptance criteria.
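The translation from scenario to test is close to mechanical, which is exactly what makes Gherkin useful to an AI. A sketch of how the login scenario from the example below might become a Playwright test (selectors are assumptions):
```typescript
// Covers @authentication feature: Successful Login
import { test, expect } from "@playwright/test";

test("user is redirected to the dashboard after a valid login", async ({ page }) => {
  // Given I am a registered user on the login page
  await page.goto("/login");

  // When I enter my registered email and correct password and click "Log In"
  await page.getByLabel("Email").fill("user@example.com");
  await page.getByLabel("Password").fill("correct-horse-battery");
  await page.getByRole("button", { name: "Log In" }).click();

  // Then I should be authenticated and redirected to the dashboard
  await expect(page).toHaveURL(/dashboard/);
});
```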
# User Authentication System
## Feature Overview
**Title:** Comprehensive User Authentication System
**Description:** A complete authentication system allowing users to sign up, log in, recover passwords, manage sessions, and access protected routes with role-based permissions.
## User Stories & Acceptance Criteria
### Primary User Story
As a user of the X platform, I want a secure and intuitive authentication system so that I can access personalized features while ensuring my account remains secure.
### Gherkin Scenarios
#### Registration Flow
**Scenario 1: New User Registration**
Given I am an unregistered user on the signup page
When I enter a valid email "[email protected]"
And I enter a valid username "newuser"
And I enter a valid password that meets strength requirements
And I confirm my password correctly
And I accept the terms and privacy policy
And I click the "Create Account" button
Then my account should be created successfully
And I should receive a confirmation email
And I should be redirected to the login page with a success message
**Scenario 2: Registration with Validation Errors**
Given I am an unregistered user on the signup page
When I enter an invalid email format
Or I enter a username that is too short
Or I enter a password that doesn't meet strength requirements
Or my password confirmation doesn't match
And I click the "Create Account" button
Then I should see specific validation error messages for each invalid field
And my account should not be created
**Scenario 3: Registration with Existing Email**
Given I am on the signup page
When I enter an email that is already registered
And I complete all other fields correctly
And I click the "Create Account" button
Then I should see an error message indicating the email is already in use
And I should be offered a link to the login page
And I should be offered a link to the password recovery flow
#### Authentication Flow
**Scenario 4: Successful Login**
Given I am a registered user on the login page
When I enter my registered email
And I enter my correct password
And I click the "Log In" button
Then I should be authenticated successfully
And I should be redirected to the dashboard
And my authentication status should persist across page refreshes
**Scenario 5: Failed Login Attempts**
Given I am on the login page
When I enter correct email but incorrect password 3 times in succession
Then I should see a CAPTCHA challenge on my 4th attempt
And if I fail 5 total attempts, I should be temporarily rate-limited
And I should see a message suggesting password recovery
**Scenario 6: Remember Me Functionality**
Given I am on the login page
When I enter valid credentials
And I check the "Remember Me" option
And I click the "Log In" button
Then my session should persist even after browser restart
Until I explicitly log out or the extended session expires
#### Password Management
**Scenario 7: Password Recovery Request**
Given I am on the "Forgot Password" page
When I enter my registered email address
And I click the "Reset Password" button
Then I should receive a password reset email
And I should see a confirmation message on the page
**Scenario 8: Password Reset**
Given I have received a password reset email
When I click the reset link in the email
And I enter a new valid password
And I confirm the new password
And I submit the form
Then my password should be updated successfully
And I should be redirected to the login page with a success message
#### Session Management
**Scenario 9: Session Timeout**
Given I am logged in but inactive for the session timeout period
When I try to access a protected route
Then I should be redirected to the login page
And I should see a message indicating my session has expired
**Scenario 10: Manual Logout**
Given I am logged in
When I click the "Logout" button in the navigation
Then I should be logged out immediately
And I should be redirected to the home page
And I should no longer have access to protected routes
#### Authorization & Access Control
**Scenario 11: Role-Based Access**
Given I am logged in with "user" role
When I attempt to access an admin-only route
Then I should be denied access
And I should see an "Unauthorized" message
**Scenario 12: Auth State in UI**
Given I am logged in
When I view the navigation bar
Then I should see my username/avatar
And I should see logout option
And I should not see login/signup options
## Technical Design and Decisions
### Component Architecture
- **AuthProvider**: Context provider for global auth state
- **LoginForm**: Component handling user login
- **SignupForm**: Component handling user registration
- **PasswordResetForm**: Component for password reset
- **ProtectedRoute**: HOC/middleware for route protection
- **UserMenu**: Dropdown with user options including logout
### Data Models
```typescript
interface User {
  id: string;
  email: string;
  username: string;
  created_at: string;
  last_login: string;
  role: 'user' | 'admin';
  profile_image?: string;
}

interface Session {
  access_token: string;
  refresh_token: string;
  expires_at: number;
  user: User;
}

interface AuthState {
  session: Session | null;
  user: User | null;
  isLoading: boolean;
  error: string | null;
}
```
### API Endpoints
- `POST /auth/signup` - Register new user
- `POST /auth/login` - Authenticate user
- `POST /auth/logout` - End user session
- `POST /auth/reset-password` - Request password reset
- `PUT /auth/reset-password` - Complete password reset
- `GET /auth/session` - Validate and refresh session
- `PUT /auth/profile` - Update user profile
### Technical Constraints
- Use Supabase Auth for authentication backend
- Implement client-side validation with Zod
- Store tokens securely using HTTP-only cookies
- Implement CSRF protection for auth operations
- Support social login with OAuth providers (Google, GitHub)
- Ensure compliance with GDPR for account data
## Implementation Progress
- [X] User Registration (Signup)
- [X] User Authentication (Login)
- [ ] Password Recovery Flow
- [ ] Session Management
- [ ] Role-Based Access Control
- [ ] Profile Management
- [ ] Social Login Integration
## Flow Diagram
```mermaid
flowchart TD
A[User visits site] --> B{Has account?}
B -->|No| C[Signup Flow]
B -->|Yes| D[Login Flow]
C --> C1[Complete signup form]
C1 --> C2{Validation OK?}
C2 -->|No| C1
C2 -->|Yes| C3[Create account]
C3 --> C4[Send confirmation]
C4 --> D1
D --> D1[Complete login form]
D1 --> D2{Valid credentials?}
D2 -->|No| D3{Too many attempts?}
D3 -->|No| D1
D3 -->|Yes| D4[Show CAPTCHA/rate limit]
D4 --> D1
D2 -->|Yes| D5[Create session]
D5 --> E[Redirect to Dashboard]
F[Forgot Password] --> F1[Request reset]
F1 --> F2[Send email]
F2 --> F3[Reset password form]
F3 --> F4{Valid new password?}
F4 -->|No| F3
F4 -->|Yes| F5[Update password]
F5 --> D1
G[Protected Routes] --> G1{Active session?}
G1 -->|Yes| G2{Has permission?}
G1 -->|No| D
G2 -->|Yes| G3[Show content]
G2 -->|No| G4[Show unauthorized]
```
Create and Update Documentation using AI
Traditionally, documentation has required manual effort. But with large language models (LLMs) and code-aware AI assistants, much of that work can now be automated.
One effective method is defining templates for generating MDC files. Templates specify the structure, required fields, and formatting rules—helping AI produce consistent, high-quality documentation.
Templates
Templates outline the format for each document type. They often include:
- **Examples:** Good entries to emulate
- **AI Instructions:** How to interpret the template
- **Update Triggers:** Events like new routes or tool changes that require updates
# Feature Requirement Template
As a Product Owner, use this template when documenting new feature requirements.
# Features Location
How to add a new feature to the project:
1. Always place rule files in `PROJECT_ROOT/.cursor/rules/features`:
```
.cursor/rules/features/
├── your-feature-name.mdc
├── another-feature.mdc
└── ...
```
2. Follow the naming convention:
- Use kebab-case for filenames
- Always use .mdc extension
- Make names descriptive of the feature's purpose
3. Directory structure:
```
PROJECT_ROOT/
├── .cursor/
│   └── rules/
│       ├── features/
│       │   └── your-feature.mdc
│       └── ...
└── ...
```
4. Never place feature files:
- In the project root
- In subdirectories outside .cursor/rules/features
- In any other location
## Structure
Each feature requirement document should include:
1. **Feature Overview**
- Title
- Brief description
2. **User Stories & Acceptance Criteria**
- Primary user story
- Gherkin scenarios (Given/When/Then)
- Edge cases
3. **Flow Diagram**
- Visual representation of the feature flow
- Decision points
- User interactions
4. **Technical Design and Decisions**
- Component architecture
- Data models
- API endpoints
- State management approach
- Technical constraints
- Dependencies on other features
- Performance considerations
5. **Implementation Progress**
- A checklist of the scenarios that have already been implemented
## Example
````
---
description: Short description of the feature's purpose
globs: optional/path/pattern/**/*
alwaysApply: false
---
# Recipe Search Feature
## Feature Overview
**Title:** Recipe Search Enhancement
**Description:** Add advanced filtering capabilities to the recipe search function allowing users to filter by ingredients, cooking time, and dietary restrictions.
## User Stories & Acceptance Criteria
### Primary User Story
As a user with dietary restrictions, I want to filter recipes by specific ingredients and dietary requirements so that I can find suitable recipes quickly.
### Gherkin Scenarios
**Scenario 1: Basic Ingredient Filtering**
Given I am on the recipe search page
When I enter "chicken" in the ingredient filter
And I click the "Search" button
Then I should see only recipes containing chicken
And the results should display the matching ingredient highlighted
**Scenario 2: Multiple Dietary Restrictions**
Given I am on the recipe search page
When I select "Gluten-free" from the dietary restrictions dropdown
And I also select "Dairy-free"
Then I should see only recipes that satisfy both dietary restrictions
And each recipe card should display the dietary tags
**Scenario 3: No Results Found**
Given I am on the recipe search page
When I enter a very specific combination of filters that has no matches
Then I should see a "No recipes found" message
And I should see suggestions for modifying my search
And a "Clear all filters" button should be visible
## Technical Design and Decisions
### Component Architecture
- **SearchForm**: Client component that contains filter UI elements
- **RecipeResults**: Server component that fetches and renders filtered recipes
- **RecipeCard**: Reusable component that displays recipe information with dietary tags
- **EmptyState**: Component showing no results found message and suggestions
### Data Models
```typescript
interface Recipe {
  id: string;
  title: string;
  ingredients: string[];
  cookingTime: number;
  dietaryTags: string[];
  image: string;
}

interface SearchFilters {
  ingredients: string[];
  dietaryRestrictions: string[];
  maxCookingTime?: number;
}
```
### API Endpoints
- `GET /api/recipes/search` - Accepts query parameters for filtering recipes
  - Parameters: ingredients, dietaryRestrictions, maxCookingTime
  - Returns: Array of Recipe objects matching criteria
### State Management
- Use React Query for server state management
- Local component state for filter UI
- URL query parameters to make searches shareable and bookmarkable
### Technical Constraints
- Filter operations must complete within 500ms
- Support for mobile-responsive design
- Accessibility compliant (WCAG 2.1 AA)
### Dependencies
- Requires completed Recipe Database schema
- Uses shared UI component library for filter elements
## Implementation Progress
- [X] Scenario 1: Basic Ingredient Filtering
- [X] Scenario 2: Multiple Dietary Restrictions
- [ ] Scenario 3: No Results Found
## Flow Diagram
```mermaid
flowchart TD
A[User visits recipe page] --> B{Search or Filter?}
B -->|Search| C[Enter keywords]
B -->|Filter| D[Select filters]
C --> E[Submit search]
D --> E
E --> F{Results found?}
F -->|Yes| G[Display results]
F -->|No| H[Show no results message]
H --> I[Suggest filter modifications]
I --> J[Provide 'Clear filters' option]
G --> K[User selects recipe]
```
````
## AI Instructions
- Make the feature description concise but complete.
- Write Gherkin scenarios that cover the primary use cases and edge cases.
- Include a flow diagram that shows the user journey and system responses.
- Ensure all acceptance criteria are testable.
- Specify any technical constraints or requirements that might affect implementation.
- Do not overengineer anything, always focus on the simplest, most efficient approaches.
Roles
Instead of embedding update logic into every template, you can define roles to handle file updates. This keeps your templates clean and avoids including rule logic in the context when it’s not needed. A role typically includes:
- **Description** – What the role represents
- **Responsibilities** – What it’s accountable for
- **Actions** – What it can do
- **Action Examples** – How those actions apply in real situations
This modular approach enables AI to operate with intent, acting on roles rather than rewriting entire documents each time.
# Instructions
You are a multi-agent system coordinator, playing two roles in this environment: Planner and Executor. You will decide the next steps based on the current state in the TASKS.md file. Your goal is to complete the user's final requirements.
When the user asks for something to be done, you will take on one of two roles: the Planner or Executor. Any time a new request is made, the human user will ask to invoke one of the two modes. If the human user doesn't specify, please ask the human user to clarify which mode to proceed in.
The specific responsibilities and actions for each role are as follows:
# Role Descriptions
## Planner
**Responsibilities**
Perform high-level analysis, break down tasks, define success criteria, evaluate current progress. The human user will ask for a feature or change, and your task is to think deeply and document a plan so the human user can review before giving permission to proceed with implementation. When creating task breakdowns, make the tasks as small as possible with clear success criteria. Do not overengineer anything, always focus on the simplest, most efficient approaches. Analyze existing code to map the full scope of changes needed. Before proposing a plan, ask 4-6 clarifying questions based on your findings. Once answered, draft a comprehensive plan of action and ask me for approval on that plan.
**Actions**
Revise the TASKS.md file to update the plan accordingly.
### Task List Maintenance
1. Update the task list as you progress:
- Mark tasks as completed by changing `[ ]` to `[x]`
- Add new tasks as they are identified
- Move tasks between sections as appropriate
2. Keep "Relevant Files" section updated with:
- File paths that have been created or modified
- Brief descriptions of each file's purpose
- Status indicators (e.g., ✅) for completed components
3. Add implementation details:
- Architecture decisions
- Data flow descriptions
- Technical components needed
- Environment configuration
### AI Instructions
When working with task lists, the AI should:
1. Regularly update the task list file after implementing significant components
2. Mark completed tasks with [x] when finished
3. Add new tasks discovered during implementation
4. Maintain the "Relevant Files" section with accurate file paths and descriptions
5. Document implementation details, especially for complex features
6. When implementing tasks one by one, first check which task to implement next
7. After implementing a task, update the file to reflect progress
8. Break down features into tasks with specific success criteria
9. Clearly identify dependencies between tasks
### Example Task Update
When updating a task from "In Progress" to "Completed":
```markdown
## In Progress Tasks
- [ ] Implement database schema
- [ ] Create API endpoints for data access
## Completed Tasks
- [x] Set up project structure
- [x] Configure environment variables
```
Should become:
```markdown
## In Progress Tasks
- [ ] Create API endpoints for data access
## Completed Tasks
- [x] Set up project structure
- [x] Configure environment variables
- [x] Implement database schema
```
[...]
Pull Requests
Pull requests are a natural moment to update documentation—and an ideal place to involve AI. Every code change represents a knowledge change. Whether it’s a new route, a refactor, or a new dependency, there’s usually a documentation update to match. But in practice, these updates are often missed during code reviews.
By integrating AI into your pull request workflow, you can:
- Detect when documentation updates are needed
- Suggest or auto-generate the necessary changes
- Review and validate edits before merging
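A lightweight way to wire this into CI is a check that flags pull requests where code changed but no MDC file did, leaving the judgment call to a human or an AI reviewer. A sketch, where the paths and environment variable are assumptions:
```typescript
// scripts/checkDocs.ts: hypothetical PR check for stale documentation
import { execSync } from "node:child_process";

const base = process.env.BASE_REF ?? "origin/main";

// Files changed in this pull request relative to the base branch.
const changed = execSync(`git diff --name-only ${base}...HEAD`)
  .toString()
  .trim()
  .split("\n")
  .filter(Boolean);

const touchesCode = changed.some((f) => f.startsWith("src/"));
const touchesDocs = changed.some((f) => f.endsWith(".mdc"));

if (touchesCode && !touchesDocs) {
  console.warn("Code changed but no .mdc files were updated; review the docs.");
  process.exitCode = 1;
}
```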
Challenges
While AI-assisted documentation and MDCs offer huge benefits, scaling them across multiple teams and projects is not easy.
Easy for One, Hard for Many
In small projects, maintaining MDCs is manageable. Everyone knows the codebase and communication is quick. But in a large organization with dozens of repos, things get complicated fast.
A small rule change—like renaming service folders—can trigger a cascade:
- Update the central template or rule
- Identify where other projects diverge
- Sync documentation, tooling, and structure across teams
The Cascade Problem
The biggest challenge is propagation:
- How do you notify every team of rule changes?
- How do you identify out-of-sync projects?
- How do you roll out updates without breaking things?
Without a system for this, standards fragment and projects drift apart—defeating the purpose of shared documentation in the first place.
In addition, successful implementation requires more than just automation—it demands trained engineers who can craft concise, effective prompts and evaluate their performance. Measuring prompt outcomes is critical to ensure that updates driven by AI do not degrade the quality or accuracy of the responses over time.
Conclusion
As software development grows more complex, documentation can no longer be an afterthought. In the AI-assisted era, it’s not just about helping humans stay aligned—it’s about giving machines the context they need to help us build better software.
Markdown Configuration files offer a practical way to bridge this gap. They serve both humans and machines, and when paired with AI tools, they become dynamic assets that evolve with the codebase.
Shifting documentation left—into the development workflow—transforms it from a static burden into a strategic advantage. It improves onboarding, enforces consistency, and boosts the reliability of AI assistance.
But scaling this approach takes intention. As your engineering organization grows, you’ll need processes and tools to ensure alignment, manage rule changes, and maintain documentation quality across projects.
Done right, AI won’t just write your docs. It’ll help scale your culture.
The future of documentation isn’t after development. It’s embedded, automated, and always in sync—with your team, your tools, and your code.