From individual AI to organizational AI: repo rules as the missing layer
Every development team asks the same question in 2026: which AI coding tool should we use?
It is the wrong question.
The tool is ephemeral
Roo Code — a popular VS Code extension forked from Cline, used by thousands of developers — shut down on May 15, 2026. Two weeks' notice. The team pivoted to a cloud-based product and abandoned the extension.
Every developer using Roo Code had to migrate overnight. Their prompting habits, their muscle memory, their workflow — all tied to a tool that no longer exists.
This is not an anomaly. It is the pattern. AI coding tools in 2026 are a fast-moving market where acquisitions, pivots, and shutdowns happen quarterly. Cursor may change its pricing model. GitHub Copilot will restructure its tiers. A new tool will emerge that makes today's favourite obsolete.
If your AI capability lives inside the tool, it dies with the tool.
What survives a tool switch
One thing survived the Roo Code shutdown without modification: the rule files in the repository.
Every serious AI coding tool in 2026 reads project-level configuration from files committed to git:
| Tool | Rule file | Scope |
|---|---|---|
| Claude Code | CLAUDE.md + .claude/rules/ | Project + workspace + user |
| Kilo Code | .kilocode/rules/ | Project |
| Cline | .clinerules/ | Project |
| Cursor | .cursorrules + AGENTS.md | Project |
| Codex CLI | AGENTS.md | Project |
| GitHub Copilot | .github/copilot-instructions.md | Org + repo |
These files are plain text, version-controlled, and reviewed like code. They travel with the repository, not with the tool. When a developer switches from Roo Code to Kilo Code — or from Cursor to Claude Code — the rules stay.
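To make the layout concrete, here is a hypothetical repository that serves several tools at once. The file names come from the table above; the structure itself is an illustration, not a prescription:

```
repo/
├── AGENTS.md                     # read by Codex CLI and Cursor
├── CLAUDE.md                     # read by Claude Code
├── .kilocode/
│   └── rules/
│       └── architecture.md       # read by Kilo Code
├── .github/
│   └── copilot-instructions.md   # read by GitHub Copilot
└── src/
```

All of it is plain text in git. A tool switch adds or removes one entry without touching the others.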
This is not a technical detail. It is the architectural insight that separates organizations with lasting AI capability from organizations with individuals who happen to use AI.
What goes in the rules
Rule files are not prompt libraries. They are not "tips for better AI output." They encode how your organization works:
Architecture standards. "This project uses modular monolith architecture with five layers: presentation, service, repository, model, data. All new code must follow this structure."
Technology constraints. "We use Laravel 11 with PHP 8.3. Do not suggest solutions in other frameworks. Do not introduce new dependencies without approval."
Quality requirements. "All database queries must use the repository layer. No raw SQL in controllers. All public methods require PHPDoc."
Naming conventions. "Database tables use snake_case plural. Models use PascalCase singular. API endpoints use kebab-case."
Domain terminology. "A 'journey' is a scheduled route. A 'trip' is a single execution of a journey. Do not conflate these terms."
These rules are not suggestions. They are constraints that prevent AI from generating code that compiles but violates your standards — the kind of code that creates technical debt invisibly.
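To make this concrete, here is a minimal sketch of a rule file built from the examples above. The specifics (Laravel 11, the journey/trip distinction) are illustrations from this article, not a template to copy verbatim:

```markdown
# Project rules

## Architecture
- Modular monolith with five layers: presentation, service, repository, model, data.
- All new code must fit one of these layers. No cross-layer shortcuts.

## Technology
- Laravel 11, PHP 8.3. Do not suggest solutions in other frameworks.
- No new dependencies without approval.

## Quality
- All database access goes through the repository layer. No raw SQL in controllers.
- Every public method gets a PHPDoc block.

## Naming
- Database tables: snake_case plural. Models: PascalCase singular. API endpoints: kebab-case.

## Terminology
- A "journey" is a scheduled route. A "trip" is a single execution of a journey. Never conflate them.
```

A file like this is short enough to review in minutes and specific enough to change what the model generates.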
The evidence: 90% of the problems
A CTO we work with reviewed every commit in his team's CI/CD pipeline for eight months. His finding: 90% of the problematic code was AI-generated.
Not because the tools were bad. Because the developers used AI without shared standards. Each developer prompted in their own way, with their own assumptions, producing code that looked correct but violated architectural patterns, ignored layer separation, or introduced inconsistencies that only showed up at integration.
The CTO had built comprehensive development guidelines — architecture standards, use case templates, layer definitions. He distributed them to the team. Almost nobody followed them.
Not because the guidelines were wrong. Because there was no mechanism to enforce them at the point of code generation. The guidelines lived in a document. The code generation happened in a tool that had never read that document.
Rule files solve this. When the architecture standard lives in CLAUDE.md or .kilocode/rules/, the AI reads it before generating a single line. The standard is not a document the developer should have read — it is a constraint the tool cannot ignore.
AGENTS.md: the emerging cross-tool standard
In 2026, a de facto standard is emerging: AGENTS.md. Multiple tools — Codex CLI, Cursor, and increasingly others — read this file for project-level instructions.
The convergence makes sense. Developers switch tools. Teams use different tools for different tasks. A project might use Cursor for frontend work and Claude Code for backend refactoring. If the rules live in a tool-specific format, they must be duplicated and maintained separately.
AGENTS.md is not yet universal, but the direction is clear: project rules will converge on a small number of formats that work across tools. Organizations that start encoding their standards now — in whatever format their current tool supports — are positioned to migrate those rules forward as the ecosystem matures.
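One way to avoid duplication while the formats converge is to keep the substance in AGENTS.md and make each tool-specific file a thin pointer to it. A sketch, assuming your tools follow a plain-prose delegation (worth verifying per tool):

```markdown
<!-- CLAUDE.md, .clinerules/main.md, and similar tool-specific files -->
Read AGENTS.md at the repository root and follow every rule in it
before generating or modifying any code.
```

When the ecosystem settles on a shared format, the pointers get deleted and the single file remains.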
Individual AI vs organizational AI
This is the distinction that matters:
Individual AI is a developer using a tool with personal prompting habits. The output quality depends entirely on that person's skill, context awareness, and discipline. When they leave the team, their AI capability leaves with them. When the tool changes, their workflow breaks.
Organizational AI is a team using tools constrained by shared rules committed to the repository. The output quality has a floor — no AI-generated code can violate the encoded standards, regardless of which developer prompted it. When someone leaves, the rules stay. When the tool changes, the rules migrate.
The gap between these two states is not a training problem. It is a governance problem. And the solution is not more workshops on prompting — it is rule files in git.
The governance layer nobody built
Most organizations in 2026 have invested in AI tools. Licenses purchased. Accounts provisioned. Maybe a workshop or two. Some have measured adoption through token spend or usage dashboards.
Almost none have built the governance layer: the rule files, reviewed and maintained like code, that encode how AI should behave in their specific context.
This is the missing layer. Without it:

- Every developer reinvents standards in every prompt
- Architectural decisions are made by the model, not the team
- Code reviews catch AI-generated violations after the fact instead of preventing them
- Tool switches reset the team to zero
- New hires have no encoded context to work with
With it:

- Standards are applied automatically at generation time
- Architectural patterns are enforced before code review
- Tool switches preserve organizational knowledge
- New developers inherit the team's accumulated AI governance on day one
- The CTO reviews rules instead of reviewing every commit
Starting: three files, one week
Building organizational AI governance does not require a transformation program. It requires three files and one week of attention:
Day 1-2: Write the architecture file. Document your project structure, layer definitions, technology constraints, and naming conventions. Put it in whatever rule format your current tool reads. If you use Claude Code, write CLAUDE.md. If you use Kilo Code, create .kilocode/rules/architecture.md. If you use multiple tools, write AGENTS.md.
Day 3-4: Add domain terminology. Define the terms that are specific to your business. AI models are trained on generic language — they will use "user" when you mean "passenger," "order" when you mean "booking," "message" when you mean "notification." A terminology file prevents this.
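A terminology file can be as simple as a glossary. A hypothetical sketch built from the examples above:

```markdown
# Domain terminology

- passenger: a person travelling. Never call this "user" in domain code.
- booking: a paid reservation. Not an "order".
- notification: a message pushed to a passenger. Not a "message".
- journey: a scheduled route. A "trip" is one execution of a journey.
```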
Day 5: Review the first week's output. With the rules in place, go through the week's AI-generated code. Where did the rules help? Where did they fail? What is missing? Update the files.
This is not a one-time setup. Rule files are living documents — they evolve with every code review that reveals a gap. But the investment is small: a few hours of writing, committed to git, reviewed like any other code change.
The return is organizational AI capability that persists across tool changes, team changes, and the relentless churn of the AI tooling market.
The question to ask
Stop asking "which AI tool should we use?" Every tool in 2026 is capable enough. The differences are in pricing, model access, and interface preferences — important, but not strategic.
Start asking: "What rules does our repository contain?"
If the answer is none, you do not have organizational AI capability. You have individuals who happen to use AI tools. The output is as inconsistent as the individuals, the knowledge as fragile as their tenure, and the capability as durable as the current tool's roadmap.
The tool is the commodity. The rules are the asset.
Tomas Andre reflects on why the tool question is always wrong — tomasandre.se/insights.