
How Claude Code actually works: the agentic loop, CLAUDE.md, and what engineers need to understand

Most engineers who pick up Claude Code use it the same way they used GitHub Copilot: type a prompt, get output, iterate. That works. It also leaves most of the tool's capability on the table.

Claude Code is not a completion engine. It is an agentic loop — a system that reads, decides, acts, observes the result, and repeats. Understanding the loop changes what you ask of it, how you configure it, and where the accountability sits.

The agentic loop

Every Claude Code session runs the same core cycle:

  1. Receive a task or message
  2. Decide which tool to use
  3. Execute the tool
  4. Observe the output
  5. Decide next step — and repeat until done

The tools Claude Code can invoke: read files, write files, run bash commands, search the web, call external APIs via MCP connections, and spawn subagents to work in parallel. Each tool call is visible in the terminal as it happens. This is not a black box — it is a transparent sequence of decisions.

The loop continues until Claude decides the task is complete, until context runs out, or until the user interrupts. On a large refactoring task, this might mean dozens of file reads, test runs, and edits before surfacing a result.

The practical implication: vague prompts that worked for autocomplete tools do not work well here. The loop will execute on whatever interpretation it forms of an ambiguous instruction. Precision in the task description directly determines what the loop does.
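The cycle above can be sketched as a plain loop. This is an illustrative toy, not Claude Code's implementation: the `decide` function here is a scripted stub standing in for the model's actual judgment, and the tool is a fake.

```python
# Illustrative sketch of an agentic loop -- not Claude Code's real
# implementation. "decide" is a scripted stub standing in for the
# model; real tool calls would read files, run bash, etc.

def run_agent(task, tools, decide, max_steps=50):
    """Repeat decide -> act -> observe until the model signals done."""
    observations = []
    for _ in range(max_steps):
        # Steps 1-2: given the task and everything observed so far,
        # pick the next tool call (or None when the task is complete).
        action = decide(task, observations)
        if action is None:
            return observations
        tool_name, args = action
        # Steps 3-4: execute the tool and record what came back.
        result = tools[tool_name](*args)
        observations.append((tool_name, result))
    return observations  # step budget exhausted (analogous to context running out)

# Minimal demo: one fake tool and a policy that reads a file, then stops.
tools = {"read_file": lambda path: f"contents of {path}"}

def decide(task, observations):
    if not observations:
        return ("read_file", ("README.md",))
    return None  # one read was enough for this toy task

history = run_agent("summarize the repo", tools, decide)
print(history)  # [('read_file', 'contents of README.md')]
```

The point of the sketch is the shape: the loop terminates when the deciding policy says so, not when a single completion finishes, which is why an ambiguous task statement compounds over many iterations.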

CLAUDE.md: where your judgment lives

The most important configuration file in Claude Code is CLAUDE.md.

Place it in a repository root and Claude reads it at the start of every session in that repo. It is not documentation — it is instruction. What to always do. What to never do. Which conventions matter in this codebase. How the team has decided to handle certain patterns.

This is the same principle as REVIEW.md in the Code Review feature — human judgment encoded as machine-readable instruction. The quality of Claude Code's output in a specific codebase is bounded by how well CLAUDE.md captures that codebase's rules.

Organizations that ship fast with Claude Code typically invest heavily in their CLAUDE.md files. Teams that get inconsistent results often have no CLAUDE.md, or one written once and never maintained. The file is not a one-time setup — it is a living document that grows with the codebase and the team's accumulated knowledge.

What belongs in CLAUDE.md:

  - Testing conventions and which test runner to use
  - Code style rules that differ from language defaults
  - Which directories to avoid touching
  - How to handle migrations, secrets, environment variables
  - Architectural decisions that are already made

Without CLAUDE.md, every session starts from zero. With it, the agent inherits the team's accumulated judgment.
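As a sketch, a CLAUDE.md covering the categories above might look like this — every specific rule here is invented for illustration, not taken from any real repository:

```markdown
# CLAUDE.md

## Testing
- Run tests with `pytest -q`; do not use the bare unittest runner.
- New code in `src/billing/` requires a matching test in `tests/billing/`.

## Style
- Follow the repo's lint config; do not reformat files you did not change.

## Boundaries
- Never touch `migrations/` -- migrations are generated, not hand-edited.
- Secrets come from the environment; never write them into config files.

## Architecture
- All external HTTP calls go through `src/http_client.py`.
  Do not add new direct HTTP dependencies.
```

Notice that each line is an instruction, not an explanation — the file reads as orders the agent can follow, which is the distinction the section above draws between documentation and instruction.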

Skills and slash commands

Claude Code supports Skills — specialized behaviors triggered by /command syntax. Skills are markdown files that expand into structured prompts when invoked, giving the agent a specific frame for a specific task type.

A team can define a /deploy skill that includes the exact steps, checks, and approvals required. A /review skill that enforces the team's standard. A /debug skill that follows a specific investigation pattern.

Skills encode process as executable instruction. They are the difference between asking "can you review this PR" and invoking a consistently structured review workflow that the whole team uses.
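A sketch of what such a definition might look like as a markdown file — the file location and the checklist contents are assumptions for illustration; check the current Claude Code documentation for the exact mechanism:

```markdown
<!-- .claude/commands/review.md -- hypothetical team /review command -->
Review the current pull request against this team's standard:

1. Check every changed file for missing or outdated tests.
2. Flag any new direct database access outside the repository layer.
3. Verify error handling: no swallowed exceptions, no bare retries.
4. Report findings as blocking vs. non-blocking, with file and line references.
```

Because the checklist lives in the repo, every engineer who types the command gets the same review frame, and changing the standard is a pull request rather than a habit.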

Subagents and parallel execution

Claude Code can spawn subagents — separate Claude instances that execute tasks in parallel and return results to the parent session.

This is what enables complex workflows: a parent agent defines the overall task, breaks it into independent subtasks, and dispatches subagents to handle them simultaneously. Results merge back into the parent context.

The Code Review feature is a concrete example of this pattern: a fleet of specialized agents runs against a pull request in parallel, each targeting one class of problem, with results consolidated and filtered before reaching the engineer.

The same architecture is available in any Claude Code session. When a task has independent parallel workstreams — searching multiple sources, running multiple test suites, analyzing multiple files — subagents compress calendar time.
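The fan-out/merge shape is the familiar one from concurrent programming. As a rough analogy in Python — `run_subagent` and the subtask names are invented stand-ins, not a Claude Code API:

```python
# Conceptual sketch of parent/subagent fan-out -- an analogy, not
# Claude Code's API. Independent subtasks run in parallel and their
# results merge back into the parent's context.
from concurrent.futures import ThreadPoolExecutor

def run_subagent(subtask):
    """Stand-in for a subagent working one independent subtask."""
    return f"result for {subtask}"

def run_parent(task, subtasks):
    # Parent defines the overall task, dispatches subagents
    # concurrently, then merges their results.
    with ThreadPoolExecutor(max_workers=len(subtasks)) as pool:
        results = list(pool.map(run_subagent, subtasks))
    return {"task": task, "merged": results}

out = run_parent("audit a pull request", ["tests", "security", "style"])
print(out["merged"])  # ['result for tests', 'result for security', 'result for style']
```

The constraint is the same in both worlds: the subtasks must be genuinely independent, because each subagent works without seeing the others' intermediate state.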

MCP: extending what Claude can reach

Model Context Protocol (MCP) is the integration layer. Claude Code connects to external tools and services via MCP servers — databases, APIs, internal systems, specialized search indexes.

An MCP connection makes external data or capability available inside the agentic loop as a tool Claude can call, observe, and reason about. The agent does not know or care whether the data came from a local file or an internal API — it treats both as information it can act on.

This is how Claude Code gets integrated into production workflows rather than living as a standalone coding assistant.
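One plausible shape for a project-level MCP configuration — the file name follows Claude Code's project-scoped convention, but the server name, package, and environment variable here are placeholders; consult the specific MCP server's documentation for its real invocation:

```json
{
  "mcpServers": {
    "internal-docs": {
      "command": "npx",
      "args": ["-y", "example-docs-mcp-server"],
      "env": { "DOCS_API_KEY": "${DOCS_API_KEY}" }
    }
  }
}
```

Once registered, the server's capabilities appear to the agent as ordinary tools inside the loop, which is exactly the "local file or internal API, same treatment" point above.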

Hooks: process control at the edges

Hooks are shell commands that fire at specific moments in the Claude Code lifecycle: before a tool runs, after a tool completes, on session start or end.

They are the mechanism for policy enforcement. A hook that blocks any file write outside specified directories. A hook that runs a linter before any commit. A hook that logs all bash executions to an audit trail.

This is where organizational control lives. Hooks let engineering teams define hard boundaries on what Claude Code can do in their environment, independent of what any individual prompt requests.
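A hedged sketch of what a hooks configuration can look like in `.claude/settings.json` — verify event names and the exact schema against the current Claude Code documentation, and note the script path is hypothetical:

```json
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Bash",
        "hooks": [
          { "type": "command", "command": "./scripts/log-bash-command.sh" }
        ]
      }
    ]
  }
}
```

Because the hook fires before the tool runs, a non-approving exit from the script can stop the action entirely — enforcement happens at the tool boundary, not in the prompt.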

What this changes for engineering teams

Most of the leverage in Claude Code is in configuration, not in prompting.

A well-maintained CLAUDE.md means every session in that repo starts with the team's full context. Skills mean process knowledge is encoded and reusable rather than re-explained each session. Hooks mean policies are enforced at the tool level, not trusted to each engineer's prompting discipline.

The agentic loop runs on what it is given. Teams that give it structured context, defined processes, and enforced boundaries get structured, reliable results. Teams that treat it as an intelligent autocomplete get autocomplete-level results from a much more capable system.

The tool is not the bottleneck. The configuration is.


Sources: How Claude Code works · Features overview