
Socratic questions: the 2,400-year-old AI development method

Most developers interact with AI the same way they'd write a ticket: "Do X. Build Y. Fix Z." It works. You get output. But you're leaving most of the value on the table.

The developers I train who get the best results — the ones who hit 4-5x productivity — do something different. They ask questions. Not vague questions. Deliberate, open-ended questions that force the AI to bring context they wouldn't have thought to specify. The same technique, it turns out, that transforms how teams think about problems.

It has a name. It's 2,400 years old. And it's the most underrated technique in AI-assisted development.

What Socratic questioning actually is

Socrates didn't teach by lecturing. He taught by asking. Open questions — not leading ones — designed to help the other person arrive at insight through their own reasoning. Not "don't you think X is better?" but "what would happen if we approached it this way?" Not giving answers, but creating the conditions for better thinking.

The method has three characteristics that matter for AI development:

It's inquiry, not instruction. You ask what the best approach is. You don't dictate it.

It forces reflection. The person — or system — being questioned has to evaluate, connect, and reason. Not just execute.

It builds understanding, not dependency. The insight belongs to the person who arrived at it, not the person who asked the question.

"Instead of telling them what to do in the planning phase, you ask Socratic questions."

Applied to AI: questions beat instructions

Here's what the difference looks like in practice.

Instruction mode: "Write a Python function that validates email addresses using regex, handles edge cases, and returns a boolean."

Socratic mode: "I need to validate email input from a web form. What are the main approaches, and what are the tradeoffs between strict regex validation versus a simpler check-then-verify flow?"

The instruction produces exactly what you asked for — nothing more. The Socratic approach produces something you wouldn't have specified: context about tradeoffs, alternative approaches, edge cases you hadn't considered. The AI brings its full training to bear because you gave it room to reason, not just execute.
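To make the tradeoff concrete, here is a minimal sketch of the "check-then-verify" flow the Socratic exchange might surface: a deliberately permissive structural check up front, with real verification deferred to a confirmation email. This is an illustration, not code from the article; `send_confirmation` is a hypothetical placeholder for a mailer.

```python
def send_confirmation(address: str) -> None:
    # Placeholder: a real app would enqueue a verification email here.
    pass


def looks_like_email(address: str) -> bool:
    """Cheap structural check: one '@', a non-empty local part, and a
    dot inside the domain. Deliberately permissive -- the confirmation
    email, not the regex, proves the address actually works."""
    local, sep, domain = address.strip().partition("@")
    if not (sep and local and domain):
        return False
    return "." in domain[1:-1]


def accept_signup(address: str) -> str:
    """Check-then-verify: accept anything plausible, then verify."""
    if not looks_like_email(address):
        return "rejected"
    send_confirmation(address)
    return "pending-verification"
```

The point is the shape of the design, not the validation logic itself: a strict regex tries to decide correctness up front, while check-then-verify moves the hard question to a step that can actually answer it.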

This isn't about being polite to the AI. It's about input curation. What you put into the context window determines what comes out. And a well-formed question puts more useful context into play than a well-formed instruction.

In the planning phase — before any code is written — this is where the real leverage is. Instead of "build me X," you ask:

  • "What's the best architecture for this given these constraints?"
  • "What are the failure modes I should worry about?"
  • "If you were reviewing this approach, what would you challenge?"

Each question forces the AI to activate different parts of its training. You're not narrowing the output — you're expanding it. And then you select, validate, and decide. That's sharper thinking in practice.

Applied to people: the same method, the same result

Here's where it gets interesting. The exact same technique that makes AI interaction better also transforms how teams develop.

I've seen organizations where developers execute tickets but never ask "what does the customer actually need?" The code works. The tests pass. But nobody has thought about whether the thing they built matters. Nobody feels the weight of the customer's problem in their gut.

The root cause, in every case I've encountered, is an instruction-based culture. Specs are written. Tickets are filed. Developers execute. The system is efficient at producing output. It's terrible at producing ownership.

The Socratic alternative is simple but uncomfortable: instead of telling someone what to build, you ask them what the system needs. Instead of writing the spec, you ask them to write it — and then ask questions about their choices. Instead of correcting their approach, you ask "what happens if a customer encounters this?"

"By asking Socratic questions, she gets to reflect and think things through. That makes her wiser than if I simply told her what to do."

This is slower. Significantly slower, at first. A developer who has spent fifteen years receiving instructions needs time to rebuild the muscle of independent reasoning. An education system that rewards memorization produces people who are brilliant at executing defined tasks and lost when asked to define the task themselves.

But the investment compounds. A team that thinks in questions — "why are we building this?" "what does the user actually experience?" "how does this connect to the business?" — is a team that builds ownership. And ownership is the prerequisite for everything that follows.

Why this connects to sharper thinking

AI doesn't reduce cognitive load. It transforms it. You stop writing code and start defining what code should exist. You stop implementing and start validating. You stop building and start reviewing.

Socratic questioning is the mechanism that makes this transformation work. When you ask the AI "what's the best approach?" instead of telling it what to do, you're forced to evaluate the response. You need to understand the tradeoffs. You need to decide. That's harder than writing the code yourself — it demands a different kind of thinking.

The same applies to leading a team. When you tell someone what to build, the cognitive load is yours. When you ask them what should be built and why, the cognitive load transfers — and with it, the understanding and the ownership.

This is why experienced developers get more out of AI than juniors. It's not that seniors write better prompts. It's that seniors ask better questions. They've spent decades building intuition about what matters, what breaks, what customers actually need. That intuition translates directly into better inquiry — both with AI and with people.

"Senior + AI = extraordinary results. Not because of better prompts. Because of better questions."

The instruction trap

There's a pattern I see in organizations struggling with AI adoption. The team treats AI exactly like they treat a junior developer: give precise instructions, expect precise execution, review the output. It works for simple tasks. It fails completely for anything complex.

The failure isn't technical. The AI can handle complexity. The failure is that instruction-based interaction produces instruction-shaped output — narrow, literal, exactly what was asked and nothing more. The AI becomes a very fast typist instead of a thinking partner.

The same pattern shows up in team dynamics. Organizations that run on instructions — detailed specs, rigid tickets, no room for interpretation — produce teams that execute but don't think. They deliver what was specified, even when what was specified is wrong. Nobody raises their hand because nobody was asked a question.

Both failures have the same root: the culture of instruction kills the habit of inquiry. And without inquiry, there's no ownership, no reflection, no growth.

The compound effect

Socratic questioning creates a feedback loop that instruction never can:

With AI: Better questions → richer responses → better understanding of the problem → even better questions. Each cycle deepens your grasp of the domain and improves the quality of what the AI produces.

With people: Better questions → deeper reflection → growing ownership → independent thinking → people who ask their own questions. Each cycle builds capability that stays when you leave.

Between both: The developer who learns to ask Socratic questions of AI starts asking them of themselves. "Why am I building this? What would the customer say? What am I missing?" That internal inquiry is the highest form of the practice — and it produces developers who don't need to be managed, because they manage themselves.

This is what we mean when we say AI demands sharper thinking. Not that you need to be smarter. That you need to think differently — and the oldest method in philosophy turns out to be the most modern technique in AI development.

In practice

If you're working with AI today, try this for a week:

Before any implementation task, ask the AI three questions instead of giving one instruction. "What approaches would you consider for this?" "What are the risks I should think about?" "If you were reviewing this solution, what would you challenge?"
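One lightweight way to make this a habit is to wrap every implementation task in the same three questions before asking for code. A sketch of what that could look like; the question wording comes from the article, while the helper name and prompt layout are my own:

```python
# The three planning questions from the article, asked before any code.
SOCRATIC_QUESTIONS = (
    "What approaches would you consider for this?",
    "What are the risks I should think about?",
    "If you were reviewing this solution, what would you challenge?",
)


def socratic_prompt(task: str) -> str:
    """Prefix a task with the three planning questions, so the model
    reasons about the problem before producing an implementation."""
    numbered = "\n".join(
        f"{i}. {q}" for i, q in enumerate(SOCRATIC_QUESTIONS, 1)
    )
    return f"Task: {task}\n\nBefore implementing, answer:\n{numbered}"
```

For example, `socratic_prompt("validate email input from a web form")` produces a prompt that opens the planning conversation instead of jumping straight to code.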

If you're leading a team, try the same: replace one instruction per day with one question. Not a leading question — a genuine one. "What do you think the customer needs here?" "How would you approach this if you had full ownership?" "What would you change about our current process?"

The first few days will feel slower. The first few weeks will feel frustrating — people aren't used to being asked, and AI responses to questions are longer than responses to instructions. But by the end of the month, you'll notice something: the quality of thinking around you has changed. Not because you taught anyone anything. Because you asked the right questions.

"Responsibility cannot be delegated. But it can be cultivated, through the right questions."


Based on real experience from AI training workshops and organizational transformation. The Socratic method isn't a technique we invented — it's one we rediscovered when we stopped telling AI what to do and started asking it what to think.