
Chain-of-Thought, Few-Shot, Tree-of-Thought: The Techniques That Work

Prompting 🟡 Intermediate ⏱️ 11 min read 📅 2026-02-24

Basic prompting techniques — assigning a role, providing context, defining a task — are an excellent starting point. But to get truly exceptional results with Claude or other LLMs, you need to master advanced techniques. Chain-of-Thought, Few-Shot, Tree-of-Thought: these methods, born from AI research, have become indispensable practical tools in 2025.

🧠 Understanding How LLMs Reason

Before diving into the techniques, let's understand why they work. Large language models (LLMs) don't "think" like we do. They predict the most likely next token. When you ask a complex question, the model can "jump" straight to an answer without going through intermediate steps — and get it wrong.

Advanced prompting techniques exploit a simple idea: by forcing the model to make its reasoning explicit, you significantly improve the quality of its responses. It's like the difference between a student who writes the answer to a math problem directly and one who shows all their work.

The Problem of Implicit Reasoning

Simple prompt:
"Marie has 4 apples. She gives half to Jean, then buys
5 apples. How many does she have?"

Often correct for this simple example, but for more complex
problems with multiple steps, the model can easily get it
wrong without explicit reasoning.

⛓️ Chain-of-Thought (CoT): Thinking Step by Step

The Principle

Chain-of-Thought, introduced by Wei et al. in 2022, involves asking the model to break down its reasoning into explicit steps before giving its final answer.

How to Use It

The simplest form is to add "Think step by step" to your prompt:

Prompt WITHOUT CoT:
"A store offers a 20% discount. The item costs $85.
With 20% sales tax, what is the final price?"

→ Risk of error: the model may apply calculations
  in the wrong order.

Prompt WITH CoT:
"A store offers a 20% discount. The item costs $85.
With 20% sales tax, what is the final price?

Reason step by step:
1. First calculate the price after discount
2. Then apply the sales tax
3. Give the final price"
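Both forms can be produced by one small helper that wraps any question; a minimal Python sketch (`add_cot` is an illustrative name, not a library function):

```python
# Minimal zero-shot / manual CoT wrapper: append a "reason step by step"
# instruction, with optional explicit numbered steps.

def add_cot(question, steps=None):
    """Return the question with a Chain-of-Thought instruction appended."""
    prompt = question.strip() + "\n\nReason step by step"
    if steps:
        # Manual CoT: you control the reasoning structure.
        prompt += ":\n" + "\n".join(f"{i}. {s}" for i, s in enumerate(steps, 1))
    else:
        # Zero-shot CoT: just ask for explicit reasoning.
        prompt += " before giving your final answer."
    return prompt

prompt = add_cot(
    "A store offers a 20% discount. The item costs $85. "
    "With 20% sales tax, what is the final price?",
    steps=["First calculate the price after discount",
           "Then apply the sales tax",
           "Give the final price"],
)
print(prompt)
```

The same helper covers both variants: omit `steps` for zero-shot CoT, pass them for manual CoT.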

Advanced CoT: Structuring the Steps

For complex problems, explicitly structure the reasoning steps:

"You are a business strategy consultant.

A client wants to launch a SaaS product. Here's the data:
- Target market: 50,000 SMBs
- Planned price: $49/month
- Estimated customer acquisition cost: $200
- Estimated monthly churn: 5%
- Initial marketing budget: $100,000

Analyze this project's viability by following these steps:

Step 1 — Calculate the addressable market
Step 2 — Project acquisitions with the marketing budget
Step 3 — Calculate the LTV (Lifetime Value) per customer
Step 4 — LTV/CAC ratio and interpretation
Step 5 — Revenue projection at 12 months
Step 6 — Verdict and recommendations

For each step, show your calculations and reasoning."
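To check what a correct answer to Steps 2-4 should contain, the arithmetic can be verified directly, using the common simplification LTV ≈ monthly price / monthly churn (variable names are illustrative, not part of any framework):

```python
# Worked numbers for Steps 2-4 of the SaaS prompt above.
price, cac, churn, budget = 49.0, 200.0, 0.05, 100_000.0

customers = budget / cac   # Step 2: 100,000 / 200 = 500 acquisitions
ltv = price / churn        # Step 3: 49 / 0.05 ≈ $980 per customer
ratio = ltv / cac          # Step 4: ≈ 4.9 (above 3 is usually considered healthy)

print(round(customers), round(ltv), round(ratio, 1))  # → 500 980 4.9
```

If the model's step-by-step answer diverges from these figures, you know exactly which step went wrong, which is the whole point of structuring the reasoning.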

Zero-shot CoT vs Manual CoT

| Type | Description | Example |
|------|-------------|---------|
| Zero-shot CoT | Simply add "step by step" | "Solve this problem step by step" |
| Manual CoT | Explicitly define the steps | "Step 1: ... Step 2: ..." |
| Auto-CoT | The model generates its own steps | "Break this problem into sub-problems" |

Manual CoT is generally the most reliable because you control the reasoning structure.

When to Use CoT

  • ✅ Mathematical and logical problems
  • ✅ Multi-factor analysis
  • ✅ Code debugging
  • ✅ Complex decision-making
  • ✅ Legal or medical reasoning
  • ❌ Not needed for simple tasks (translation, rephrasing)
  • ❌ Can slow down creative tasks

🎯 Few-Shot Prompting: Learning by Example

The Principle

Few-Shot prompting involves providing concrete examples of input/output pairs before asking your question. The model understands the pattern and applies it to your case.

Few-Shot Variants

| Variant | Number of examples | Use case |
|---------|--------------------|----------|
| Zero-shot | 0 examples | Simple tasks, clear instructions |
| One-shot | 1 example | Simple pattern to reproduce |
| Few-shot | 2-5 examples | Specific tasks, precise format |
| Many-shot | 5+ examples | Highly specialized tasks |

Few-Shot in Practice

Example 1: Sentiment Classification

"Classify the sentiment of each customer review.

Examples:
Review: 'Fast delivery, product as described, I recommend!'
Sentiment: Positive
Category: Overall satisfaction

Review: 'Two weeks of waiting to receive a broken product.'
Sentiment: Negative
Category: Delivery + Product quality

Review: 'The product is decent for the price.'
Sentiment: Neutral
Category: Value for money

Now classify:
Review: 'Beautiful interface but crashes every 5 minutes.'
Sentiment: ?
Category: ?"
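The example structure above lends itself to a small builder that keeps the format consistent automatically; a minimal sketch with illustrative names (`few_shot_prompt` is not a library function):

```python
# Build a Few-Shot classification prompt from labeled examples plus one query,
# keeping the exact same Review/Sentiment/Category layout for every example.

def few_shot_prompt(task, examples, query):
    blocks = [task, "", "Examples:"]
    for review, sentiment, category in examples:
        blocks += [f"Review: '{review}'",
                   f"Sentiment: {sentiment}",
                   f"Category: {category}", ""]
    blocks += ["Now classify:", f"Review: '{query}'",
               "Sentiment: ?", "Category: ?"]
    return "\n".join(blocks)

examples = [
    ("Fast delivery, product as described, I recommend!",
     "Positive", "Overall satisfaction"),
    ("Two weeks of waiting to receive a broken product.",
     "Negative", "Delivery + Product quality"),
    ("The product is decent for the price.",
     "Neutral", "Value for money"),
]
prompt = few_shot_prompt("Classify the sentiment of each customer review.",
                         examples,
                         "Beautiful interface but crashes every 5 minutes.")
print(prompt)
```

Because the examples live in a plain list, you can swap them in and out to test which ones work best (tip 5 below).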

Example 2: Format Transformation

"Convert these product descriptions into spec sheets.

Example:
Description: 'Our 25L urban backpack is made of water-resistant
recycled nylon, with a padded 15-inch laptop compartment
and adjustable ergonomic straps.'

Spec sheet:
| Feature | Value |
|---------|-------|
| Type | Urban backpack |
| Capacity | 25L |
| Material | Recycled nylon |
| Water resistance | Yes |
| Laptop compartment | 15 inches, padded |
| Straps | Ergonomic, adjustable |

Now convert:
Description: 'This portable Bluetooth speaker offers 20h
of battery life, 360° sound thanks to its 4 speakers, an
IP67 certification and a featherweight 300g. Connects
via Bluetooth 5.3.'"

Example 3: Specific Writing Style

"Rewrite these technical phrases into accessible language
for a mainstream blog.

Example 1:
Technical: 'Microservices architecture decouples application
components to enable independent horizontal scaling.'
Blog: 'Instead of having one big monolithic piece of software,
you break it into small independent modules. Result: if one
part needs more power, you only scale that one.'

Example 2:
Technical: 'The model uses multi-head attention to weight
relevant tokens in the context window.'
Blog: 'The AI reads your text and automatically identifies
which words are most important to understand your request,
kind of like when you highlight key passages in a book.'

Now rewrite:
Technical: 'RLHF fine-tuning aligns LLM outputs with human
preferences via a reward model trained on pairwise comparisons.'"

Few-Shot Tips

  1. Choose diverse examples — covering edge cases
  2. Keep format consistency between examples
  3. Order from simple to complex — the model better understands progression
  4. 3-5 examples are usually enough — beyond that, returns diminish
  5. Test with different examples — some work better than others

🌳 Tree-of-Thought (ToT): Exploring Multiple Paths

The Principle

Tree-of-Thought, proposed by Yao et al. in 2023, goes further than CoT. Instead of following a single reasoning path, the model explores multiple approaches in parallel, evaluates each one, and selects the best.

It's like a chess player who considers several possible moves before choosing one.

How to Implement It

"You need to solve this planning problem:

Organize a 200-person tech conference in 3 months
with a $30,000 budget in Paris.

Explore 3 different approaches:

APPROACH A — Premium venue, basic content
[Develop this approach: venue, format, detailed budget]
Evaluate: advantages, risks, score out of 10

APPROACH B — Modest venue, high-profile speakers
[Develop this approach: venue, format, detailed budget]
Evaluate: advantages, risks, score out of 10

APPROACH C — Hybrid format (in-person + streaming)
[Develop this approach: venue, format, detailed budget]
Evaluate: advantages, risks, score out of 10

Summary: compare the 3 approaches in a table and
recommend the best one with justification."
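The same develop/evaluate scaffold can be generated for any number of branches; a minimal sketch (function name and wording are illustrative):

```python
# Generate a Tree-of-Thought prompt: one identically-structured block per
# approach, plus a final comparison-and-recommendation step.

def tot_prompt(problem, approaches):
    parts = [f"You need to solve this problem:\n\n{problem}\n",
             f"Explore {len(approaches)} different approaches:\n"]
    for letter, name in zip("ABCDEFGH", approaches):
        parts.append(f"APPROACH {letter} — {name}\n"
                     "[Develop this approach: format, detailed budget]\n"
                     "Evaluate: advantages, risks, score out of 10\n")
    parts.append("Summary: compare the approaches in a table and "
                 "recommend the best one with justification.")
    return "\n".join(parts)

print(tot_prompt(
    "Organize a 200-person tech conference in 3 months "
    "with a $30,000 budget in Paris.",
    ["Premium venue, basic content",
     "Modest venue, high-profile speakers",
     "Hybrid format (in-person + streaming)"],
))
```

Keeping every branch structurally identical matters: it forces the model to evaluate each approach on the same criteria, which makes the final comparison meaningful.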

ToT with Self-Evaluation

"Propose 3 different marketing strategies for launching
a meditation mobile app.

For each strategy:
1. Describe the approach in detail (channels, budget, timeline)
2. Identify 3 strengths and 3 weaknesses
3. Estimate the potential ROI
4. Assign a feasibility score /10

After exploring all 3 strategies:
- Eliminate the weakest one, explaining why
- Propose a hybrid strategy combining the best
  elements of the remaining two
- Detail the final action plan"

When to Use ToT

| Situation | CoT enough? | ToT necessary? |
|-----------|-------------|----------------|
| Math calculation | ✅ | ❌ |
| Strategic choice | ⚠️ | ✅ |
| Single problem solving | ✅ | ❌ |
| Complex planning | ⚠️ | ✅ |
| Creativity / brainstorming | ❌ | ✅ |
| Arbitrating between options | ⚠️ | ✅ |

📊 Overall Comparison Table

| Technique | Complexity | Tokens used | Best for | Reliability |
|-----------|------------|-------------|----------|-------------|
| Zero-shot | ⭐ | Low | Simple tasks | Variable |
| Few-shot | ⭐⭐ | Medium | Precise format, classification | High |
| CoT | ⭐⭐ | Medium-high | Logical reasoning | High |
| Few-shot + CoT | ⭐⭐⭐ | High | Complex problems with patterns | Very high |
| ToT | ⭐⭐⭐⭐ | Very high | Strategic decisions | Very high |

Cost vs Quality

A crucial point in 2025: these techniques consume more tokens (and therefore cost more). Use OpenRouter to optimize your costs by automatically routing to the most suitable model.

| Technique | Average tokens (prompt) | Relative cost |
|-----------|-------------------------|---------------|
| Zero-shot | 50-200 | 1x |
| Few-shot (3 examples) | 300-800 | 3-4x |
| Detailed CoT | 200-500 | 2-3x |
| ToT (3 branches) | 500-1500 | 5-8x |

🔬 Bonus Techniques: Beyond the Classics

Self-Consistency

Run the same CoT prompt multiple times and take the majority answer. Particularly useful for math problems.

"Solve this problem 3 times with different approaches.
If all 3 answers agree, that's the right one.
If they diverge, analyze why and determine the most reliable.

Problem: [your problem]

Approach 1: algebraic solution
Approach 2: estimation/verification solution
Approach 3: decomposition solution"
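Outside a single prompt, self-consistency is usually implemented by sampling the model several times and taking a majority vote; a minimal sketch where `ask` is a stub standing in for repeated LLM calls (in practice, the same CoT prompt sampled at temperature > 0):

```python
# Self-consistency: run the same prompt n times and keep the majority answer.
from collections import Counter

def self_consistent(ask, prompt, n=5):
    answers = [ask(prompt) for _ in range(n)]
    best, count = Counter(answers).most_common(1)[0]
    return best, count / n   # majority answer and agreement rate

# Stub: pretend 4 of 5 samples reached the discount example's correct
# final price ($85 * 0.8 * 1.2 = $81.60), and one sample got it wrong.
samples = iter(["$81.60", "$81.60", "$80.00", "$81.60", "$81.60"])
answer, agreement = self_consistent(lambda p: next(samples), "final price?", n=5)
print(answer, agreement)  # → $81.60 0.8
```

A low agreement rate is itself useful signal: it tells you the problem is ambiguous or the prompt needs restructuring.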

Prompt Chaining

Rather than a single mega-prompt, break it down into a series of prompts that chain together. This is exactly what OpenClaw lets you do automatically.

Prompt 1: "Analyze this client brief and extract the 5 key points"
→ Result stored

Prompt 2: "From these 5 points, generate 3 creative proposals"
→ Result stored

Prompt 3: "Evaluate these 3 proposals and develop the best one
into a detailed plan"
→ Final result
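The chain above can be sketched as three function calls, each feeding the previous result forward; `fake_llm` is a stub, not a real API:

```python
# Prompt chaining: each step is a separate prompt whose input is the
# previous step's output.

def chain(llm, brief):
    points = llm(f"Analyze this client brief and extract the 5 key points:\n{brief}")
    ideas = llm(f"From these 5 points, generate 3 creative proposals:\n{points}")
    plan = llm(f"Evaluate these 3 proposals and develop the best one "
               f"into a detailed plan:\n{ideas}")
    return plan

log = []  # record each intermediate prompt so the chaining is visible

def fake_llm(prompt):
    log.append(prompt)
    return f"<answer {len(log)}>"

result = chain(fake_llm, "Launch a meditation app.")
print(result)  # → <answer 3>
```

The design benefit over one mega-prompt: each intermediate result can be inspected, cached, or corrected before the next step runs.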

ReAct (Reasoning + Acting)

Combines reasoning and actions. The model thinks, acts, observes the result, then thinks again.

"To answer this question, alternate between reflection and action:

Question: [your complex question]

Thought 1: What do I need to find/understand?
Action 1: [What you would do to get the info]
Observation 1: [What you would get]
Thought 2: What does this observation tell me?
Action 2: [Next step]
...
Final answer: [Summary]"
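The Thought/Action/Observation loop can also be driven by code; a minimal sketch where `think` and `act` are stubs standing in for a model call and a tool call:

```python
# ReAct loop: alternate Thought / Action / Observation turns until the
# "model" (here a stub) has no action left, then emit the final answer.

def react_loop(think, act, question, max_turns=5):
    transcript = [f"Question: {question}"]
    for turn in range(1, max_turns + 1):
        thought, action = think(transcript)
        transcript.append(f"Thought {turn}: {thought}")
        if action is None:                       # nothing left to do: answer
            transcript.append(f"Final answer: {thought}")
            break
        transcript.append(f"Action {turn}: {action}")
        transcript.append(f"Observation {turn}: {act(action)}")
    return "\n".join(transcript)

# Stub run: one lookup, then the answer.
steps = iter([("I need the capital of France", "lookup('France')"),
              ("Paris", None)])
out = react_loop(lambda t: next(steps), lambda a: "capital = Paris",
                 "What is the capital of France?")
print(out)
```

In a real agent, `act` would execute a tool (search, calculator, code runner) and `think` would be an LLM call that reads the transcript so far.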

🎯 Quick Choice Guide

Use this decision tree:

Is your task simple (translation, short summary)?
   YES → Zero-shot is enough
   NO → next question

Do you have a precise format to reproduce?
   YES → Few-shot (2-5 examples)
   NO → next question

Does the task require logical reasoning?
   YES → Chain-of-Thought
   NO → next question

Do you need to compare multiple options/strategies?
   YES → Tree-of-Thought
   NO → next question

Is the task multi-step with dependencies?
   YES → Prompt Chaining (OpenClaw)
   NO → Combine Few-shot + CoT
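The decision tree above can be written as a plain function (the flag names are illustrative):

```python
# The quick-choice decision tree as code: checks run top to bottom,
# mirroring the order of the questions above.

def pick_technique(simple=False, fixed_format=False, needs_reasoning=False,
                   compare_options=False, multi_step=False):
    if simple:
        return "Zero-shot"
    if fixed_format:
        return "Few-shot (2-5 examples)"
    if needs_reasoning:
        return "Chain-of-Thought"
    if compare_options:
        return "Tree-of-Thought"
    if multi_step:
        return "Prompt Chaining"
    return "Few-shot + CoT"

print(pick_technique(needs_reasoning=True))   # → Chain-of-Thought
print(pick_technique(compare_options=True))   # → Tree-of-Thought
```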

💡 Combining Techniques

The real power comes from combining them. Here's an example mixing Few-shot + CoT + self-evaluation:

"You are a UX design expert.

Here's how I analyze interface problems:

Example:
Problem: 'Users can't find the signup button'
Step 1 — Diagnosis: The button is at the bottom of the page,
color similar to the background, no contrast
Step 2 — UX principle violated: Visibility (Nielsen #1)
Step 3 — Solution: Button at the top, contrasting color, larger size
Evaluation: Estimated impact → +40% visibility, effort → low

Now analyze this problem using the same method:
Problem: 'Cart abandonment rate is 78%'

Propose 3 solutions, evaluate each, and recommend the best one."

Advanced prompting techniques aren't abstract theory — they're concrete tools that produce measurable results. Start with CoT for your reasoning tasks, add Few-shot when you have a format to reproduce, and move to ToT for your strategic decisions.

With practice, these techniques will become as natural as writing an email. And with tools like OpenClaw to orchestrate prompt chains, you can automate complex workflows that combine all these approaches.