Creating a Reusable Prompt Library

Prompting 🔴 Advanced ⏱️ 12 min read 📅 2026-02-24

You've spent hours perfecting a prompt, and it finally delivers exactly the results you need. Three weeks later, you need it again... and can't find it. This scenario is unfortunately universal. In 2025, with the proliferation of AI use cases in businesses, professional prompt management is no longer optional—it's critical infrastructure. This guide shows you how to create a reusable, versioned, and scalable prompt library.

🎯 Why a Prompt Library?

The Problem

Most professionals who use Claude or other LLMs daily have the same chaotic workflow:

  1. Write a prompt in the chat
  2. Get a good result
  3. Forget the exact prompt
  4. Start from scratch next time
  5. Get a different (often worse) result

It's like a developer writing code without ever saving it. Absurd, right?

Concrete Benefits

| Without Library | With Library |
|-----------------|--------------|
| 15-30 min to write a prompt | 2 min to load a template |
| Inconsistent results | Reproducible quality |
| Knowledge in one person's head | Shareable team knowledge |
| Impossible to measure improvement | Version history and performance tracking |
| Duplicated efforts | Reuse and composition |

Estimated ROI: For a team of 5 people using AI daily, a well-managed library saves 10-15h/week.

📁 Library Architecture

```
prompts/
├── README.md                 # Library documentation
├── _templates/               # Reusable base templates
│   ├── base-article.md
│   ├── base-email.md
│   └── base-analysis.md
├── marketing/
│   ├── prospecting-email.md
│   ├── followup-email.md
│   ├── landing-page.md
│   └── social-media/
│       ├── linkedin-post.md
│       └── twitter-thread.md
├── content/
│   ├── blog-seo.md
│   ├── newsletter.md
│   └── case-study.md
├── code/
│   ├── code-review.md
│   ├── refactoring.md
│   └── api-design.md
├── analysis/
│   ├── data-analysis.md
│   ├── competitor-analysis.md
│   └── market-research.md
└── system-prompts/
    ├── support-agent.md
    ├── seo-writer.md
    └── code-assistant.md
```
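A tree like this can also be indexed programmatically, for example to power a search or a prompt picker. A minimal sketch, assuming the `prompts/` layout above (the function name is illustrative):

```python
from pathlib import Path

def build_catalog(root: str) -> dict[str, list[str]]:
    """Map each top-level category directory to its prompt files (recursive)."""
    catalog: dict[str, list[str]] = {}
    root_path = Path(root)
    for md_file in sorted(root_path.rglob("*.md")):
        if md_file.name == "README.md":
            continue
        rel = md_file.relative_to(root_path)
        # The first path component is the category (e.g. "marketing")
        category = rel.parts[0] if len(rel.parts) > 1 else "(root)"
        catalog.setdefault(category, []).append(str(rel))
    return catalog
```

Run it once at startup and you have a dictionary you can filter by category or tag.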

Standard Prompt File Format

Each prompt in your library should follow a standard format:

```markdown
# B2B SaaS Prospecting Email

## Metadata
- **ID**: MKT-001
- **Version**: 2.3
- **Category**: Marketing > Email
- **Author**: Nicolas
- **Last Modified**: 2025-02-20
- **Tested Models**: Claude 3.5 Sonnet, GPT-4
- **Quality Score**: 8.5/10
- **Tags**: email, prospecting, B2B, SaaS

## Variables
| Variable | Description | Example |
|----------|-------------|---------|
| [[PROSPECT_NAME]] | Prospect's name | John Doe |
| [[COMPANY]] | Prospect's company | TechCorp |
| [[POSITION]] | Prospect's position | CTO |
| [[PAIN_POINT]] | Identified problem | Infrastructure costs |
| [[PRODUCT]] | Our product | CloudOptim |
| [[HOOK]] | Personalized hook | Your LinkedIn post about... |

## System Prompt
You are a senior B2B SaaS salesperson. You write prospecting emails that achieve >30% open rates and >10% response rates.

## Prompt
Write a prospecting email for [[PROSPECT_NAME]], [[POSITION]] at [[COMPANY]].

Context:
- Identified pain point: [[PAIN_POINT]]
- Our solution: [[PRODUCT]]
- Hook: [[HOOK]]

Constraints:
- Max 5 body lines
- Subject: max 6 words, curiosity without clickbait
- No "I'm reaching out", no "don't hesitate"
- CTA: one simple question
- Tone: professional but human

## Expected Output Example
[A concrete example of the ideal result]

## Version History
- v2.3 (2025-02-20): Added HOOK variable for personalization
- v2.2 (2025-02-10): Reduced to 5 lines (better response rate)
- v2.1 (2025-01-28): Added anti-cliché constraint
- v2.0 (2025-01-15): Complete redesign, AIDA structure
- v1.0 (2024-12-01): Initial version
```
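A standardized header also makes each prompt file machine-readable. A minimal sketch of a metadata parser, assuming the `- **Key**: value` bullet convention shown above:

```python
import re

def parse_metadata(prompt_file_text: str) -> dict[str, str]:
    """Extract '- **Key**: value' bullets under the '## Metadata' heading."""
    meta: dict[str, str] = {}
    in_metadata = False
    for line in prompt_file_text.splitlines():
        if line.startswith("## "):
            # Only collect bullets while inside the Metadata section
            in_metadata = line.strip() == "## Metadata"
            continue
        if in_metadata:
            match = re.match(r"-\s*\*\*(.+?)\*\*:\s*(.+)", line.strip())
            if match:
                meta[match.group(1)] = match.group(2)
    return meta
```

With this, a library index (ID, version, tags, quality score) can be rebuilt from the files themselves instead of being maintained by hand.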

🔧 Variable System

Variable Types

Variables make your prompts dynamic and reusable:

| Type | Syntax | Usage |
|------|--------|-------|
| Simple text | `[[NAME]]` | Names, titles, keywords |
| Long text | `[[CONTEXT]]` | Descriptions, briefs |
| Choice | `{{TONE:formal\|casual\|technical}}` | Predefined options |
| Number | `{{WORD_COUNT:500}}` | Values with defaults |
| Boolean | `{{INCLUDE_FAQ:yes}}` | Section activation |
| List | `{{KEY_POINTS[]}}` | Multiple items |
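Before filling a template it helps to know exactly which variables it expects. A minimal sketch that extracts both syntaxes from the table above (the function name is illustrative):

```python
import re

def list_variables(template: str) -> set[str]:
    """Collect variable names in both [[VAR]] and {{VAR...}} syntaxes."""
    simple = re.findall(r"\[\[(\w+)\]\]", template)       # [[NAME]]
    extended = re.findall(r"\{\{(\w+)", template)          # {{TONE:...}}, {{VAR[]}}
    return set(simple) | set(extended)
```

Comparing this set against the keys you pass in catches missing variables before you ever send the prompt.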

Variables in Practice

```markdown
# Template: SEO Blog Article

You are a senior SEO web writer.

Write an article for [[COMPANY]]'s blog.

**Topic**: [[TOPIC]]
**Main keyword**: [[MAIN_KEYWORD]]
**Secondary keywords**: {{SECONDARY_KEYWORDS[]}}
**Audience**: [[AUDIENCE]]
**Length**: {{WORD_COUNT:1500}} words
**Tone**: {{TONE:professional|conversational|expert}}

Structure:
- Optimized H1 title (max 60 characters)
- Introduction with reader's problem (150 words)
{{#IF INCLUDE_FAQ}}
- FAQ (5 schema.org friendly questions)
{{/IF}}
- Meta title (≤60 characters)
- Meta description (150-160 characters)
```
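Simple string replacement won't handle the `{{#IF ...}}...{{/IF}}` sections in this template; they need a dedicated pass. A minimal sketch, assuming the block syntax shown above (the flag semantics, true keeps the section, are my assumption):

```python
import re

def apply_conditionals(template: str, flags: dict[str, bool]) -> str:
    """Keep or drop {{#IF FLAG}}...{{/IF}} sections based on boolean flags."""
    pattern = re.compile(r"\{\{#IF (\w+)\}\}\n?(.*?)\{\{/IF\}\}\n?", re.DOTALL)

    def repl(match: re.Match) -> str:
        flag, body = match.group(1), match.group(2)
        return body if flags.get(flag, False) else ""

    return pattern.sub(repl, template)
```

Run this pass before variable substitution so that variables inside dropped sections never need values.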

Automating Replacement

With Python:

```python
import re

def fill_prompt(template: str, variables: dict) -> str:
    """Replace variables in a prompt template."""
    result = template
    for key, value in variables.items():
        if isinstance(value, list):
            value = ", ".join(value)
        result = result.replace(f"[[{key}]]", str(value))        # [[VAR]] syntax
        result = result.replace(f"{{{{{key}[]}}}}", str(value))  # {{VAR[]}} lists

    # Handle {{VAR:default}}: use the provided value, else fall back to the default
    result = re.sub(
        r'\{\{(\w+):([^}]+)\}\}',
        lambda m: str(variables.get(m.group(1), m.group(2))),
        result
    )
    return result

# Usage
template = open("prompts/content/blog-seo.md").read()
prompt = fill_prompt(template, {
    "COMPANY": "TechFlow",
    "TOPIC": "The 10 SaaS Pricing Mistakes",
    "MAIN_KEYWORD": "SaaS pricing",
    "SECONDARY_KEYWORDS": ["pricing", "freemium model", "pricing strategy"],
    "AUDIENCE": "SaaS startup founders",
    "TONE": "conversational"
})
```

With OpenClaw, this variable replacement can be automated in complete workflows, chaining multiple prompt templates.

🏷️ Categorization System

By Domain

```
marketing/       Everything related to acquisition and conversion
content/         Content writing (blog, social, newsletter)
code/            Development, review, architecture
analysis/        Data analysis, market, competition
operations/      Process, documentation, project management
sales/           Prospecting, qualification, closing
hr/              Recruitment, onboarding, evaluation
system-prompts/  System prompts for agents/chatbots
```

By Complexity Level

| Level | Description | Example |
|-------|-------------|---------|
| 🟢 Simple | Standalone prompt, no variables | Text summary |
| 🟡 Intermediate | Variables, some constraints | Templated email |
| 🔴 Advanced | Multi-prompt, conditions, chaining | Content pipeline |

By Target Model

Some prompts work better on certain models. Indicate this in metadata:

```markdown
## Compatibility
- Claude 3.5 Sonnet: ✅ Tested, score 9/10
- GPT-4 Turbo: ✅ Tested, score 7/10 (tends to be too long)
- Llama 3 70B: ⚠️ Works but loses constraints
- Mistral Large: ✅ Tested, score 8/10
```

Use OpenRouter to easily test your prompts on all these models.

📊 Versioning and Iteration

Git for Prompts

Treat your prompts like code. Use Git:

```bash
# Initialize repo
git init prompts-library
cd prompts-library

# Initial structure
mkdir -p {marketing,content,code,analysis,system-prompts,_templates}
touch README.md

# Initial commit
git add .
git commit -m "init: prompt library structure"

# New prompt version
git commit -m "feat(marketing): prospecting-email v2.3 - added HOOK variable"

# Tag for stable versions
git tag -a v1.0 -m "Version 1.0 - 25 tested and validated prompts"
```

Commit Naming Convention

```
feat(category): description     → New prompt or feature
fix(category): description      → Prompt correction
perf(category): description     → Performance improvement
test(category): description     → Added tests/examples
docs(category): description     → Documentation
```

Versioning Workflow

```
                    ┌──────────┐
                    │  Draft   │
                    │ (draft)  │
                    └────┬─────┘
                         │ Test with 10 cases
                         ▼
                    ┌──────────┐
                    │   Beta   │
                    │ (v0.x)   │
                    └────┬─────┘
                         │ Peer validation
                         ▼
                    ┌──────────┐
                    │ Stable   │
                    │ (v1.0)   │
                    └────┬─────┘
                         │ Real-world feedback
                         ▼
                    ┌──────────┐
                    │ Iteration│
                    │ (v1.x)   │
                    └──────────┘
```

A/B Testing Prompts

```python
import random
from datetime import datetime

class PromptABTest:
    def __init__(self, name: str, variants: dict):
        self.name = name
        self.variants = variants  # {"A": prompt_a, "B": prompt_b}
        self.results = {"A": [], "B": []}

    def get_variant(self) -> tuple:
        """Return a random variant as (name, prompt)."""
        variant = random.choice(["A", "B"])
        return variant, self.variants[variant]

    def log_result(self, variant: str, score: int, notes: str = ""):
        """Record a test result."""
        self.results[variant].append({
            "score": score,
            "notes": notes,
            "timestamp": datetime.now().isoformat()
        })

    def _average(self, variant: str) -> float:
        scores = [r["score"] for r in self.results[variant]]
        return sum(scores) / len(scores) if scores else 0.0

    def get_winner(self) -> tuple:
        """Return (best variant, average score A, average score B)."""
        avg_a, avg_b = self._average("A"), self._average("B")
        return ("A" if avg_a >= avg_b else "B"), avg_a, avg_b

# Usage
test = PromptABTest("email-subject", {"A": "Prompt v2.2...", "B": "Prompt v2.3..."})
variant, prompt = test.get_variant()
test.log_result(variant, score=8, notes="good tone")
winner, avg_a, avg_b = test.get_winner()
```

🔗 Prompt Composition and Inheritance

Base Templates (Inheritance)

Create base templates that specific prompts inherit from:

```markdown
# _templates/base-article.md (parent template)

You are an expert writer for [[PUBLICATION]].

Style:
- Short sentences (max 20 words)
- Active voice
- Concrete examples
- No unexplained jargon

Standard structure:
- Optimized title
- Introduction (hook + promise)
- Body structured in H2/H3
- Conclusion with CTA

[[SPECIFIC_INSTRUCTIONS]]
```

```markdown
# content/blog-tech.md (inherits from base-article)

{{INHERITS_FROM: _templates/base-article.md}}

PUBLICATION: TechFlow Blog
SPECIFIC_INSTRUCTIONS:
- Include code blocks when relevant
- Add "In summary" section with 3 bullet points
- Target junior-mid developer
- Link to official documentation
```
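The `{{INHERITS_FROM: ...}}` directive is a convention, not a standard, so it needs a small resolver. A minimal sketch that handles single-line `KEY: value` overrides only (multi-line values like the `SPECIFIC_INSTRUCTIONS` list above would need a richer parser; the loader callback is a stand-in for reading files from disk):

```python
import re

def resolve_inheritance(child: str, load_template) -> str:
    """Expand {{INHERITS_FROM: path}} by filling the parent's [[VARS]]
    with the child's single-line 'KEY: value' override lines."""
    match = re.search(r"\{\{INHERITS_FROM:\s*(.+?)\}\}", child)
    if not match:
        return child  # No parent: the child stands alone
    parent = load_template(match.group(1).strip())
    # Collect 'KEY: value' overrides (KEY in ALL_CAPS at start of line)
    overrides = dict(re.findall(r"^([A-Z_]+):\s*(.+)$", child, re.MULTILINE))
    for key, value in overrides.items():
        parent = parent.replace(f"[[{key}]]", value)
    return parent
```

The resolved text is then an ordinary template, ready for the normal variable-filling pass.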

Composition (Combining Prompts)

```markdown

Composite Prompt: Complete Monthly Report

Step 1 — Data Analysis

{{INCLUDE: analysis/data-analysis.md}}
Variables: DATASET=[[MONTHLY_DATA]], PERIOD=[[MONTH]]

Step 2 — Trend Identification

{{INCLUDE: analysis/trend-identification.md}}
Variables: ANALYZED_DATA=[[STEP_1_RESULT]]

Step 3 — Recommendations

{{INCLUDE: analysis/recommendations.md}}
Variables: INSIGHTS=[[STEP_2_RESULT]], BUDGET=[[BUDGET]]

Step 4 — Report Writing

{{INCLUDE: content/report-tem