Chapter 12: Prompt Engineering & In-Context Learning¶
Design inputs that get reliable, useful behavior from LLMs—prompt anatomy, zero/few-shot, chain-of-thought, ReAct, structured outputs, evaluation, injection defenses, and a versioned prompt registry.
Metadata¶
| Field | Value |
|---|---|
| Track | Practitioner |
| Time | 6 hours |
| Prerequisites | Chapter 11 (LLMs & Transformers) and Chapter 10 (NLP Basics) |
Learning Objectives¶
- Decompose a prompt into instruction, context, input, and output spec
- Apply zero-shot, few-shot, and in-context learning patterns
- Use chain-of-thought, self-consistency, ReAct, and tool/function calling
- Produce structured outputs with Pydantic schemas and safe parsers
- Evaluate prompts with golden datasets, graders, and A/B tests with CIs
- Defend against prompt injection and ship versioned prompts to production
What's Included¶
Notebooks¶
| Notebook | Description |
|---|---|
01_prompt_basics.ipynb | Prompt anatomy, zero/few-shot, structured outputs, sensitivity to wording |
02_advanced_prompting.ipynb | Chain-of-thought, self-consistency, ReAct, tool calling, JSON mode |
03_prompt_systems.ipynb | Evaluation, A/B testing, injection defenses, registry, observability |
Scripts¶
config.py— Chapter config, mock-LLM toggle, registry pathsprompt_templates.py— Reusable Jinja-style templates for zero-shot, few-shot, CoT, ReActllm_clients.py—BaseLLMClient,MockLLMClient, optional adapter for OpenAI / Anthropicevaluation_utils.py— Golden datasets, graders, A/B tester with bootstrap CIs
Exercises¶
- Problem Set 1 (notebook) — Rewrite a vague prompt, build few-shot examples, structured-output schema, classify a tricky example, count tokens, parse JSON
- Problem Set 2 (notebook) — Self-consistency, eval harness, injection detection, A/B test, ReAct loop, versioned registry
- Solutions — In
exercises/solutions/(notebooks andsolutions.pyfor CI)
Diagrams (Mermaid)¶
prompt_anatomy.mermaid,chain_of_thought.mermaid,evaluation_loop.mermaid
Read Online¶
- 12.1 Introduction — Prompt anatomy, zero/few-shot, in-context learning, structured outputs
- 12.2 Intermediate — Chain-of-thought, self-consistency, ReAct, tool/function calling
- 12.3 Advanced — Evaluation, A/B tests, injection defenses, versioning, production
Or try the code in the Playground.
How to Use This Chapter¶
Quick Start
Follow these steps to get coding in minutes.
1. Clone and install dependencies
git clone https://github.com/luigipascal/berta-chapters.git
cd berta-chapters
pip install -r requirements.txt
2. Navigate to the chapter
3. (Optional) Wire up a real provider
pip install openai anthropic
# All notebooks default to the bundled MockLLMClient — no API keys required.
4. Launch Jupyter
GitHub Folder
All chapter materials live in: chapters/chapter-12-prompt-engineering-and-in-context-learning/
Created by Luigi Pascal Rondanini | Generated by Berta AI