Ch 12: Prompt Engineering & In-Context Learning - Intermediate
Track: Practitioner
To run the code interactively, clone the repo and open chapters/chapter-12-prompt-engineering-and-in-context-learning/notebooks/02_advanced_prompting.ipynb in Jupyter.
Chapter 12: Prompt Engineering — Notebook 02 (Advanced Prompting)
This notebook covers chain-of-thought reasoning, self-consistency, ReAct loops, tool/function calling, JSON-mode parsing, and prompt patterns for retrieval cues — plus their limits.
What you'll learn
| Topic | Section |
|---|---|
| Chain-of-thought (CoT) reasoning prompts | §1 |
| Self-consistency (sample, vote) | §2 |
| ReAct: interleaved reasoning + actions | §3 |
| Tool / function calling and JSON-mode parsing | §4 |
| Retrieval cues and prompt patterns | §5 |
| Limits, failure modes, and when to stop adding prompt tricks | §6 |
Time estimate: 1.5–2 hours
Key concepts
- Chain-of-thought — Ask the model to "think step by step"; often improves accuracy on multi-step reasoning tasks.
- Self-consistency — Sample several CoT chains and majority-vote the final answer.
- ReAct — Alternate Thought → Action → Observation so the model can call tools mid-reasoning.
- Tool calling — Expose typed functions; the model emits a structured call you execute and feed back.
- Limits — Prompt tricks plateau; at some point you need RAG (Ch 13) or fine-tuning (Ch 14).
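The self-consistency idea above reduces to a majority vote over the final answers extracted from several sampled chains. A minimal sketch (the sampled answers here are hard-coded stand-ins for what a model would return; `self_consistency_vote` is a hypothetical helper, not a library function):

```python
from collections import Counter

def self_consistency_vote(sampled_answers):
    """Majority-vote over final answers parsed from several CoT samples.

    Returns the winning answer and the fraction of samples that agreed.
    """
    counts = Counter(sampled_answers)
    answer, votes = counts.most_common(1)[0]
    return answer, votes / len(sampled_answers)

# Hypothetical final answers parsed from 5 sampled CoT chains:
samples = ["42", "42", "41", "42", "40"]
best, agreement = self_consistency_vote(samples)
print(best, agreement)  # → 42 0.6
```

In practice you would generate the samples at a nonzero temperature and parse each chain's final answer before voting; ties and unparseable chains need a policy of their own.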
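The ReAct pattern can be illustrated with a toy loop in which a scripted "model" output stands in for real generations (the `lookup` tool, the `Action` syntax `tool[argument]`, and the script itself are all hypothetical):

```python
def lookup(term):
    """Stand-in for a search tool: a tiny hard-coded knowledge base."""
    kb = {"capital of France": "Paris"}
    return kb.get(term, "not found")

# What a model might emit turn by turn (scripted here, not generated):
SCRIPT = [
    ("Thought", "I should look up the capital of France."),
    ("Action", "lookup[capital of France]"),
    ("Thought", "The observation answers the question."),
    ("Answer", "Paris"),
]

def react_loop(script):
    """Replay Thought/Action steps, injecting an Observation after each Action."""
    transcript = []
    for kind, content in script:
        transcript.append(f"{kind}: {content}")
        if kind == "Action":
            tool, arg = content.rstrip("]").split("[", 1)
            obs = lookup(arg) if tool == "lookup" else "unknown tool"
            transcript.append(f"Observation: {obs}")
        elif kind == "Answer":
            return content, transcript
    return None, transcript

answer, log = react_loop(SCRIPT)
print(answer)  # → Paris
```

With a real model, each Observation is appended to the prompt and the model is re-queried for the next Thought/Action, rather than replaying a fixed script.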
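The tool-calling bullet implies a validate-and-dispatch step on your side: parse the model's JSON-mode output, check it against the tools you exposed, and execute. A minimal sketch, assuming a made-up tool registry and a stub `get_weather` in place of a real API:

```python
import json

def get_weather(city):
    return f"Sunny in {city}"  # stub in place of a real weather API

# Hypothetical registry: tool name -> (callable, required argument names).
TOOLS = {"get_weather": (get_weather, {"city"})}

def dispatch_tool_call(raw):
    """Parse a JSON tool call emitted by the model and execute it safely."""
    try:
        call = json.loads(raw)
    except json.JSONDecodeError:
        return "error: invalid JSON"
    entry = TOOLS.get(call.get("name"))
    if entry is None:
        return "error: unknown tool"
    fn, required = entry
    args = call.get("arguments", {})
    if set(args) != required:
        return "error: bad arguments"
    return fn(**args)

print(dispatch_tool_call('{"name": "get_weather", "arguments": {"city": "Oslo"}}'))
# → Sunny in Oslo
print(dispatch_tool_call('not json'))  # → error: invalid JSON
```

The result string is what you feed back to the model as the tool response; real systems also validate argument types, not just names.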
Run the full notebook for code and outputs.
Generated by Berta AI