


Read online or run locally

You can read this content here on the web. To run the code interactively, either use the Playground or clone the repo and open chapters/chapter-12-prompt-engineering-and-in-context-learning/notebooks/01_prompt_basics.ipynb in Jupyter.


Chapter 12: Prompt Engineering — Notebook 01 (Prompt Basics)

This notebook covers the anatomy of a prompt; zero-shot and few-shot prompting as forms of in-context learning; the roles of system, user, and assistant messages; producing structured outputs with Pydantic; and how sensitive results are to wording, ordering, and examples.

What you'll learn

| Topic | Section |
| --- | --- |
| Prompt anatomy: instruction, context, input, output spec | §1 |
| Zero-shot, few-shot, and in-context learning | §2 |
| System vs user vs assistant messages | §3 |
| Structured outputs with Pydantic schemas | §4 |
| Sensitivity to wording, ordering, and examples | §5 |

Time estimate: 1.5–2 hours


Key concepts

  • Prompt anatomy — Separate instruction, context, input, and output spec for clarity and reuse (first sketch below).
  • In-context learning — A few examples in the prompt let the LLM "learn" a new task without weight updates (also in the first sketch).
  • System prompt — Persistent role/behavior instructions; user messages carry the task (second sketch).
  • Structured outputs — Constrain output to a schema (Pydantic / JSON) and validate before downstream use (third sketch).
  • Sensitivity — Small wording or ordering changes can swing behavior; measure, don't guess (final sketch).
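
As a preview of §1 and §2, here is a minimal sketch of prompt anatomy and few-shot in-context learning. The sentiment task, the constants, and build_prompt are illustrative inventions, not the notebook's actual code.

```python
# Preview of prompt anatomy (§1) and few-shot prompting (§2).
# The sentiment task, constants, and build_prompt are illustrative
# inventions, not the notebook's actual code.

INSTRUCTION = "Classify the sentiment of the review."
CONTEXT = "Reviews come from a consumer-electronics store."
OUTPUT_SPEC = "Answer with exactly one word: positive, negative, or neutral."

# Worked examples for in-context learning: the model "learns" the task
# from these at inference time, with no weight updates.
FEW_SHOT_EXAMPLES = [
    ("The battery died after two days.", "negative"),
    ("Does exactly what the listing promised.", "positive"),
]

def build_prompt(review: str, few_shot: bool = False) -> str:
    """Assemble instruction, context, output spec, optional examples, and input."""
    parts = [INSTRUCTION, CONTEXT, OUTPUT_SPEC]
    if few_shot:
        parts += [f"Review: {text}\nSentiment: {label}"
                  for text, label in FEW_SHOT_EXAMPLES]
    parts.append(f"Review: {review}\nSentiment:")
    return "\n\n".join(parts)

print(build_prompt("Great screen, but the speakers rattle.", few_shot=True))
```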

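For §3, a sketch of the system/user split in the OpenAI-style chat-messages format. The triage scenario is invented, and the client call is commented out because it needs the openai package, an API key, and a model name of your choosing.

```python
# Preview of the system/user split (§3), in OpenAI-style chat format.
# The triage scenario is invented; the client call is commented out
# because it needs the openai package, an API key, and a model name.

messages = [
    # System message: persistent role and behavior, set once per conversation.
    {"role": "system",
     "content": "You are a terse support triager. Reply with a category only."},
    # User message: the actual task for this turn.
    {"role": "user",
     "content": "My order arrived with a cracked screen. What now?"},
]

# from openai import OpenAI
# client = OpenAI()  # reads OPENAI_API_KEY from the environment
# response = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
# print(response.choices[0].message.content)
```
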
These sketches are previews only; run the full notebook in the chapter folder for the complete code and outputs.
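
Continuing the preview with §4's structured-output pattern: define a Pydantic schema and validate the model's reply before anything downstream consumes it. The Ticket schema and the raw JSON string standing in for a model reply are invented; model_validate_json is Pydantic v2.

```python
# Preview of structured outputs (§4): validate the model's reply against
# a Pydantic schema before anything downstream consumes it. The Ticket
# schema and the `raw` string (standing in for a model reply) are invented.

from pydantic import BaseModel, ValidationError

class Ticket(BaseModel):
    category: str   # e.g. "hardware", "billing"
    urgent: bool
    summary: str

# Pretend this JSON came back after prompting for
# "JSON with fields category, urgent, summary".
raw = '{"category": "hardware", "urgent": true, "summary": "Cracked screen on arrival"}'

try:
    ticket = Ticket.model_validate_json(raw)  # Pydantic v2: parse + validate
    print(ticket.category, ticket.urgent)
except ValidationError as err:
    # Off-schema output: retry, repair, or escalate instead of passing it on.
    print("Model output failed validation:", err)
```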

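Finally, §5's "measure, don't guess" in sketch form: compare two wordings of the same task on a small labeled set. call_llm is a hypothetical stand-in for whatever client you use; the phrasings and the tiny eval set are invented.

```python
# Preview of sensitivity (§5): score two wordings of the same task on a
# small labeled set instead of eyeballing one output. `call_llm` is a
# hypothetical stand-in for your client; phrasings and data are invented.

EVAL_SET = [
    ("The battery died after two days.", "negative"),
    ("Does exactly what the listing promised.", "positive"),
    ("It's fine, nothing special.", "neutral"),
]

PROMPTS = {
    "wording_a": "Classify the sentiment of this review as positive, negative, or neutral:\n{review}",
    "wording_b": "Review: {review}\nIs the reviewer happy? Answer positive, negative, or neutral.",
}

def call_llm(prompt: str) -> str:
    """Hypothetical model call; swap in your real client here."""
    raise NotImplementedError

def accuracy(template: str) -> float:
    hits = sum(
        call_llm(template.format(review=review)).strip().lower() == label
        for review, label in EVAL_SET
    )
    return hits / len(EVAL_SET)

# Uncomment once call_llm is wired up:
# for name, template in PROMPTS.items():
#     print(name, accuracy(template))  # small wording changes can move this
```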
