Chain-of-thought (CoT) prompting makes LLMs reason explicitly before answering.
Simple trigger: "Let's think step by step"
Why it works: Intermediate steps create a "scratchpad" the model can re-read, so it can catch its own errors. Each step conditions the next.
When to use: Math, logic, multi-step problems. Less useful for simple factual recall.
Zero-shot CoT: Just add "Let's think step by step."
Few-shot CoT: Include examples showing reasoning chains.
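A minimal sketch of both variants as prompt builders. Function names (`build_zero_shot`, `build_few_shot`) and the `EXAMPLES` list are illustrative, not from any library; the only piece taken from the notes is the trigger phrase itself.

```python
# Worked example with a visible reasoning chain, used for few-shot CoT.
EXAMPLES = [
    {
        "question": "Ann has 3 apples and buys 2 more. How many does she have?",
        "reasoning": "Ann starts with 3 apples. She buys 2 more. 3 + 2 = 5.",
        "answer": "5",
    },
]

def build_zero_shot(question):
    """Zero-shot CoT: just append the trigger phrase after the question."""
    return f"Q: {question}\nA: Let's think step by step."

def build_few_shot(question):
    """Few-shot CoT: prepend worked examples showing reasoning chains,
    then ask the new question with the same trigger phrase."""
    parts = []
    for ex in EXAMPLES:
        parts.append(
            f"Q: {ex['question']}\n"
            f"A: {ex['reasoning']} Final answer: {ex['answer']}"
        )
    parts.append(f"Q: {question}\nA: Let's think step by step.")
    return "\n\n".join(parts)

print(build_few_shot("What is 17 * 4?"))
```

Either string would then be sent to the model as-is; the few-shot version trades prompt length for a demonstrated answer format.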
Interview question: "Design a prompt for solving word problems."
Use CoT. Show worked examples. Ask for step-by-step reasoning. Extract final answer at end.
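The last step, extracting the final answer, can be sketched with a regex, assuming the prompt's worked examples taught the model to end with a "Final answer: X" line (that marker is an assumption of this sketch, not a model guarantee):

```python
import re

def extract_final_answer(model_output):
    """Pull the answer from a CoT response ending in 'Final answer: X'.
    Returns None if the marker is missing."""
    match = re.search(r"Final answer:\s*(.+)", model_output)
    return match.group(1).strip() if match else None

# Hypothetical model response to a word problem.
response = (
    "The train covers 60 miles per hour, so in 2.5 hours it covers "
    "60 * 2.5 = 150 miles. Final answer: 150 miles"
)
print(extract_final_answer(response))  # -> 150 miles
```

Anchoring extraction to a fixed marker keeps the reasoning free-form while making the answer machine-readable.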