AI basics – prompt engineering

Prompt engineering: goals, zero-shot and few-shot, chain-of-thought, roles, step-back, and where good prompts come from

A «Bare minimum» article on how to phrase requests to large language models.

Goal and essence

A prompt is a short technical spec for the model. You state what to do, how to format the answer, and what to rely on. The clearer the spec, the less the model has to guess.

  • Quality. Clear instructions reduce vagueness and improve usefulness of text, code, or structured output.
  • Predictability. Fixed formats (lists, JSON, paragraph templates) and explicit constraints make outputs repeatable across runs.
  • Building blocks. Treat the prompt as a mini-spec: role, context, task, output format, examples (if needed), success criteria.
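The "prompt as mini-spec" idea can be sketched as a small template builder. This is a minimal illustration, not a standard API; the function and field names are made up for this example:

```python
def build_prompt(role, context, task, output_format, examples=None):
    """Assemble a prompt from the mini-spec blocks:
    role, context, task, output format, optional examples."""
    parts = [
        f"Role: {role}",
        f"Context: {context}",
        f"Task: {task}",
        f"Output format: {output_format}",
    ]
    if examples:
        # Examples go last so they sit closest to where the model answers.
        parts.append("Examples:\n" + "\n".join(examples))
    return "\n\n".join(parts)

prompt = build_prompt(
    role="technical editor",
    context="a blog-post draft about LLMs",
    task="shorten the draft to 200 words without losing key points",
    output_format="plain text, one paragraph",
)
```

Fixing the block order (role, context, task, format) is what makes outputs comparable across runs: you change one block at a time and see what it does.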

Zero-shot prompting

The task is given with no input–output examples. The model leans on pretraining plus your instruction in the current request.

  • Works well for simple, unambiguous tasks where the desired format is obvious or can be stated in one line.
  • Example: sentiment classification (“label as positive / neutral / negative”) with no labeled examples in the prompt.

If zero-shot drifts, few-shot examples or explicit step-by-step reasoning (CoT) usually help.
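The sentiment example above as a concrete zero-shot prompt; a minimal sketch, with the label set and reply format pinned down by the instruction alone (no labeled examples in context):

```python
def zero_shot_sentiment_prompt(text):
    # No input-output examples: the instruction alone
    # defines the task, the label set, and the reply format.
    return (
        "Classify the sentiment of the following review as exactly one of: "
        "positive, neutral, negative. Reply with the label only.\n\n"
        f"Review: {text}"
    )

prompt = zero_shot_sentiment_prompt("The battery dies in two hours.")
```

"Reply with the label only" is doing real work here: without it, models often pad the label with an explanation you then have to parse away.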

Few-shot prompting

In-context learning: you add one or more “input → gold output” pairs so the model picks up style, fields, and constraints.

  • Especially useful when you need a strict or unusual format — tables, JSON with fixed keys, report templates.
  • Examples act as a contract for the answer: fewer arbitrary interpretations.

Do not overload context: keep examples relevant, representative, and within the context window.
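A sketch of assembling such "input → gold output" pairs into a prompt; the helper and the date-normalization task are illustrative assumptions, not a fixed recipe:

```python
def few_shot_prompt(examples, query):
    """Build an in-context-learning prompt: each (input, gold output)
    pair acts as a contract for the style and format of the answer."""
    shots = "\n\n".join(f"Input: {x}\nOutput: {y}" for x, y in examples)
    # End with a bare "Output:" so the model completes in the same pattern.
    return f"{shots}\n\nInput: {query}\nOutput:"

examples = [
    ("2019-03-07", '{"year": 2019, "month": 3, "day": 7}'),
    ("07.03.2019", '{"year": 2019, "month": 3, "day": 7}'),
]
prompt = few_shot_prompt(examples, "March 7, 2019")
```

Note how the examples pin down the JSON keys without a single line of prose describing them.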

Chain-of-Thought (CoT)

Reasoning chain: the model writes out intermediate steps before the final answer, which tends to stabilize logic, arithmetic, and multi-step tasks.

  • Few-shot CoT: examples show not only the answer but the reasoning path — the model mimics that pattern.
  • Zero-shot CoT: phrases like “think step by step” / “explain your reasoning first, then answer” often suffice.
  • Uncertainty-routed CoT: explore multiple reasoning lines or alternatives when the task is ambiguous, then compare or pick a justified conclusion.

CoT lengthens responses and latency; for trivial tasks a short instruction without reasoning may be enough.
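The zero-shot CoT variant can be as simple as appending a trigger phrase; a minimal sketch (the exact wording of the trigger and the answer marker are choices, not requirements):

```python
def zero_shot_cot(question):
    # The trailing instruction asks for intermediate reasoning first,
    # and pins the final answer to a parseable last line.
    return (
        f"{question}\n\n"
        "Think step by step, then give the final answer on the last line "
        "as 'Answer: <value>'."
    )

prompt = zero_shot_cot(
    "A train leaves at 14:10 and arrives at 16:45. How long is the trip?"
)
```

Fixing the final-line format ("Answer: <value>") keeps the reasoning readable while leaving the part you actually extract machine-friendly.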

Where prompts come from

  • You define the goal. The primary source is your statement of task, audience, and quality bar.
  • LLM-generated drafts. Ask the model for a structured prompt (role, steps, format), then edit manually.
  • Reverse engineering. From a desired output (or a great response), reconstruct and refine what in the prompt made it work.

In practice people combine model drafts, hard constraints, and iteration on real outputs.
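The "LLM-generated drafts" route can start from a meta-prompt like this sketch (the wording is one possible phrasing, assumed for illustration):

```python
def draft_prompt_request(goal):
    """Ask the model to draft a structured prompt (role, steps, format)
    for a stated goal; the draft is then edited manually."""
    return (
        "Write a prompt for a large language model. The prompt must "
        "include: a role, step-by-step instructions, and an explicit "
        f"output format.\n\nGoal of the prompt: {goal}"
    )

meta = draft_prompt_request("summarize weekly reports for managers")
```

The model's draft is a starting point; the hard constraints (format, length, success criteria) still come from you in the editing pass.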

Role-based prompting

Explicit role and perspective: “you are an editor”, “economist for non-experts”, “comedian in the style of …”. That steers tone, depth, and granularity.

  • Especially helpful for open-ended work: explanations, creative writing, advice when there is no single “right” format.
  • Role examples: public speaker, domain expert, comedian, teacher — role changes vocabulary, structure, and how bold the model can be.

Role complements but does not replace a clear task and constraints; “you are an expert” without context helps less than expert + goal + format.
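The "role + goal + format" combination can be sketched as follows; the helper and its three fields are an illustrative assumption:

```python
def role_prompt(role, task, constraints):
    # A role alone is weak; pairing it with a concrete task
    # and explicit constraints is what steers the output.
    return (
        f"You are {role}.\n"
        f"Task: {task}\n"
        f"Constraints: {constraints}"
    )

prompt = role_prompt(
    "an economist explaining concepts to non-experts",
    "explain inflation in 150 words",
    "no jargon; include one everyday example",
)
```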

Step-back prompting

A CoT-style variation: first step back to general principles, definitions, or a standard method, then apply them to the specific case.

  • Start with guiding questions: which laws, patterns, or concepts matter for this task?
  • Then map onto your instance: data, constraints, desired output.

Useful when failures come from jumping to an answer without anchoring on the right background knowledge.
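The two stages above can be encoded directly in the instruction; a minimal sketch, with the two-step wording as one possible phrasing:

```python
def step_back_prompt(question):
    """Two-stage instruction: first surface the general principles,
    then apply them to the specific case."""
    return (
        "Step 1. Before answering, state the general principles, "
        "definitions, or standard methods relevant to this question.\n"
        "Step 2. Apply them to the specific case below and answer.\n\n"
        f"Question: {question}"
    )

prompt = step_back_prompt("Why does a bicycle stay upright while moving?")
```

The explicit "Step 1 / Step 2" split is the anchor: it forces the background knowledge into the context before the model commits to an answer.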