LLM-based AI agents: beyond chat, planning, memory, tools, ReAct, multi-agent patterns
Overview of large language models: capabilities, how they work, prompting, RAG, key concepts, and limits
Prompt engineering: goals, zero-shot and few-shot, chain-of-thought, roles, step-back, and where good prompts come from
RAG (Retrieval-Augmented Generation): why add retrieval to LLMs, pipeline stages, chunking, naive vs advanced RAG
Derivative, gradient, and chain rule — the backbone of neural network training
What math knowledge is needed for AI and machine learning, and why
Slicing the circle into rings, unrolling into strips — and area πR² via a triangle