What is the correct order to try when an LLM isn't performing well enough?
1) Prompt engineering (free, minutes): better system prompts, few-shot examples, chain-of-thought.
2) RAG (no training cost, updatable knowledge): retrieve relevant docs and include them in the context.
3) Fine-tuning (last resort, expensive): update model weights on your data.

Most teams jump to fine-tuning too early. Exhaust prompting and RAG first.
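The first two steps can be sketched together: a prompt that combines few-shot examples (step 1) with retrieved context (step 2). This is a minimal toy sketch, not a production retriever; the keyword-overlap scoring, the example documents, and the prompt template are all illustrative assumptions, and an actual LLM call is left out.

```python
# Toy sketch of prompting + RAG. The docs, few-shot examples, and
# word-overlap retriever below are illustrative assumptions only.

FEW_SHOT = """Q: What is our refund window?
A: 30 days from delivery.

Q: Do we ship internationally?
A: Yes, to 40+ countries."""

DOCS = {
    "refunds": "Refunds are issued within 30 days of delivery.",
    "shipping": "We ship to over 40 countries worldwide.",
}

def retrieve(question: str, docs: dict, k: int = 1) -> list:
    """Toy retriever: rank docs by word overlap with the question.
    Real systems use embedding similarity instead."""
    q_words = set(question.lower().split())
    scored = sorted(
        docs.values(),
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(question: str) -> str:
    """Combine retrieved context (RAG) with few-shot examples (prompting)."""
    context = "\n".join(retrieve(question, DOCS))
    return (
        "Answer using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Examples:\n{FEW_SHOT}\n\n"
        f"Q: {question}\nA:"
    )

print(build_prompt("How long do refunds take?"))
```

The assembled prompt would then be sent to whatever model you use; only if this pipeline still falls short does fine-tuning become worth its cost.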