
January 8, 2026

LLMs contain a LOT of parameters. But what’s a parameter?

They’re the mysterious numbers that make your favorite AI models tick. What are they and what do they do?


TL;DR

  • Parameters in LLMs are adjustable values that control the model's output, analogous to settings on a machine.
  • When training an LLM, parameters are initialized randomly and then updated iteratively to reduce errors.
  • Key parameter types include embeddings (numerical representations of words), weights (strength of connections), and biases (threshold adjustments).
  • Embeddings represent words as lists of numbers, capturing meaning and relationships in a high-dimensional space.
  • Weights and biases are crucial for processing text contextually within neural networks.
  • Sampling hyperparameters like temperature, top-p, and top-k influence the model's word choice at generation time, shaping how random or focused the output is.
  • Smaller models can outperform larger ones through increased training data, techniques like overtraining and distillation, and architectural innovations like 'mixture of experts'.
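To make the embeddings bullet concrete, here is a toy sketch of how words-as-number-lists capture relationships. The four-dimensional vectors below are invented for illustration (real models use hundreds or thousands of dimensions, learned during training), but the idea is the same: related words point in similar directions, which cosine similarity measures.

```python
import math

# Toy 4-dimensional embeddings (illustrative values, not from a real model).
embeddings = {
    "king":  [0.8, 0.6, 0.1, 0.2],
    "queen": [0.7, 0.7, 0.1, 0.3],
    "apple": [0.1, 0.2, 0.9, 0.8],
}

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: closer to 1.0 = more similar."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Related words sit closer together in embedding space.
print(cosine_similarity(embeddings["king"], embeddings["queen"]))  # high
print(cosine_similarity(embeddings["king"], embeddings["apple"]))  # low
```

With these toy numbers, "king" and "queen" score far higher than "king" and "apple", which is the sense in which embeddings "capture meaning" as a geometric relationship.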
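The temperature / top-p / top-k bullet can also be sketched in code. The function below is a minimal, assumed implementation of how these settings filter a model's raw scores (logits) before one word is picked; the `logits` dictionary and token names are made up for the example.

```python
import math
import random

def sample_next_token(logits, temperature=1.0, top_k=None, top_p=1.0):
    """Pick the next token from raw scores, the way decoding settings do.

    temperature < 1.0 sharpens the distribution, > 1.0 flattens it;
    top_k keeps only the k highest-scoring tokens;
    top_p keeps the smallest set of tokens whose probability sums to p.
    """
    # Temperature rescales the logits before the softmax.
    scaled = {tok: score / temperature for tok, score in logits.items()}

    # Softmax turns scores into probabilities (max subtracted for stability).
    max_s = max(scaled.values())
    exps = {tok: math.exp(s - max_s) for tok, s in scaled.items()}
    total = sum(exps.values())
    probs = sorted(((tok, e / total) for tok, e in exps.items()),
                   key=lambda kv: kv[1], reverse=True)

    # top-k: keep only the k most likely tokens.
    if top_k is not None:
        probs = probs[:top_k]

    # top-p (nucleus): keep tokens until cumulative probability reaches p.
    kept, cumulative = [], 0.0
    for tok, p in probs:
        kept.append((tok, p))
        cumulative += p
        if cumulative >= top_p:
            break

    # Renormalize what's left and draw one token at random.
    norm = sum(p for _, p in kept)
    tokens = [tok for tok, _ in kept]
    weights = [p / norm for _, p in kept]
    return random.choices(tokens, weights=weights)[0]

logits = {"cat": 2.0, "dog": 1.5, "car": 0.1}
print(sample_next_token(logits, temperature=0.7, top_k=2))
```

Note these are knobs you turn at generation time, not learned parameters: the same trained weights can produce cautious or creative text depending on how they are set.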

