models

Context Window

The maximum amount of text an LLM can attend to at one time, measured in tokens. The window covers both the input prompt and any output the model generates; text that falls outside it is invisible to the model.
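A minimal sketch of what happens when text exceeds the window: older tokens are dropped so the most recent ones fit. The whitespace tokenizer and the 8-token limit here are illustrative assumptions; real LLMs use subword tokenizers (e.g. BPE), so actual token counts differ.

```python
CONTEXT_WINDOW = 8  # hypothetical limit, in tokens

def tokenize(text: str) -> list[str]:
    # Stand-in tokenizer: one token per whitespace-separated word.
    return text.split()

def truncate_to_window(text: str, limit: int = CONTEXT_WINDOW) -> str:
    # Keep only the most recent `limit` tokens, as a chat client
    # might when a conversation outgrows the window.
    tokens = tokenize(text)
    return " ".join(tokens[-limit:])

print(truncate_to_window("one two three four five six seven eight nine ten"))
# → three four five six seven eight nine ten
```

Because the window is fixed, long conversations silently lose their earliest turns once the token count passes the limit.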

Last updated 2026-05-12