Mixture of Experts (MoE)
An architecture that divides a model into many specialized sub-networks ("experts") and uses a learned router to activate only a small subset of them for each input, so very large models can be trained and served without running every parameter on every token.
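A minimal sketch of the idea in PyTorch, assuming a token-level layer with a top-k router; the dimensions, expert count, and `top_k` value are illustrative choices, not taken from any particular model.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoELayer(nn.Module):
    def __init__(self, d_model=64, d_hidden=256, num_experts=8, top_k=2):
        super().__init__()
        # Each expert is a small independent feed-forward network.
        self.experts = nn.ModuleList(
            nn.Sequential(
                nn.Linear(d_model, d_hidden),
                nn.GELU(),
                nn.Linear(d_hidden, d_model),
            )
            for _ in range(num_experts)
        )
        # The router scores every expert for every token.
        self.router = nn.Linear(d_model, num_experts)
        self.top_k = top_k

    def forward(self, x):
        # x: (num_tokens, d_model)
        logits = self.router(x)                              # (num_tokens, num_experts)
        weights, indices = logits.topk(self.top_k, dim=-1)   # keep only the top-k experts per token
        weights = F.softmax(weights, dim=-1)                 # normalize weights over the chosen experts
        out = torch.zeros_like(x)
        # Only the selected experts run for each token; the rest stay idle.
        for e, expert in enumerate(self.experts):
            token_idx, slot = (indices == e).nonzero(as_tuple=True)
            if token_idx.numel() == 0:
                continue
            out[token_idx] += weights[token_idx, slot].unsqueeze(-1) * expert(x[token_idx])
        return out

# Usage: route a batch of 10 token embeddings through the layer.
layer = MoELayer()
tokens = torch.randn(10, 64)
print(layer(tokens).shape)  # torch.Size([10, 64])
```

With 8 experts and `top_k=2`, each token touches only a quarter of the layer's expert parameters per forward pass, which is the source of the efficiency described above.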
Last updated 2026-05-12