
Mixture of Experts (MoE)

An architecture in which a model's feed-forward layers are divided into many specialized sub-networks ("experts"), and a learned router activates only a small subset of them for each input token. This lets a model grow to a very large total parameter count while keeping the compute cost per token low.
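The routing step can be sketched in a few lines. This is a minimal, illustrative top-k gating example, not any particular model's implementation: the dimensions, the single-matrix "experts", and the function name `moe_forward` are all assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes chosen for illustration only.
d_model, n_experts, top_k = 8, 4, 2

# Each "expert" is reduced to a single weight matrix here; in practice
# each would be a full feed-forward block.
expert_weights = [rng.standard_normal((d_model, d_model)) for _ in range(n_experts)]

# The router (gating network): a linear layer producing one logit per expert.
router_weights = rng.standard_normal((d_model, n_experts))

def moe_forward(x):
    """Route one token vector x through its top-k experts."""
    logits = x @ router_weights                # one score per expert
    top = np.argsort(logits)[-top_k:]          # indices of the k highest-scoring experts
    probs = np.exp(logits[top] - logits[top].max())
    probs /= probs.sum()                       # softmax over the selected experts only
    # Only the chosen experts run; the rest stay idle, which is where
    # the efficiency of MoE comes from.
    return sum(p * (x @ expert_weights[i]) for p, i in zip(probs, top))

y = moe_forward(rng.standard_normal(d_model))
```

With `top_k = 2` of 4 experts, each token pays the compute cost of only two expert forward passes, while the model as a whole stores the parameters of all four.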

Last updated 2026-05-12