Emergence
Emergence describes how large language models (LLMs) gain new abilities as they scale, often suddenly and unexpectedly. Unlike the gradual improvements predicted by scaling laws, emergent skills are absent in smaller models and cannot be predicted simply by extrapolating from their performance.
These abilities appear sharply once a model reaches a critical size or dataset coverage. Examples include complex arithmetic reasoning and in-context learning, where a model adapts to a new task from examples given within the prompt.
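To make in-context learning concrete, here is a minimal sketch of how a few-shot prompt is assembled. The function name, the toy word-reversal task, and the `Input:`/`Output:` format are illustrative choices, not anything from a specific model's API; the point is only that the task is conveyed through examples in the prompt rather than through weight updates.

```python
def build_few_shot_prompt(examples, query):
    """Pack (input, output) demonstration pairs plus a new query
    into a single prompt string for an LLM."""
    blocks = [f"Input: {x}\nOutput: {y}" for x, y in examples]
    # The final block leaves Output blank for the model to complete.
    blocks.append(f"Input: {query}\nOutput:")
    return "\n\n".join(blocks)

# Teach a toy task (reverse a word) purely through in-prompt examples.
examples = [("cat", "tac"), ("bird", "drib")]
prompt = build_few_shot_prompt(examples, "fish")
print(prompt)
```

A sufficiently large model completes the final `Output:` with `hsif`, having inferred the pattern from the two demonstrations alone; smaller models typically fail at exactly this kind of pattern induction.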
Emergent abilities show up because foundational sub-skills become reliable enough to combine, producing higher-level capabilities.
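One simple (and admittedly stylized) way to see why combining sub-skills produces a sharp jump rather than a gradual ramp: if a task requires k sub-skills and each succeeds independently with reliability p, overall success is p^k. This toy model is an illustration, not a claim from the text, but it shows how modest gains in per-skill reliability translate into a near-discontinuous gain in the composite ability.

```python
def task_success(p, k=10):
    """Probability a task succeeds when it needs k independent
    sub-skills, each with per-skill reliability p."""
    return p ** k

# Small improvements in p yield a sharp transition in task success.
for p in (0.5, 0.7, 0.9, 0.99):
    print(f"p={p:.2f} -> task success {task_success(p):.4f}")
```

At p = 0.5 the composite task almost never succeeds, while at p = 0.99 it succeeds about 90% of the time: the underlying sub-skills improve smoothly, but the measured ability appears to switch on.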
Emergence isn’t limited to text. Multimodal AI that combines vision and language shows its own emergent abilities, such as generating code from UI wireframes or explaining memes.

Emergence highlights the unpredictability of AI capabilities, and it underscores the importance of careful safety, ethics, and governance as models continue to scale.