
AI's emergent abilities are a big deal and will impact our work, our lives, and our society.

Updated: Feb 8

What's most profound about recent advances in generative AI is their emergent abilities, which are not simply an extrapolation of past performance. AI models don't yet "think" or "learn" like humans, but they do analyze information in ways different from humans, and they are beginning to perform tasks for which they weren't explicitly trained.


For years, AI research focused on training models for specific tasks like image classification or language translation. A new paradigm has emerged with large language models (LLMs), which are trained to predict the next word in a sequence. This allows them to be "prompted" for a wide range of tasks, even ones they weren't explicitly trained for.
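To make the idea concrete, here is a toy sketch (not a real LLM) of how a next-word predictor can be "prompted": a tiny bigram model counts which word follows which in a hypothetical corpus, and completing a prompt is just repeated next-word prediction.

```python
from collections import Counter, defaultdict

# A tiny stand-in for web-scale training text (purely illustrative).
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count bigrams: for each word, how often each other word follows it.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent next word after `word`."""
    return follows[word].most_common(1)[0][0]

def complete(prompt, n_words=4):
    """'Prompt' the model: repeatedly append the predicted next word."""
    words = prompt.split()
    for _ in range(n_words):
        words.append(predict_next(words[-1]))
    return " ".join(words)

print(complete("the cat"))
```

A real LLM replaces the bigram table with a neural network conditioned on the whole context, but the interface is the same: give it the start of a sequence, and it continues it.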


Current LLMs can perform tasks like multi-digit multiplication without task-specific training, showcasing "emergent abilities." These behaviors arise from quantitative increases in model size that lead to qualitative leaps in capability.

Stanford researchers found that for many tasks, performance either improves predictably with scale or jumps suddenly from near-random to successful at a specific size threshold. This suggests that more truly is different, and that further scaling may unlock still more capabilities.
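As a toy illustration (invented numbers, not the researchers' data), the two regimes can be sketched as curves over model scale, measured here in training FLOPs: one task improves smoothly with log-scale, the other stays near-random until a hypothetical threshold and then jumps.

```python
import math

# Hypothetical model scales in training FLOPs, log-spaced.
scales = [10**e for e in range(18, 25)]

def smooth_task(flops):
    """Performance that climbs gradually with log-scale."""
    return min(1.0, 0.1 * (math.log10(flops) - 17))

def emergent_task(flops, threshold=1e22):
    """Near-random below an (assumed) threshold, then a sudden jump."""
    return 0.02 if flops < threshold else 0.85

for f in scales:
    print(f"{f:.0e}  smooth={smooth_task(f):.2f}  emergent={emergent_task(f):.2f}")
```

The key point the sketch captures: extrapolating `emergent_task` from any set of sub-threshold scales predicts continued near-random performance, so the jump cannot be forecast from smaller models.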

The future of AI seems increasingly intertwined with these emergent abilities, offering exciting possibilities for more flexible and versatile models capable of tasks we haven't even imagined yet.


The Stanford researchers' paper notes:


"The figure below shows three examples of emergent abilities. The ability to perform arithmetic, take college-level exams (Multi-task NLU), and identify the intended meaning of a word all become non-random only for models with sufficient scale (in this case, we measure scale in training FLOPs). Critically, the sudden increase in performance is not predictable simply by extrapolating from the performance of smaller models."





