Weak AI

Weak AI, or narrow AI, refers to systems designed to perform a specific task rather than replicate general human intelligence. These systems can process information and produce outputs that appear intelligent within their defined scope, but they have no understanding, consciousness, or ability to transfer their capabilities to tasks outside that scope. A navigation app that excels at routing cannot play chess, and a chess engine cannot hold a conversation.

All AI in use today falls into this category. Virtual assistants like Siri and Alexa, email spam filters, social media recommendation feeds, and generative AI models are all examples of weak AI. Each is highly capable within its area but fundamentally limited beyond it. This is distinct from artificial general intelligence (AGI), which would involve a system capable of reasoning and learning in any domain at a level comparable to humans, and from artificial superintelligence (ASI), which would surpass human intelligence altogether.

The main criticism of weak AI is not its performance within a given task but the risks that come from deploying narrow systems in high-stakes environments. A self-driving car that misreads its surroundings or a hiring algorithm that reflects historical bias can cause real harm precisely because the system has no broader judgment to fall back on.
