A team of researchers from Stanford University and Google DeepMind announced on Wednesday the development of an AI model that achieves human-level performance on complex reasoning tasks. The system, named CogNet, scored 89% on a standardized test of abstract reasoning, matching average human performance.
"This represents a significant step forward in AI's ability to handle tasks requiring deep understanding and logical deduction," said lead researcher Dr. Elena Rodriguez. "Previous systems excelled at pattern recognition but struggled with true reasoning."
CogNet was tested on the Raven's Progressive Matrices, a non-verbal assessment of abstract reasoning. It also performed well on tasks requiring commonsense reasoning, causal inference, and counterfactual thinking—areas where AI has historically lagged behind humans.
The researchers used a novel architecture combining transformer networks with symbolic reasoning modules. Training involved both supervised learning on labeled datasets and reinforcement learning through trial-and-error problem solving.
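The article gives no implementation details for CogNet, but the general neuro-symbolic pattern it describes can be sketched in miniature: a learned component proposes and ranks candidate answers, and a symbolic module verifies them against explicit rules. The sequence task, scoring function, and all names below are illustrative assumptions, not the researchers' method.

```python
# Illustrative toy only: a hybrid "neural + symbolic" answer loop.
# The scorer stands in for a trained network; the checker stands in
# for a symbolic reasoning module. Nothing here reflects CogNet itself.

from typing import List


def neural_scorer(sequence: List[int], candidate: int) -> float:
    """Stand-in for a learned model: scores how well a candidate
    continues the sequence, using closeness to a naive extrapolation."""
    if len(sequence) < 2:
        return 0.0
    step = sequence[-1] - sequence[-2]
    predicted = sequence[-1] + step
    return 1.0 / (1.0 + abs(candidate - predicted))


def symbolic_check(sequence: List[int], candidate: int) -> bool:
    """Symbolic module: verifies the candidate satisfies an explicit
    arithmetic-progression rule, rather than merely scoring well."""
    steps = [b - a for a, b in zip(sequence, sequence[1:])]
    return all(s == steps[0] for s in steps) and candidate - sequence[-1] == steps[0]


def solve(sequence: List[int], candidates: List[int]) -> int:
    """Hybrid loop: rank candidates with the neural scorer, then accept
    the best-ranked one that also passes the symbolic check."""
    ranked = sorted(candidates, key=lambda c: neural_scorer(sequence, c), reverse=True)
    for c in ranked:
        if symbolic_check(sequence, c):
            return c
    return ranked[0]  # fall back to the neural favorite if no candidate verifies


print(solve([2, 4, 6, 8], [9, 10, 12]))  # prints 10
```

The division of labor mirrors the hybrid idea in the paragraph above: the scorer handles fuzzy pattern recognition, while the checker supplies the hard logical guarantee that pure pattern matching lacks.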
Potential applications include advanced research assistants, educational tools, and decision support systems. The team has published their findings in the journal Nature and released a limited demo version for research purposes.