What To Know
- In a new study published in Nature, researchers at Cold Spring Harbor Laboratory, Carnegie Mellon University, and Princeton University, led by Ben Cowley, have engineered a highly compact AI system modeled on primate visual neurons.
AI News: Scientists have unveiled a pocket-sized artificial intelligence model inspired by monkey brain cells, potentially reshaping how machines process visual information while consuming only a fraction of the power demanded by today’s AI systems.

Image Credit: Thailand AI News
The human brain runs on less electricity than a standard light bulb, yet modern AI platforms require vast data centers to perform similar visual recognition tasks. Seeking answers, researchers turned to biology. In a new study published in Nature, a team led by Ben Cowley at Cold Spring Harbor Laboratory, working alongside experts from Carnegie Mellon University and Princeton University, engineered a highly compact AI system modeled on primate visual neurons. This AI News report highlights how the scientists compressed a sprawling 60-million-variable network into a lean model with just 10,000 variables—without sacrificing much performance.
From Fruit Flies to V4 Neurons
The journey began with inspiration from simple organisms and evolved into a focused simulation of the brain’s V4 neurons—cells responsible for recognizing colors, curves, textures, and complex shapes. These neurons help primates instantly distinguish between everyday objects, whether it is fruit arranged neatly on a shelf or the subtle contours of a familiar face.
Traditional deep neural networks can perform similar tasks but rely on massive computational layers and energy-intensive training. Cowley’s team wanted something different: a system small enough to fit in an email attachment yet powerful enough to mirror biological strategies. They trained their model using data derived from macaque monkeys, then trimmed redundant components using statistical compression techniques similar to those applied to digital images.
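The article does not spell out which compression technique the team used. As a rough illustration only, one standard statistical compression method with a direct analogue in image compression is low-rank truncation via the singular value decomposition (SVD): a large weight matrix that is mostly redundant can be replaced by a few strong components. Every number and name below is illustrative, not taken from the study:

```python
import numpy as np

# Illustrative sketch only: compress a dense weight matrix by keeping the
# top-k singular components -- the same low-rank idea behind some
# image-compression schemes. Not the method used in the Nature study.
rng = np.random.default_rng(0)

# A hypothetical 1000x1000 weight matrix that is secretly low-rank plus noise.
low_rank = rng.normal(size=(1000, 20)) @ rng.normal(size=(20, 1000))
W = low_rank + 0.01 * rng.normal(size=(1000, 1000))

U, s, Vt = np.linalg.svd(W, full_matrices=False)
k = 20  # keep only the strongest components
W_compressed = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

original_params = W.size                                 # 1,000,000 values
compressed_params = U[:, :k].size + k + Vt[:k, :].size   # ~40,020 values
rel_error = np.linalg.norm(W - W_compressed) / np.linalg.norm(W)
print(original_params, compressed_params, round(rel_error, 4))
```

In this toy example the matrix shrinks to about 4% of its original size while the reconstruction stays close to the original, which mirrors the spirit of the study's result: most of a network's variables can be redundant.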
The outcome was striking. The compressed model not only retained high accuracy but also became transparent enough for researchers to observe what its artificial neurons were detecting. Some units responded strongly to curved shapes with defined edges—patterns resembling clusters of apples or oranges. Others reacted primarily to small dot-like features, echoing primates’ innate sensitivity to eyes.
Rethinking AI Efficiency
The implications stretch beyond academic curiosity. If biological brains operate with simpler internal models than today’s AI, engineers may be overcomplicating machine intelligence. Smaller, more efficient systems could transform applications such as self-driving vehicles, allowing them to differentiate pedestrians from drifting debris without relying on power-hungry hardware.
Experts not involved in the study suggest that updating AI architectures with modern neuroscience insights could unlock more humanlike perception. Current models still reflect decades-old assumptions about the brain. As scientific understanding deepens, so too may the sophistication—and efficiency—of artificial networks.
The broader message is compelling: nature has already solved many of the problems AI developers face today. By studying primate neurons and refining digital simulations accordingly, researchers are charting a path toward smarter, leaner, and more interpretable machines. Such breakthroughs hint that the future of artificial intelligence may not depend on building larger systems, but on building wiser ones.
For more details, visit:
https://www.nature.com/articles/s41586-026-10150-1
For the latest developments in the AI industry, keep checking Thailand AI News.