Imagine staring at a drawing of a cube, a simple, elegant shape made of intersecting lines — a child’s toy, in essence. Yet, stare long enough, and suddenly the orientation of the cube appears to shift, flipping inside-out. This is the Necker Cube, an optical illusion discovered in 1832 by the Swiss crystallographer Louis Albert Necker.
Just as the Necker Cube invites us to toggle between two realities, Rubin’s Vase challenges us to see either a vase or two faces in profile, an image that continually slips from one form to another.
Both the Necker Cube and Rubin’s Vase are classic optical illusions that confound even well-trained brains by toggling between competing interpretations. Now, scientists have coaxed artificial intelligence into falling prey to the same illusions. To get there, they had to do something rather radical: inject a bit of quantum physics.
The Challenge of Teaching AI to “See” Like Humans
Most AI-based vision systems can recognize faces and categorize objects, and some can even generate new images from scratch. But give them a trickier challenge, like an optical illusion, and they fall flat. For a long time, computers could handle the math behind images, but they couldn’t mimic the strange, fluid way human perception shifts back and forth between interpretations.
For Ivan Maksymov, a research fellow at Charles Sturt University in Australia, this challenge represented something bigger: a chance to explore the core question of whether AI can ever truly see the way we do. “Optical illusions trick our brains into seeing things which may or may not be real,” Maksymov explains.
To help AI grapple with illusions, Maksymov turned to a principle from quantum mechanics known as “quantum tunneling.” This phenomenon lets subatomic particles seemingly bypass barriers that, by the laws of classical physics, should be impenetrable. It’s the type of physics that feels pulled from science fiction, and Maksymov wondered if it might open new doors for AI.
Building a Quantum-Tunneling Neural Network
To make AI see in this quantum-inspired way, Maksymov’s team built what they call a “quantum-tunneling deep neural network.” In regular neural networks, artificial “neurons” process and store data in layers, with each layer doing a bit more of the work to produce a final answer. But Maksymov’s design took an unconventional approach, embedding the process of quantum tunneling into the network itself.
Imagine each neuron as a tiny worker accumulating input until it reaches a threshold, like a person trying to jump over a fence. In an ordinary neural network, neurons that fall short of that threshold stay inactive. But Maksymov’s neurons, inspired by tunneling, are occasionally allowed to slip through the barrier without reaching the threshold, just as an electron can pass through a wall in quantum physics. This tweak allowed the AI to react to images in ways that mirror the brain’s own flexibility.
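To make the idea concrete, here is a minimal sketch of what a tunneling-style activation might look like in code. This is not Maksymov’s actual model: the threshold, the exponential decay of the tunneling probability, and the function name are all illustrative assumptions, chosen only to show how a sub-threshold neuron could still fire occasionally.

```python
import numpy as np

# A toy "tunneling" activation, NOT the published quantum-tunneling network.
# Inputs above the threshold always fire; inputs below it still fire with a
# probability that decays exponentially with the gap, loosely echoing how
# quantum tunneling probability falls off with barrier height.

rng = np.random.default_rng(0)

def tunneling_activation(x, threshold=0.0, decay=5.0):
    """Step-like activation with a stochastic 'tunneling' tail below threshold."""
    gap = threshold - x                                    # how far below the barrier each input sits
    p_tunnel = np.exp(-decay * np.clip(gap, 0.0, None))    # 1.0 above threshold, decays below it
    fires = rng.random(x.shape) < p_tunnel                 # sub-threshold units occasionally fire anyway
    return fires.astype(float)

# Pre-activations well below, near, and above the hypothetical threshold
x = np.array([-2.0, -0.5, -0.1, 0.3, 1.0])
print(tunneling_activation(x))   # repeated calls give different sub-threshold outcomes
```

Because the sub-threshold firing is random, running the same ambiguous input through such a network several times can yield different answers, which is the kind of flip-flopping behavior described next.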
As the AI examined illusions like the Necker Cube and Rubin’s Vase, it did something remarkable: it produced not one, but multiple interpretations. Sometimes, it saw the shaded cube face at the front; sometimes, at the back. Occasionally, it even hovered between the two, as if it couldn’t make up its mind.
“When we see an optical illusion with two possible interpretations . . . we temporarily hold both interpretations at the same time, until our brains decide which picture should be seen,” Maksymov wrote in a blog post. His AI model captured this uncanny, human-like ability to see two things at once.
A New Kind of Vision
Why push AI into this odd realm of optical illusions? Maksymov envisions applications that could transform human experiences in high-stakes fields. Imagine, he suggests, a pilot flying through thick clouds, relying on instruments to tell up from down. Misinterpretation can be deadly, and the pilot’s brain, like the AI’s, must sort through ambiguous signals to arrive at the truth. An AI capable of understanding ambiguity might one day help avoid such disorientation.
There’s also hope that quantum-tunneling AI could one day aid medical diagnosis. Models trained on ambiguous patterns might help detect early signs of cognitive decline, such as dementia, in which the ability to recognize visual patterns often diminishes. Such applications could eventually yield AI systems that not only see images but also interpret them in a way that feels uniquely human.
It may be years, perhaps decades, before AI systems with this level of insight appear in everyday technology. For now, however, Maksymov’s quantum-tunneling neural network remains an awe-inspiring experiment, challenging the boundaries of what it means to truly “see.”