
Incredible New Camera Mimics the Continual Movements of Human Eyes

Scientists from the University of Maryland have developed a new camera system that replicates the minuscule movements of the human eye.

Before diving into the novel research, a bit of groundwork is required. The researchers are working to revolutionize event cameras: specialized cameras whose image sensors react to changes in brightness rather than capturing photos with a traditional shutter. Each pixel on an event camera’s sensor acts independently, firing only when the light it sees changes, which lets these cameras track objects and camera movement more accurately than conventional frame-based cameras.
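
To make that contrast with a shutter-based camera concrete, here is a minimal, hypothetical sketch in Python of the per-pixel logic an event sensor implements. The names `Event` and `pixel_update` are illustrative only, not from the paper or any sensor SDK:

```python
import math
from dataclasses import dataclass

@dataclass
class Event:
    x: int          # pixel column
    y: int          # pixel row
    t: float        # timestamp in seconds
    polarity: int   # +1 = got brighter, -1 = got darker

def pixel_update(x, y, t, brightness, last_log, threshold=0.2):
    """Per-pixel event logic, mimicking an event camera sensor.

    The pixel compares the current log-brightness against the last
    value it reported and fires one event per threshold crossing.
    A static scene produces no events at all, which is why event
    cameras output so little redundant data.
    """
    events = []
    log_b = math.log(max(brightness, 1e-6))  # log response, as in real sensors
    while abs(log_b - last_log) >= threshold:
        polarity = 1 if log_b > last_log else -1
        last_log += polarity * threshold     # step the reference level
        events.append(Event(x, y, t, polarity))
    return events, last_log
```

In real hardware this comparison runs in analog circuitry at every pixel, which is why event cameras can respond within microseconds rather than waiting for the next frame.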

While event cameras are adept at providing data about movement in general, they struggle to deliver a stable, clear view of rapidly moving subjects.

“Event cameras are a relatively new technology better at tracking moving objects than traditional cameras, but today’s event cameras struggle to capture sharp, blur-free images when there’s a lot of motion involved,” explains the new research paper’s lead author, Botao He, a computer science Ph.D. student at the University of Maryland.

A labeled diagram depicting the novel camera system (AMI-EV), with its MCU, event camera, wedge prism, and servo actuator highlighted. | Image courtesy of the UMIACS Computer Vision Laboratory at the University of Maryland.

“It’s a big problem because robots and many other technologies — such as self-driving cars — rely on accurate and timely images to react correctly to a changing environment. So, we asked ourselves: How do humans and animals make sure their vision stays focused on a moving object?” He continues.

The researchers determined that the answer lies in microsaccades, which are fixational eye movements. These small, sometimes sudden, involuntary eye movements occur when animals with foveal vision, like humans, stare at something.

‘A demonstration of how microsaccades counteract visual fading. After a few seconds of fixation (staring) on the red spot in this static image, the background details of this image begin to visually fade. This is because microsaccades have been suppressed during this time and the eye cannot provide effective visual stimulation to prevent peripheral fading.’ | Image and caption courtesy of the UMIACS Computer Vision Laboratory at the University of Maryland.

“Saccades are the rapid, jerky eye movements that we use to scan the world around us. They are necessary because high-resolution vision is only possible with the high density of photoreceptors near the center of the retina,” writes Bart Krekelberg in Current Biology. “Microsaccades resemble regular saccades in most respects except that they are tiny and occur when we are attempting to fixate the eyes on a particular location. For this reason, they are often referred to as ‘fixational saccades.’”

To replicate the way human vision works in an event camera, the researchers placed a rotating wedge prism in front of the camera’s aperture. As the prism spins, it redirects incoming light and triggers events on the image sensor even when the scene itself is still. The new camera is called the “Artificial Microsaccade-Enhanced Event Camera,” or AMI-EV.

“The geometrical optics of the rotating wedge prism allows for algorithmic compensation of the additional rotational motion, resulting in a stable texture appearance and high informational output independent of external motion,” the researchers explain.
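
The authors’ full compensation algorithm is in the paper, but the core geometric idea can be sketched simply: since the prism’s rotation angle at any instant is known, the circular image shift it induces can be subtracted from each event’s coordinates. The Python function below is a hypothetical illustration; `omega` and `radius` stand in for calibration values that would depend on the actual prism and lens, and are not figures from the paper:

```python
import math

def compensate_event(x, y, t, omega, radius, phase0=0.0):
    """Map an event back to stabilized image coordinates.

    A wedge prism deviates the optical axis by a fixed angle, so
    spinning it at angular velocity `omega` (rad/s) sweeps the image
    along a circle of `radius` pixels (set by the prism's deviation
    angle and the lens focal length). Since the prism angle at the
    event's timestamp `t` is known, the induced shift can simply be
    subtracted.
    """
    phase = omega * t + phase0        # prism rotation angle at this event
    dx = radius * math.cos(phase)     # induced horizontal image shift
    dy = radius * math.sin(phase)     # induced vertical image shift
    return x - dx, y - dy             # coordinates in the stabilized frame
```

Because the spinning prism keeps the image drifting across the sensor, even a static scene continuously triggers events; a correction of this kind then maps them back into a single stable frame.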

Compared to a standard event camera, the enhanced camera can acquire more information about its environment and deliver richer, higher-resolution data concerning fast-moving objects.

“We figured that just like how our eyes need those tiny movements to stay focused, a camera could use a similar principle to capture clear and accurate images without motion-caused blurring,” He says.

At a basic level, our eyes are constantly moving, yet vision appears stable thanks to complex visual processing in the brain. The enhanced event camera recreates this arrangement with a continually rotating prism that redirects light, much as parts of the eye redirect and focus light onto the retina, paired with a software algorithm that compensates for every movement to produce a stable image from the shifting light.

“Our eyes take pictures of the world around us and those pictures are sent to our brain, where the images are analyzed. Perception happens through that process and that’s how we understand the world,” says study co-author Yiannis Aloimonos. Aloimonos is a professor of computer science at the University of Maryland and director of the school’s Computer Vision Laboratory. “When you’re working with robots, replace the eyes with a camera and the brain with a computer. Better cameras mean better perception and reactions for robots.”

A comparison of a person in motion captured by the AMI-EV and a standard RGB camera at frame rates from 25 fps to 10,000 fps. The enhanced event camera, AMI-EV, delivers better resolution at faster frame rates than prior event cameras. | Image courtesy of the UMIACS Computer Vision Laboratory at the University of Maryland.

The novel event camera proved very successful during testing, delivering improved resolution, better image quality in low light, and reduced latency. The camera could also capture motion at tens of thousands of frames per second, a feat very few traditional cameras can match, and then only at extremely low resolutions.

“Our novel camera system can solve many specific problems, like helping a self-driving car figure out what on the road is a human and what isn’t,” Aloimonos concludes. “As a result, it has many applications that much of the general public already interacts with, like autonomous driving systems or even smartphone cameras. We believe that our novel camera system is paving the way for more advanced and capable systems to come.”

The complete research paper was published in Science Robotics.


Image credits: University of Maryland. Botao He et al., ‘Microsaccade-inspired event camera for robotics,’ Science Robotics. Header photo licensed via Depositphotos.

