Sensor specialist iniLabs recently developed what it calls a neuromorphic sensor, able to mimic the human eye and the way it processes information. Researchers at Kingston University in London, in cooperation with King's College London and University College London, are working on possible applications for this new technology.
A conventional camera captures video by taking a series of pictures that form a continuous sequence, which looks to us like an animation. "This can be a waste of resources if there's more motion in some areas than in others," explains Professor Maria Martini, who leads one of the collaborating teams on the project. "Like in an explosion, you end up with fast-moving sections not being captured accurately due to frame-rate and processing power restrictions, and too much data being used to represent areas that remain static." By mimicking the way a mammal's eye functions, these problems could be avoided. The team developed a camera technology that adjusts the sample rate according to changes in light conditions, just like our own eyes. Only in this case, it's not the brain that sends out signals, but a neuromorphic sensor.
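The idea of sampling only where the light changes can be illustrated with a minimal sketch. This is not iniLabs' actual sensor design, just a toy model of an event-based pixel array: each pixel emits an event only when its log intensity changes by more than a threshold, so static regions produce no data at all. The function name, threshold value, and event format are illustrative assumptions.

```python
# Toy model of an event-based ("neuromorphic") pixel array.
# Illustrative only -- not iniLabs' actual design.
import math

def events_from_frames(frames, threshold=0.2):
    """Emit (time, pixel_index, polarity) events whenever a pixel's
    log intensity moves by more than `threshold` from its last
    reported value. Static pixels generate no events."""
    events = []
    # Per-pixel reference: the log intensity last reported.
    reference = [math.log(v + 1e-6) for v in frames[0]]
    for t, frame in enumerate(frames[1:], start=1):
        for i, v in enumerate(frame):
            log_v = math.log(v + 1e-6)
            diff = log_v - reference[i]
            if abs(diff) >= threshold:
                # Polarity: +1 for brighter, -1 for darker.
                events.append((t, i, 1 if diff > 0 else -1))
                reference[i] = log_v  # reset reference for this pixel
    return events

# Pixel 0 stays constant and produces no events;
# pixel 1 brightens at t=1 and darkens at t=3.
frames = [[0.5, 0.1], [0.5, 0.4], [0.5, 0.4], [0.5, 0.1]]
print(events_from_frames(frames))  # → [(1, 1, 1), (3, 1, -1)]
```

Note how the unchanging pixel contributes nothing to the output, while the changing pixel is reported exactly when it changes: this is the resource saving Martini describes, since bandwidth and processing follow the motion rather than a fixed frame rate.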
This change not only lowers the hardware requirements for recording high-quality footage, it also makes the camera smarter. "This energy saving opens up a world of new possibilities for surveillance and other uses," said Professor Martini, "from robots and drones to the next generation of retinal implants." The team is currently focusing on improving the efficiency with which the smart system gathers information. They are also looking at the Internet of Things, exploring the possibility of uploading this information to the cloud, where it could be processed and shared across machines and platforms, making the technology not just a better retinal replacement but a genuinely smart system.