Socionext and a research group at the Institute for Datability Science, Osaka University, have jointly developed a new deep learning method that enables image recognition and object detection in extremely low-light conditions. Led by Professor Hajime Nagahara, the development team merged multiple models to create a method that detects objects without the need to generate the huge training datasets previously thought to be essential.

Socionext plans to incorporate the new method into the company’s image signal processors to develop new SoCs, as well as new camera systems built around those SoCs, for automotive, security, industrial and other applications that require high-performance image recognition. The research will be presented at the European Conference on Computer Vision (ECCV) 2020, held online from August 24 through 28 (British Summer Time).

New Method Achieves the Goal of Improved Image Recognition Performance

A major challenge throughout the evolution of computer vision technology has been improving image recognition performance under poor lighting conditions for applications such as in-vehicle cameras and surveillance systems. One earlier approach, “Learning to See in the Dark” [1], applied deep learning directly to RAW image data from the sensor.

However, that method requires a dataset of more than 200,000 images with more than 1.5 million annotations [2] for end-to-end learning. Preparing such a large dataset of RAW images is both costly and prohibitively time-consuming.

The joint research team has proposed a domain adaptation method that builds the required model from existing datasets by applying machine learning techniques such as Transfer Learning and Knowledge Distillation.

The new domain adaptation method resolves that challenge through the following steps: (1) building an inference model with existing datasets; (2) extracting knowledge from that inference model; (3) merging the models with glue layers; and (4) building a generative model by knowledge distillation. This enables the desired image recognition model to be learned from the existing datasets.
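To make the model-merging idea concrete, here is a minimal PyTorch sketch of how a pretrained RAW front end and a pretrained detector might be joined by a small trainable glue layer (step 3). The class names, layer sizes and channel counts are illustrative assumptions, not the architecture used in the paper.

```python
import torch
import torch.nn as nn

# Hypothetical stand-ins for the two pretrained models; the exact
# architectures used in the research are not given in this article.
class RawEncoder(nn.Module):
    """Front half of a 'See in the Dark'-style model: packed RAW -> features."""
    def __init__(self, in_ch=4, feat_ch=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, feat_ch, 3, padding=1), nn.ReLU(),
        )
    def forward(self, raw):
        return self.body(raw)

class DetectorBackbone(nn.Module):
    """Back half of a YOLO-style detector: image features -> detection maps."""
    def __init__(self, in_ch=64, num_outputs=85):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, 128, 3, padding=1), nn.ReLU(),
            nn.Conv2d(128, num_outputs, 1),
        )
    def forward(self, feats):
        return self.body(feats)

class GlueLayer(nn.Module):
    """Small trainable adapter that maps the RAW encoder's feature space
    into the feature space the detector expects."""
    def __init__(self, in_ch=64, out_ch=64):
        super().__init__()
        self.adapt = nn.Sequential(nn.Conv2d(in_ch, out_ch, 1), nn.ReLU())
    def forward(self, feats):
        return self.adapt(feats)

class MergedModelSketch(nn.Module):
    """Merged model: frozen pretrained parts, trainable glue layer in between."""
    def __init__(self, raw_encoder, glue, detector):
        super().__init__()
        self.raw_encoder, self.glue, self.detector = raw_encoder, glue, detector
        for p in self.raw_encoder.parameters():
            p.requires_grad = False
        for p in self.detector.parameters():
            p.requires_grad = False
    def forward(self, raw):
        return self.detector(self.glue(self.raw_encoder(raw)))

# Dummy forward pass on a 4-channel packed RAW tensor (RGGB planes).
model = MergedModelSketch(RawEncoder(), GlueLayer(), DetectorBackbone())
out = model(torch.randn(1, 4, 128, 128))
print(out.shape)  # torch.Size([1, 85, 128, 128])
```

In this sketch only the glue layer carries trainable parameters, which is what allows the pretrained components to be reused rather than retrained on a new RAW dataset.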

Using this domain adaptation method, the team built an object detection model, “YOLO in the Dark,” from the YOLO model [3] and RAW images taken in extremely dark conditions. The object detection model can be learned on RAW images using the existing dataset, without generating additional datasets.
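The distillation part of the recipe (step 4 above) can be sketched in the same spirit. The helper names `simulate_raw_from_rgb` and `distill_step`, and the assumption that RAW inputs can be simulated from existing annotated RGB images, are illustrative placeholders rather than the procedure described in the paper; they only show how a glue layer could be fitted against a teacher's features without annotating new RAW data.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def simulate_raw_from_rgb(rgb):
    """Pack a crude RGGB mosaic from an RGB batch (illustrative only)."""
    r  = rgb[:, 0:1, 0::2, 0::2]
    g1 = rgb[:, 1:2, 0::2, 1::2]
    g2 = rgb[:, 1:2, 1::2, 0::2]
    b  = rgb[:, 2:3, 1::2, 1::2]
    return torch.cat([r, g1, g2, b], dim=1)

def distill_step(raw_branch, teacher_branch, rgb_batch, optimizer):
    """Match the RAW branch's features to the teacher's features on the same
    scene, so only existing RGB data is needed to learn the glue layer."""
    raw = simulate_raw_from_rgb(rgb_batch)
    student_feats = raw_branch(raw)
    with torch.no_grad():
        teacher_feats = teacher_branch(rgb_batch)
    loss = F.mse_loss(student_feats, teacher_feats)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy stand-ins, just to show the shapes line up:
raw_branch = nn.Conv2d(4, 64, 3, padding=1)                # RAW encoder + glue layer
teacher_branch = nn.Conv2d(3, 64, 3, stride=2, padding=1)  # teacher feature extractor
opt = torch.optim.Adam(raw_branch.parameters(), lr=1e-4)
print(distill_step(raw_branch, teacher_branch, torch.randn(2, 3, 128, 128), opt))
```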

In contrast to the existing YOLO model, which cannot detect the object even after the brightness of the image is corrected (a), the proposed method recognizes RAW images directly and detects the objects (b). The processing required by the new method is also roughly half that of the straightforward alternative of chaining the previous models together (c).

This “direct recognition of RAW images” is expected to be used for object detection in extremely dark conditions, along with many other applications. Socionext will add the new method to its line-up of leading-edge imaging technologies and SoCs to enable advanced camera systems and applications requiring high-quality, high-performance image recognition. www.socionextus.com

Notes:
[1] “Learning to See in the Dark,” Chen et al., CVPR 2018
[2] The MS COCO dataset, as an example
[3] YOLO (You Only Look Once): a deep learning object detection method

Hordon Kim, International Editor, hordon@powerelectronics.co.kr
