Different light sources change the appearance of the same object, particularly its color. While the human eye can recognize objects even under extreme variations in scene lighting, computer vision systems cannot. To improve them, we need a solution that simulates uniform lighting on any scene, making the 'appearance' of an object independent of the illumination.
For example, take two images of an apple under varied lighting. The apple will look different under fluorescent lights than under the sun, yet it remains recognizable to most people. A computer vision system, in contrast, may fail to recognize the two images as the same object.
The major problem is that controlling how an object is illuminated can be hard, if not impossible. For the purpose of color analysis, we therefore need an algorithm that corrects the lighting after the image is taken.
This problem is closely related to color calibration in photography, where color checkers are used to achieve accurate colors.
The general idea behind the solution comes down to two stages: estimating the color of the scene illumination, and then correcting the image as if it had been taken under a neutral light source.
Many computer vision tasks benefit from color constancy, and computational color constancy finds a wide range of applications. Scene light correction can be put to use across industries and sectors of business, including healthcare, scientific research, entertainment, and the food and beverage industry, among other areas of CV applications.
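This two-stage view (estimate the scene illuminant, then correct the image) can be sketched with the classic gray-world baseline standing in for the learned estimator described below. This is a simplified illustration, not the actual system; the helper names are ours:

```python
import numpy as np

def estimate_illuminant_gray_world(img):
    """Classic gray-world baseline: assume the average scene color is gray,
    so the per-channel means reveal the illuminant color.
    `img` is an HxWx3 float array in [0, 1]; returns a unit RGB vector."""
    illum = img.reshape(-1, 3).mean(axis=0)
    return illum / np.linalg.norm(illum)

def correct_illuminant(img, illum):
    """Von Kries-style diagonal correction: divide each channel by the
    estimated illuminant so the scene appears lit by neutral light.
    Scaled so that a neutral illuminant (1,1,1)/sqrt(3) is a no-op."""
    corrected = img / (illum * np.sqrt(3))
    return np.clip(corrected, 0.0, 1.0)
```

Applying `correct_illuminant(img, estimate_illuminant_gray_world(img))` turns a uniformly tinted gray scene back into a neutral gray one.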
The first phase of the process, i.e., light (color) detection, is the more challenging one and comes at a higher computational cost. Deep learning has been responsible for the most notable breakthroughs in computer vision, yet color constancy specifically has received comparatively little attention from deep learning researchers. We have made an effort to close this gap and developed our own approach to training a convolutional neural network (CNN) to predict the scene illumination from an image with a limited field of view.
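As a sketch of the kind of architecture involved (not the authors' actual network; the layer sizes and names here are illustrative assumptions), a small CNN that regresses the illuminant chromaticity from a limited field-of-view patch could look like:

```python
import torch
import torch.nn as nn

class IlluminantCNN(nn.Module):
    """Hypothetical sketch: regress the illuminant color (r, g, b)
    from an image patch."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # pool to a single feature vector
        )
        self.head = nn.Linear(64, 3)

    def forward(self, x):
        z = self.features(x).flatten(1)
        illum = self.head(z)
        # Normalize: only the illuminant direction (chromaticity) matters,
        # not its overall brightness.
        return illum / illum.norm(dim=1, keepdim=True).clamp_min(1e-8)
```

Such a model would typically be trained with an angular loss between the predicted and ground-truth illuminant vectors.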
Further in the process, we came across another challenge. A CNN learns representative features from large and varied datasets, but there are very few datasets for color constancy, so we needed to build a custom training pipeline in which every subsequent stage improves on the results of the previous one.
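One common way to extend scarce color-constancy data is to relight existing images with randomly sampled illuminants. This is a hedged sketch of such an augmentation step, not the authors' exact pipeline; the sampling scheme is our own assumption:

```python
import numpy as np

rng = np.random.default_rng(0)

def relight(img, old_illum, new_illum):
    """Synthesize a training sample under a different light: divide out the
    known illuminant, then multiply by a newly sampled one."""
    return np.clip(img / old_illum * new_illum, 0.0, 1.0)

def sample_illuminant():
    """Draw a plausible illuminant direction (illustrative scheme):
    a unit RGB vector jittered around neutral light."""
    v = np.ones(3) + rng.uniform(-0.4, 0.4, size=3)
    return v / np.linalg.norm(v)
```

Each relit image comes with its new illuminant as a free ground-truth label, which is what makes this kind of augmentation attractive when labeled data is scarce.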
Moving to the next part of the process, we faced the issue of changing the lighting in the scene. This step is the most difficult both computationally and conceptually, as it is not entirely clear how to accomplish it correctly. It is straightforward algorithmically, but it involves heavy mathematical computation as well as the problem of recovering a linear color space.
The primary reason is that almost all consumer digital cameras output images in the sRGB color space, which is not linear, while correct relighting requires a linear color space. We therefore modeled the backward path from sRGB to linear space. Since every camera has a different color image processing pipeline, this was a challenging process, but we succeeded.
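As the text notes, every camera's pipeline differs, so the exact inverse is device-specific; a common first approximation, however, is the standard sRGB transfer function from IEC 61966-2-1. A minimal sketch with our own helper names:

```python
import numpy as np

def srgb_to_linear(c):
    """Invert the standard sRGB transfer function (IEC 61966-2-1) so pixel
    values become proportional to scene radiance. Input in [0, 1]."""
    c = np.asarray(c, dtype=np.float64)
    return np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)

def linear_to_srgb(c):
    """Re-apply the sRGB encoding after the scene light has been corrected."""
    c = np.asarray(c, dtype=np.float64)
    return np.where(c <= 0.0031308, c * 12.92, 1.055 * c ** (1 / 2.4) - 0.055)
```

Light correction would then be applied between the two calls: decode to linear, rescale the channels, and re-encode to sRGB for display.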
Notably, the entire system consists of just the two blocks described above: illuminant estimation and light correction. Here are the results from machine vision applications relying on daylight modeling:
We compared the results obtained from our approach to other solutions. The comparison confirmed state-of-the-art performance on all known datasets in terms of the standard color-constancy metric.
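In the color-constancy literature, the standard metric is the recovery angular error: the angle between the estimated and ground-truth illuminant vectors. A minimal implementation:

```python
import numpy as np

def angular_error_deg(est, gt):
    """Recovery angular error in degrees between two illuminant RGB vectors.
    Brightness cancels out: only the direction (chromaticity) matters."""
    est, gt = np.asarray(est, float), np.asarray(gt, float)
    cos = np.dot(est, gt) / (np.linalg.norm(est) * np.linalg.norm(gt))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))
```

Benchmarks typically report the mean and median of this error over a dataset, since the error distribution is heavy-tailed.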
We also tested our solution on images with extreme illumination and noticed that it could use some improvement. During our research, we discovered a lack of chroma diversity in the available dataset distributions. To mitigate this problem, we once again extended the existing datasets using a custom augmentation mechanism. As a result, we achieved the most robust modeling and inference algorithms currently available.
The performance of our model can be pushed even further as computer vision advances: new models and layers can improve its accuracy and efficiency. For instance, several datasets for color constancy have emerged just recently.
As we have stated previously, research on color constancy via deep learning is still limited. As new tools appear and more tests are conducted, computer vision will become more capable of replicating human vision.
We are driven to enable our clients to become leading innovators in their industry. Together, we can broaden the evidence base for CV solutions to increase innovation and accelerate your business growth.