Evaluating Continual Deep Learning: a New Benchmark for Image Classification

A new dataset, the CLEAR benchmark (Continual LEArning on Real-World Imagery), aims to establish a consistent image-classification benchmark for future research in continual learning.

Retraining deep networks to adapt to new data or tasks (e.g. new labels) is of practical importance in machine learning engineering. The challenge manifests differently across fields. For example, in time-series forecasting, network training has to be carried out periodically because of observed data drift, so one may want previous training iterations to have a long-lasting positive effect on test-time performance as new ones are scheduled. In computer vision, it may be necessary to add new labels to a classifier, or to keep the classifier consistent with the natural evolution of objects' shape, color, and context over time.
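As a concrete illustration of the label-addition scenario, the following minimal sketch (assuming a PyTorch model whose final layer is an nn.Linear head; the helper name is hypothetical) grows a classifier's output layer to accept new labels while preserving the weights already learned for the existing classes:

import torch
import torch.nn as nn

def expand_classifier_head(old_head: nn.Linear, num_new_classes: int) -> nn.Linear:
    # Grow the final linear layer, copying over the weights and biases
    # already learned for the existing classes.
    old_out, in_features = old_head.weight.shape
    new_head = nn.Linear(in_features, old_out + num_new_classes)
    with torch.no_grad():
        new_head.weight[:old_out] = old_head.weight
        new_head.bias[:old_out] = old_head.bias
    return new_head

# Example: a 10-class head grows to accommodate 2 new labels.
head = nn.Linear(512, 10)
head = expand_classifier_head(head, num_new_classes=2)
print(head)  # Linear(in_features=512, out_features=12, bias=True)

Only the new rows of the head start from random initialization, so the existing classes keep their learned decision boundaries; preventing those boundaries from degrading during further training is exactly the continual-learning problem discussed below.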

The CLEAR benchmark uses time-stamped images from the Yahoo Flickr Creative Commons 100 Million (YFCC100M) dataset. The framework uses CLIP for pre-labeling and Amazon Mechanical Turk (MTurk) for crowdsourced verification. The main goal is a dataset capturing the natural temporal evolution of objects across a decade (2004-2014) for 11 classes. This may allow a more realistic comparison of new ideas, as previous benchmarks rely on artificial modifications of existing datasets (e.g. Permuted MNIST, Split-MNIST, Split-CIFAR, CORe50).
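The pre-labeling step can be illustrated with CLIP's standard zero-shot classification usage (via the openai/CLIP package). The sketch below is a minimal illustration rather than the benchmark's actual pipeline; the class names and image path are placeholders:

import clip
import torch
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# Placeholder class names; the actual CLEAR vocabulary differs.
classes = ["camera", "laptop", "soccer ball"]
text = clip.tokenize([f"a photo of a {c}" for c in classes]).to(device)

# Placeholder path standing in for a time-stamped YFCC100M image.
image = preprocess(Image.open("flickr_photo.jpg")).unsqueeze(0).to(device)

with torch.no_grad():
    logits_per_image, _ = model(image, text)
    probs = logits_per_image.softmax(dim=-1)

# The highest-scoring prompt becomes the candidate label,
# which crowdworkers then verify on MTurk.
print(classes[probs.argmax().item()], probs.max().item())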

Theoretically, gradient-based learning is local in parameter space and prone to what is known as catastrophic forgetting: when a machine learning model is updated, its test-time performance on previous tasks or data can degrade. In practice, the common heuristic in machine learning engineering is to replay the previous data within the new training iterations; however, this is not very efficient, as training time grows roughly linearly with the cumulative number of data points. This makes continual learning a very active research field. Recently, a research organization, ContinualAI, was also formed to enable knowledge sharing between research groups; the group maintains a common repository named Avalanche for models and data.
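A minimal sketch of the replay heuristic, here with a bounded buffer so that training cost does not grow with the full history (the helper name and sizes are illustrative; PyTorch is assumed):

import random
import torch
from torch.utils.data import ConcatDataset, DataLoader, Subset, TensorDataset

def replay_loader(old_dataset, new_dataset, replay_size: int, batch_size: int = 32):
    # Mix a fixed-size random sample of old data into the new task's data,
    # rather than replaying the entire history on every retraining pass.
    idx = random.sample(range(len(old_dataset)), k=min(replay_size, len(old_dataset)))
    buffer = Subset(old_dataset, idx)
    return DataLoader(ConcatDataset([buffer, new_dataset]),
                      batch_size=batch_size, shuffle=True)

# Toy example: 1,000 "old" points and 200 "new" points,
# but only 100 old points are replayed alongside the new data.
old = TensorDataset(torch.randn(1000, 8), torch.randint(0, 5, (1000,)))
new = TensorDataset(torch.randn(200, 8), torch.randint(0, 5, (200,)))
loader = replay_loader(old, new, replay_size=100)

Bounding the buffer trades some protection against forgetting for a training cost that stays constant per retraining pass; choosing what to keep in the buffer is itself an active research question.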

For more detailed information about the benchmark, consult the official webpage. To learn more about continual machine learning research, the following review article is a good starting point.
