
TensorFlow Learns Cucumber Selection and Classification


Criticized in Hacker News threads as Google marketing and celebrated by others as an example of the increasing ubiquity of deep learning, neural networks, and machine learning, Makoto Koike's project shows how TensorFlow learned his farming family's discipline of cucumber selection and classification. The results were a greater success than he expected. Selection and classification is a time-consuming process that can't be taught quickly to temporary staff during peak harvest season, and it often means long hours in which Koike's family must meticulously sort and classify cucumbers based on a number of attributes.

Koike used 7,000 images of cucumbers, classified by his family over the course of three months, as the training data set. When he put the network to the test, he used a Raspberry Pi to control image capture for processing by the trained neural network. The network achieved success rates between 70 and 90 percent on the experiment group, whose images were not included in the training set. The training data sets are available in the CUCUMBER-9 repo; the TensorFlow Python API implementation is reportedly modified sample code from TensorFlow's Deep MNIST for Experts, but is not available in the repo at this time. The article did not provide specifications for the compute profile Koike used to train the model, but it did include a demonstration of the trained network in action. On the efficacy and accuracy of the model, Koike stated:

"When I did a validation with the test images, the recognition accuracy exceeded 95%. But if you apply the system with real use cases, the accuracy drops down to about 70%. I suspect the neural network model has the issue of "overfitting" (the phenomenon in neural network where the model is trained to fit only to the small training dataset) because of the insufficient number of training images".
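The gap Koike describes, high accuracy on held-out test images but much lower accuracy in real use, is the signature of overfitting, and it can be reproduced on synthetic data. The following NumPy sketch is purely illustrative (the data, sizes, and learning rate are made up, not Koike's): a logistic-regression model with many more weights than training examples memorizes random labels perfectly while doing no better than chance on fresh data.

```python
import numpy as np

rng = np.random.default_rng(42)

# Many features, few training examples, labels unrelated to the features:
# the classic recipe for overfitting.
n_train, n_test, n_features = 20, 500, 200
X_train = rng.normal(size=(n_train, n_features))
X_test = rng.normal(size=(n_test, n_features))
y_train = rng.integers(0, 2, size=n_train)
y_test = rng.integers(0, 2, size=n_test)

# Logistic regression trained by gradient descent; with 200 weights and
# only 20 examples, the model can fit the training labels exactly.
w = np.zeros(n_features)
for _ in range(500):
    z = np.clip(X_train @ w, -30, 30)   # clip logits for numerical safety
    p = 1.0 / (1.0 + np.exp(-z))
    w -= 0.1 * X_train.T @ (p - y_train) / n_train

train_acc = (((X_train @ w) > 0) == y_train).mean()
test_acc = (((X_test @ w) > 0) == y_test).mean()
```

On this run the training accuracy is near perfect while test accuracy hovers around chance; more training data, as Koike suggests, is the standard remedy.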

The classification problem fits the general pattern of a good deep learning candidate: imagery that falls into the "I'll know it when I see it" category, where classification is intuitive and experience-driven, hard to describe in language, and takes practice to do well. Koike elaborated on the topic, noting:

"The sorting work is not an easy task to learn. You have to look at not only the size and thickness, but also the color, texture, small scratches, whether or not they are crooked and whether they have prickles. It takes months to learn the system and you can't just hire part-time workers during the busiest period. I myself only recently learned to sort cucumbers well"
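Koike's modified Deep MNIST code is not published, so the following is only a rough stand-in for the shape of such a pipeline: 80 x 80 grayscale images flattened into feature vectors and scored against nine cucumber grades (the class count is an assumption based on the CUCUMber-9 repo name; the data here is synthetic). A minimal NumPy softmax classifier in place of the real convolutional network:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the unpublished classifier: a single softmax
# layer over flattened 80x80 grayscale images, with 9 cucumber grades.
NUM_CLASSES = 9
IMG_SIDE = 80
NUM_FEATURES = IMG_SIDE * IMG_SIDE

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # subtract max for stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Synthetic placeholder data standing in for the 7,000 training images.
X = rng.normal(size=(32, NUM_FEATURES))
y = rng.integers(0, NUM_CLASSES, size=32)
onehot = np.eye(NUM_CLASSES)[y]

# Weight matrix and bias, trained with plain gradient descent on
# cross-entropy loss.
W = np.zeros((NUM_FEATURES, NUM_CLASSES))
b = np.zeros(NUM_CLASSES)
for _ in range(100):
    probs = softmax(X @ W + b)
    grad = probs - onehot
    W -= 0.1 * (X.T @ grad) / len(X)
    b -= 0.1 * grad.mean(axis=0)

predictions = softmax(X @ W + b).argmax(axis=1)
```

The real system replaces this single layer with convolutional and pooling layers in the style of the Deep MNIST tutorial, which is what lets it pick up on the texture, scratch, and shape cues Koike describes.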

Scalability and compute time are challenges Koike faces with the current prototype; even with the images downscaled to 80 x 80 pixels, training still takes two to three days on the 7,000-image data set. Although Koike has expressed interest, he has not yet run training on Google's Cloud ML, which is marketed as a large-scale cluster for distributed TensorFlow training. He also noted he has not yet tested permutations of parameters, configurations, and algorithms as much as he would like.
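Downscaling to 80 x 80 is the one preprocessing step the article does describe. The article doesn't say how Koike implements it; one simple approach is block averaging, sketched here in NumPy for grayscale frames (the function name and the crop-to-a-multiple behavior are assumptions):

```python
import numpy as np

def downscale(img, side=80):
    """Shrink a grayscale image to side x side by averaging pixel blocks.

    Assumes the input is larger than side x side; any remainder after
    dividing into blocks is cropped off. Real camera frames might instead
    be resized with proper interpolation.
    """
    h, w = img.shape
    fh, fw = h // side, w // side
    cropped = img[:fh * side, :fw * side]
    # Reshape so each output pixel's source block gets its own axes,
    # then average over those axes.
    return cropped.reshape(side, fh, side, fw).mean(axis=(1, 3))

# A 160x160 test pattern shrinks to 80x80; each output pixel is the
# mean of a 2x2 block of the input.
frame = np.arange(160 * 160, dtype=float).reshape(160, 160)
small = downscale(frame)
```

Block averaging keeps the coarse shape and shading cues while cutting the pixel count, which is what makes the two-to-three-day training time tractable at all on Koike's hardware.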
