Amazon SageMaker Provides New Built-in TensorFlow Image Classification Algorithms

Amazon is announcing a new built-in TensorFlow algorithm for image classification in Amazon SageMaker. The supervised learning algorithm supports transfer learning for many pre-trained models available on TensorFlow Hub. It takes an image as input and outputs a probability for each of the class labels. The pre-trained models can be fine-tuned with transfer learning even when a large number of training images is not available. The algorithm is available through the SageMaker built-in algorithms as well as through the SageMaker JumpStart UI inside SageMaker Studio.
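As a rough sketch of how such a built-in algorithm is typically used from the SageMaker Python SDK: the model ID below picks one example TensorFlow Hub model, and the IAM role, S3 paths, and instance type are illustrative placeholders, not values from the announcement.

    from sagemaker import image_uris, model_uris, script_uris, hyperparameters
    from sagemaker.estimator import Estimator

    # Example model ID: a MobileNetV2 image classifier from TensorFlow Hub.
    model_id, model_version = (
        "tensorflow-ic-imagenet-mobilenet-v2-100-224-classification-4", "*")
    instance_type = "ml.p3.2xlarge"

    # Retrieve the Docker image, training script, and pre-trained model artifacts.
    train_image_uri = image_uris.retrieve(
        region=None, framework=None, image_scope="training",
        model_id=model_id, model_version=model_version,
        instance_type=instance_type)
    train_source_uri = script_uris.retrieve(
        model_id=model_id, model_version=model_version, script_scope="training")
    train_model_uri = model_uris.retrieve(
        model_id=model_id, model_version=model_version, model_scope="training")

    # Default hyper-parameters (dropout rate, L2 factor, etc.) can be overridden.
    hp = hyperparameters.retrieve_default(
        model_id=model_id, model_version=model_version)

    estimator = Estimator(
        role="<your-sagemaker-execution-role>",   # placeholder
        image_uri=train_image_uri,
        source_dir=train_source_uri,
        model_uri=train_model_uri,
        entry_point="transfer_learning.py",       # script shipped in train_source_uri
        instance_count=1,
        instance_type=instance_type,
        hyperparameters=hp,
    )
    estimator.fit({"training": "s3://<your-bucket>/<training-data-prefix>/"})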

Transfer learning is a term used in machine learning to describe reusing what a model has learned on one task to build a model for another task. A classification layer is attached to the pre-trained TensorFlow Hub model, sized according to the number of class labels in your training data. The classification layer consists of a fully connected dense layer with a 2-norm regularizer, initialized with random weights, followed by a dropout layer. The dropout rate of the dropout layer and the L2 regularization factor for the dense layer are hyper-parameters used in model training. The network can then be fine-tuned on the new training data either as a whole, with the pre-trained model included, or in just the top classification layer.
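To make that structure concrete, here is a minimal Keras sketch of such a classification head on top of a frozen TensorFlow Hub model. The Hub URL, class count, dropout rate, and L2 factor are example values, not the algorithm's actual implementation.

    import tensorflow as tf
    import tensorflow_hub as hub

    NUM_CLASSES = 5  # number of class labels in your training data (example value)

    # Pre-trained feature extractor from TensorFlow Hub (one example model).
    feature_extractor = hub.KerasLayer(
        "https://tfhub.dev/google/imagenet/mobilenet_v2_100_224/feature_vector/4",
        trainable=False)  # frozen base; set True to fine-tune the whole network

    model = tf.keras.Sequential([
        tf.keras.layers.InputLayer(input_shape=(224, 224, 3)),
        feature_extractor,
        tf.keras.layers.Dropout(rate=0.2),  # dropout rate hyper-parameter
        tf.keras.layers.Dense(
            NUM_CLASSES,
            kernel_regularizer=tf.keras.regularizers.l2(1e-4),  # L2 factor
            kernel_initializer="glorot_uniform"),  # random initialization
    ])
    model.compile(
        optimizer="adam",
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
        metrics=["accuracy"])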

As a supervised learning algorithm, the Amazon SageMaker TensorFlow image classification algorithm requires labeled training data: training datasets must consist of images in .jpg, .jpeg, or .png format.
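Training images are typically organized in a folder-per-class layout, with each sub-directory name serving as a class label; the bucket and class names below are hypothetical:

    s3://<your-bucket>/<training-data-prefix>/
        roses/
            image_0001.jpg
            ...
        sunflowers/
            image_0001.jpg
            ...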

Image classification can be run in two modes: full training and transfer learning. In full-training mode, the network is initialized with random weights and trained on user data from scratch. In transfer-learning mode, the network is initialized with pre-trained weights and only the top fully connected layer is initialized with random weights; the whole network is then fine-tuned on the new data. In this mode, training succeeds even with a smaller dataset, because the network starts from weights it has already learned rather than from scratch.
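The two modes map naturally onto how a Keras network is constructed; a minimal sketch, with the Hub URL and class count as example choices:

    import tensorflow as tf
    import tensorflow_hub as hub

    NUM_CLASSES = 5  # example value

    # Transfer-learning mode: pre-trained Hub weights. trainable=True fine-tunes
    # the whole network; trainable=False trains only the new top layer.
    base = hub.KerasLayer(
        "https://tfhub.dev/google/imagenet/mobilenet_v2_100_224/feature_vector/4",
        trainable=True)

    # Full-training mode: random weights everywhere, trained from scratch.
    scratch = tf.keras.applications.MobileNetV2(
        weights=None, classes=NUM_CLASSES, input_shape=(224, 224, 3))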

Deep learning has revolutionized the image classification domain and achieved great performance. Various deep learning networks, such as ResNet, DenseNet, and Inception, have been developed to be highly accurate for image classification. At the same time, there have been efforts to collect the labeled image data that is essential for training these networks. ImageNet is one such large dataset, with more than 11 million images in about 11,000 categories. Once a network is trained on ImageNet data, it can generalize to other datasets as well through simple re-adjustment or fine-tuning. In this transfer-learning approach, a network is initialized with pre-trained weights, which are later fine-tuned for an image classification task on a different dataset.
