
Amazon Released Incremental Training Feature in SageMaker JumpStart


AWS recently released a new feature in SageMaker JumpStart, part of its machine-learning (ML) service SageMaker, that incrementally retrains ML models on expanded datasets. With this feature, developers can fine-tune their models for better performance in production with a couple of clicks.

With the new JumpStart feature, developers can incrementally retrain and fine-tune pre-trained models without writing extra code. This ML capability is known as transfer learning: a general model is fine-tuned on a new dataset for a business-specific problem, which increases the accuracy of the resulting model and reduces training cost. JumpStart also includes popular ML algorithms based on LightGBM, CatBoost, XGBoost, and Scikit-learn that developers can train from scratch for tabular regression and classification.
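As a rough illustration, the sketch below shows what incremental training can look like with the SageMaker Python SDK; the JumpStart model ID, instance type, and S3 paths are placeholders chosen for this example, not taken from the announcement.

import sagemaker
from sagemaker import hyperparameters, image_uris, script_uris
from sagemaker.estimator import Estimator

# Placeholder JumpStart model ID and version ("*" means latest).
model_id, model_version = "pytorch-ic-mobilenet-v2", "*"
role = sagemaker.get_execution_role()

# Retrieve the JumpStart training container image and training script for this model.
train_image_uri = image_uris.retrieve(
    region=None, framework=None, model_id=model_id, model_version=model_version,
    image_scope="training", instance_type="ml.p3.2xlarge")
train_source_uri = script_uris.retrieve(
    model_id=model_id, model_version=model_version, script_scope="training")

# For incremental training, model_uri points at the artifacts of a previous
# training job instead of the original pre-trained weights (placeholder path).
previous_model_uri = "s3://my-bucket/previous-job/output/model.tar.gz"

hp = hyperparameters.retrieve_default(model_id=model_id, model_version=model_version)
hp["epochs"] = "5"

estimator = Estimator(
    image_uri=train_image_uri,
    source_dir=train_source_uri,
    entry_point="transfer_learning.py",  # training script shipped with the JumpStart model
    model_uri=previous_model_uri,
    role=role,
    instance_count=1,
    instance_type="ml.p3.2xlarge",
    hyperparameters=hp,
    output_path="s3://my-bucket/incremental-output/")  # placeholder

# Retrain on the expanded dataset (placeholder S3 path).
estimator.fit({"training": "s3://my-bucket/expanded-dataset/"})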

This feature is the latest in a series of efforts to add more automation to SageMaker JumpStart. JumpStart initially launched in December 2020 as a SageMaker feature that helps developers quickly put machine learning into production using well-known deep-learning best practices. It deploys end-to-end models for common business problems with one click, requiring only a few parameters and configuration settings. It offers a collection of 300 models for tasks such as object detection, text classification, and text generation, drawn from popular open-source hubs like TensorFlow, PyTorch, Hugging Face, and MXNet. These features are available through Amazon SageMaker Studio (pictured below), a user-friendly GUI for launching ML products, as well as through the Amazon SageMaker SDK for easier integration with production ML pipelines.

Amazon SageMaker Studio with JumpStart
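For the SDK route, deploying one of the pre-trained JumpStart models follows a similar retrieve-and-deploy pattern. The sketch below reuses the same placeholder model ID as above and is illustrative only.

import sagemaker
from sagemaker import image_uris, model_uris, script_uris
from sagemaker.model import Model
from sagemaker.predictor import Predictor

model_id, model_version = "pytorch-ic-mobilenet-v2", "*"  # placeholder model ID
role = sagemaker.get_execution_role()

# Retrieve the inference container, inference script, and pre-trained model artifacts.
deploy_image_uri = image_uris.retrieve(
    region=None, framework=None, model_id=model_id, model_version=model_version,
    image_scope="inference", instance_type="ml.m5.xlarge")
deploy_source_uri = script_uris.retrieve(
    model_id=model_id, model_version=model_version, script_scope="inference")
base_model_uri = model_uris.retrieve(
    model_id=model_id, model_version=model_version, model_scope="inference")

# Wrap the artifacts in a SageMaker Model and deploy it behind a real-time endpoint.
model = Model(
    image_uri=deploy_image_uri,
    source_dir=deploy_source_uri,
    model_data=base_model_uri,
    entry_point="inference.py",  # inference script shipped with the JumpStart model
    role=role,
    predictor_cls=Predictor)
predictor = model.deploy(initial_instance_count=1, instance_type="ml.m5.xlarge")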

As part of this announcement, Amazon published sample code in Jupyter notebooks. These notebooks contain examples of how to use SageMaker JumpStart incremental training across different applications and domains.

Eugene Orlovsky, head of engineering at Staircase, wrote in a LinkedIn post about combining this capability with SageMaker Automatic Model Tuning to optimize hyperparameters:

With the last update, JumpStart got the possibility to train models incrementally. This way training of models with both the old data and new data will take much less time. Also, JumpStart received support for model tuning with SageMaker Automatic Model Tuning. This feature automates the process of searching for the best hyperparameter configuration for a model.
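Continuing the earlier training sketch, pairing a JumpStart estimator with SageMaker Automatic Model Tuning could look roughly like the following; the hyperparameter names, ranges, and metric regex are assumptions for illustration.

from sagemaker.tuner import ContinuousParameter, HyperparameterTuner, IntegerParameter

# Search ranges for two illustrative hyperparameters.
hyperparameter_ranges = {
    "learning_rate": ContinuousParameter(1e-5, 1e-2),
    "batch_size": IntegerParameter(8, 64),
}

tuner = HyperparameterTuner(
    estimator=estimator,  # the Estimator built in the earlier sketch
    objective_metric_name="val_accuracy",
    hyperparameter_ranges=hyperparameter_ranges,
    metric_definitions=[{"Name": "val_accuracy",
                         "Regex": "val_accuracy: ([0-9\\.]+)"}],
    objective_type="Maximize",
    max_jobs=10,
    max_parallel_jobs=2)

# Each tuning job retrains on the expanded dataset with a different configuration.
tuner.fit({"training": "s3://my-bucket/expanded-dataset/"})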

Other cloud platform providers, such as Azure ML and the Google ML APIs, offer capabilities to use pre-trained models in production, but incremental training there requires more coding by developers.

Open-source ML frameworks like TensorFlow, Keras, PyTorch, and MXNet provide incremental-training capabilities as part of their APIs for applications such as object detection and text analysis. Developers can use these APIs to fine-tune models on their own datasets, as in the sketch below.
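For comparison, a bare-bones incremental fine-tuning loop in an open-source framework such as Keras might look like this; the saved-model path, input shapes, and dataset are placeholders for illustration.

import tensorflow as tf

# Load the model produced by an earlier training run (placeholder path).
model = tf.keras.models.load_model("previous_model.h5")

# Freeze all but the last two layers so only the head adapts to the new data.
for layer in model.layers[:-2]:
    layer.trainable = False

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"])

# Stand-in for the newly collected data; shapes are illustrative and would need
# to match the loaded model's inputs in practice.
new_dataset = tf.data.Dataset.from_tensor_slices(
    (tf.random.uniform([32, 224, 224, 3]),
     tf.random.uniform([32], maxval=10, dtype=tf.int32))).batch(8)

# Continue training on the new data only.
model.fit(new_dataset, epochs=3)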
