
Victor Dibia on TensorFlow.js and Building Machine Learning Models with JavaScript

Victor Dibia is a Research Engineer with Cloudera’s Fast Forward Labs. On today’s podcast, Wes and Victor talk about the realities of building machine learning models in the browser, covering the capabilities, limitations, workflows, and practicalities of using TensorFlow.js. They wrap up by discussing techniques such as model distillation that may enable machine learning models to be deployed in smaller footprints, such as serverless environments.

Key Takeaways

  • While there are limitations to running machine learning workloads in a resource-constrained environment like the browser, tools like TensorFlow.js make it worthwhile. One powerful use case is making recommendations while still protecting users’ privacy, since the data never has to leave the browser.
  • TensorFlow.js takes advantage of the WebGL API for its more computationally intensive operations (the first sketch after this list shows selecting the WebGL backend).
  • TensorFlow.js enables three workflows: training and scoring models (inference) purely in the browser; importing a model built offline with more traditional Python tools; and a hybrid approach in which a model is built offline and fine-tuned online.
  • To build a model offline, you can use TensorFlow in Python (perhaps on a GPU cluster), export the model to the TensorFlow SavedModel format (or the Keras model format), and then convert it with the TensorFlow.js converter into the TensorFlow.js web model format. At that point, the model can be loaded directly from your JavaScript code (see the conversion sketch after this list).
  • TensorFlow Hub, made available by the Google AI team, is a library for the publication, discovery, and consumption of reusable parts of machine learning models. It can give developers a quick jumpstart with pre-trained models (see the TensorFlow Hub sketch after this list).
  • Model compression is a set of techniques that promises to make models small enough to run in places they could not run before. Model distillation is one example, in which a smaller student model is trained to replicate the behavior of a larger teacher model. In one case, BERT (a model almost 500MB in size) was distilled down to about 7MB, roughly a 70x compression (see the distillation sketch after this list).
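
To make the first two takeaways concrete, here is a minimal sketch that selects the WebGL backend and then trains a small model entirely in the browser. The model architecture and the random stand-in data are placeholders, not anything discussed on the podcast:

    import * as tf from '@tensorflow/tfjs';

    async function trainInBrowser() {
      // Ask TensorFlow.js to run its kernels on the GPU via WebGL.
      await tf.setBackend('webgl');
      await tf.ready();

      // A tiny model; the layer sizes here are arbitrary placeholders.
      const model = tf.sequential();
      model.add(tf.layers.dense({inputShape: [4], units: 8, activation: 'relu'}));
      model.add(tf.layers.dense({units: 1}));
      model.compile({optimizer: 'adam', loss: 'meanSquaredError'});

      // Random stand-in data; a real app would use user or sensor data.
      const xs = tf.randomNormal([32, 4]);
      const ys = tf.randomNormal([32, 1]);
      await model.fit(xs, ys, {epochs: 5});
    }

    trainInBrowser();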
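
The offline-to-online conversion workflow looks roughly like the sketch below. The paths and the input shape are placeholders; the converter ships with the tensorflowjs Python package:

    // Offline (shell): convert a SavedModel exported from Python TensorFlow
    // into the TensorFlow.js web model format (paths are placeholders):
    //
    //   tensorflowjs_converter --input_format=tf_saved_model \
    //       ./my_saved_model ./web_model
    //
    // (For a Keras model, use --input_format=keras and load the result
    // with tf.loadLayersModel instead.)

    // Online (browser): load the converted artifacts and run inference.
    import * as tf from '@tensorflow/tfjs';

    async function run() {
      const model = await tf.loadGraphModel('web_model/model.json');
      // The input shape is illustrative; it must match the exported model.
      const output = model.predict(tf.zeros([1, 4]));
      output.print();
    }

    run();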
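
TensorFlow.js can also load pre-trained models directly from TensorFlow Hub via the fromTFHub loader option. A sketch, where the MobileNet URL is illustrative:

    import * as tf from '@tensorflow/tfjs';

    async function classify() {
      // fromTFHub tells the loader to resolve a TensorFlow Hub URL;
      // the model URL below is an illustrative example.
      const model = await tf.loadGraphModel(
        'https://tfhub.dev/google/imagenet/mobilenet_v2_140_224/classification/2',
        {fromTFHub: true});
      const logits = model.predict(tf.zeros([1, 224, 224, 3]));
      logits.print();
    }

    classify();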
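
The distillation idea in the last takeaway can be sketched in a few lines of TensorFlow.js. This is a minimal, hypothetical illustration of the student-teacher setup with temperature-softened targets, not the actual procedure used to compress BERT; all model sizes, names, and data are made up:

    import * as tf from '@tensorflow/tfjs';

    const T = 4; // temperature that softens the teacher's output distribution

    function makeClassifier(hiddenUnits) {
      const m = tf.sequential();
      m.add(tf.layers.dense({inputShape: [16], units: hiddenUnits, activation: 'relu'}));
      m.add(tf.layers.dense({units: 10})); // raw logits, no softmax
      return m;
    }

    async function distill() {
      const teacher = makeClassifier(256); // stands in for a large pre-trained model
      const student = makeClassifier(16);  // far fewer parameters

      const xs = tf.randomNormal([128, 16]);

      // The student's training targets are the teacher's softened predictions.
      const softTargets = tf.tidy(() => tf.softmax(teacher.predict(xs).div(T)));

      // Cross-entropy between the softened teacher and student distributions.
      const kdLoss = (yTrue, yPred) =>
        yTrue.mul(tf.logSoftmax(yPred.div(T))).sum(-1).neg().mean().mul(T * T);

      student.compile({optimizer: tf.train.adam(1e-3), loss: kdLoss});
      await student.fit(xs, softTargets, {epochs: 5});
    }

    distill();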

More about our podcasts

You can keep up to date with the podcasts via our RSS feed, and they are available via SoundCloud, Apple Podcasts, Spotify, Overcast, and Google Podcasts. From this page you also have access to our recorded show notes, which have clickable links that take you directly to that part of the audio.
