
Getting Started in Deep Learning with TensorFlow 2.0


Summary

Brad Miro explains what deep learning is, why one may want to use it over traditional ML methods, as well as how to get started building deep learning models using TensorFlow 2.0. He walks through an example step-by-step of how to build an image classifier in Python, and then showcases how to leverage a technique called transfer learning to make building a model even easier.

Bio

Brad Miro is currently a Developer Programs Engineer at Google where he specializes in machine learning and big data solutions. He is passionate about educating the world about artificial intelligence both by empowering developers and improving societal understanding.

About the conference

Software is changing the world. QCon empowers software development by facilitating the spread of knowledge and innovation in the developer community. A practitioner-driven conference, QCon is designed for technical team leads, architects, engineering directors, and project managers who influence innovation in their teams.

Transcript

Miro: My name is Brad Miro. I'm a developer programs engineer at Google. What that means is I spend the majority of my time split between traditional software engineering and events such as this one - going out and speaking, meeting with people, and getting face time. I personally love meeting with customers and developers, learning about the things you're working on and how we at Google can help you do your work better. I specialize in machine learning and big data solutions, which is why I'm here today to tell you about TensorFlow, specifically getting started with deep learning in TensorFlow 2.0.

Some of the things I'm going to discuss with you today: an introduction to deep learning generally, touching on what machine learning is as well; then I'll introduce TensorFlow, tell you what it is and how it's used internally at Google. I'll then discuss 2.0 and go into some examples of the things you're able to do with it. Then we'll discuss how you can actually get started, get TensorFlow into your hands, and begin developing machine learning and deep learning models. Let's get to it.

Deep Learning

We'll start off with an introduction to deep learning. I just want to do a quick shout out to my colleague, Dale Markowitz. There are a bunch of doodles that you'll be seeing in the upcoming slides, and these are all thanks to her brilliant artistic work. She and I are also going to be doing a workshop on Thursday on natural language processing and machine learning on social media data, so hopefully we'll see you there.

In traditional machine learning methods, oftentimes you'll have labeled data, and a lot of the time it's in the form of numeric data. In this case, we may have an example of data pertaining to cats and dogs. Let's say for each of these animals you might have the height, the width of the animal, and the weight, and then you can pass these into a traditional machine learning model such as linear regression. By training on the actual data itself, you might be able to create some form of best-fit line where, given any new data, if it's on one side of the line we can identify this as a cat, otherwise we can identify it as a dog.

If we use something like linear regression, we'll start with random parameters and see how the line ends up looking, and then we'll train to reduce the error and get a more accurate model.
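
For reference, here's a minimal sketch of that idea - fitting a line to labeled points with gradient descent, starting from random parameters. The data and numbers are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=100)            # e.g., animal heights
y = 2.0 * X + 1.0 + rng.normal(0, 1, 100)   # noisy labels

w, b = rng.normal(), rng.normal()           # random starting parameters
lr = 0.01
for _ in range(500):
    error = (w * X + b) - y
    # Gradients of the mean squared error with respect to w and b
    w -= lr * 2 * np.mean(error * X)
    b -= lr * 2 * np.mean(error)

print(f"learned w={w:.2f}, b={b:.2f}")      # should approach 2.0 and 1.0
```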

Then what happens if we go to pictures, if we were to use something such as images? Images are what we call unstructured data, in that it's a bit less obvious what the actual data means at the computer level if you're a human looking at it. For instance, if you're looking at data about cats and dogs where you have height, width and weight, those make sense to us. We can look at that and understand what it means. But if you're looking at raw pixel data, which is effectively what images are to a computer, that's much harder for us to understand.

This is where we can introduce something called deep learning, which is a framework, an algorithm for creating models on what we call unstructured data. In this case, we could use pictures of cats and pictures of dogs that are labeled - a corpus of data that says these are pictures of cats and a corpus that says these are pictures of dogs - and we can train a neural network on them.

Let's say we have this image of a cat. The data is represented as each pixel having its own red, green and blue values. To a computer, this ends up being a three-dimensional tensor of values, where each entry holds the red, green and blue values for a given X and Y coordinate of the picture.

A neural network is effectively a machine learning algorithm that we classify under the deep learning umbrella, where you have several layers of what we call neurons. Each of these tends to do different things, but at a fundamental level, what this will do is take all of your input data - in this case, the pixel values of the photos - sum them up, and then apply what's called an activation function. An activation function can really just be any function applied to the data that introduces nonlinearity. A common one is what's known as the ReLU function, which effectively takes all of the values that are less than zero and sets them to zero, and leaves the positive values as they are. Introducing this nonlinearity is what lets the model represent something more than a straight line.
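
To make the two ideas above concrete - an image as a rank-3 tensor, and ReLU zeroing out negatives - here's a small illustrative snippet (the 64x64 size is arbitrary):

```python
import tensorflow as tf

# A 64x64 RGB "image": one red/green/blue value per (x, y) position,
# i.e. a rank-3 tensor of shape (height, width, channels).
image = tf.random.uniform((64, 64, 3), minval=0, maxval=255)
print(image.shape)  # (64, 64, 3)

# ReLU: values below zero become zero, positive values pass through.
x = tf.constant([-2.0, -0.5, 0.0, 1.5, 3.0])
print(tf.nn.relu(x))  # [0. 0. 0. 1.5 3.]
```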

If you pass data through the model, it'll end up giving you a prediction. In this case, we're just doing binary classification, so the output will tell us the likelihood that this data is a picture of a cat and the likelihood that it's a picture of a dog. Say we pass in a cat photo's pixel values and it tells us there's a 20% chance this is a cat and an 80% chance this is a dog. Obviously that's not what we want, so we'll train on this and adjust the values of the model so that it better recognizes this as a cat photo and not a photo of a dog.

What we're going to end up doing is train this model by passing pictures of cats and pictures of dogs through each layer of the model. Since the data is labeled, the model is able to learn that these are pictures of cats and these are pictures of dogs, so that when it sees new pictures it's able to predict what it is, hopefully with very high accuracy. If you have a new picture - for instance, something like this - it might be a little difficult to discern whether it's a cat or a dog if you're a human, but the idea is that a computer can do this exceptionally well using these deep neural networks.

I want you to notice that there are multiple layers here. What actually makes up a neural network is fundamentally just three things. There's the input layer, which is the layer of the model where you actually feed in the data, then the output layer, which is what gives you a prediction. You can see here at the very top that there's a layer that just has two dots on it: one of those will be for cat, one for dog. Then all of the layers in the middle are what we call the hidden layers. These are what actually give deep learning its name: these are deep models. Really what these layers are doing is looking for features within the data.

The idea here is that if you're passing in an image, the pixels in and of themselves don't actually mean much. What deep learning is really good at is taking these pixels and creating meaning from them for a computer. That's done with this series of layers, and in this case we have a five-layer neural network: the input layer where you give it the picture, the output layer which gives you the prediction, and then the three layers in the middle.

The way this works is that as you progress through the model, the layers end up giving you more detailed features, and the model is able to discern more intricate parts of the image. The first layers generally look for edges - edges typically consist of lines, curves and just very basic shapes. The second layer in this case might look for more intricate shapes, so you might start to see circles, ovals and squares, and sometimes you might even be able to discern shapes that pertain to the images you're training on. Then the last layer is where you really start to see things that are representative of the data - in this case, you might start to see paws or the faces of dogs or cats. Then you bring this all together and it gives you a prediction with some degree of accuracy.

When should you use deep learning? Take a lot of these with a grain of salt, but the general idea is that deep learning works exceptionally well when you have a lot of data. If you use traditional machine learning methods, a lot of them tend to plateau as you keep feeding them data; the models just won't get better once you pass some threshold. Neural networks, on the other hand, are really great at continuing to use new data to improve the accuracy, or whatever metric you're trying to optimize for.

Deep learning is great for problems that we call complex; the traditional examples of this include speech, vision and natural language. You actually saw a great example of this in the keynote this morning, when Ashi [Krishnan] was using generative adversarial networks - that's a method of deep learning, so you're all in the right place following the keynote, which is awesome.

These complex problems typically deal with what we call unstructured data. These are data sets that, on their own, don't necessarily mean much to a computer, or even to us. If you were to look at a sound wave of me speaking right now, it probably wouldn't make any sense to any of you, but the idea is that a computer is able to leverage this data to figure out what it means. So, using unstructured data - speech, vision and natural language are the three canonical examples we like to talk about.

If you're looking for the absolute best model - if going from 99.9% accuracy to 99.99% accuracy is important to you, which in a lot of cases it is - deep learning is a really great algorithm to start using to increase the accuracy, or whatever metric of your model you're optimizing for. That being said, there are cases where you shouldn't necessarily use deep learning. From what I've seen in speaking with data scientists and machine learning engineers, a lot of people are very eager to - as I like to say - deep learning all of the things, and just throw it at any problem they have. In some cases that's correct, but a lot of the time it ends up being overkill, for some of these reasons.

One is that you don't necessarily have a large data set. As I was mentioning, deep learning is very data intensive and to really get the benefits out of using deep learning you want a huge data set. There are ways around this for sure but if you're going to build a model from scratch, you definitely want a lot of data.

You may be performing sufficiently well with traditional machine learning methods; maybe linear regression works for you, maybe a random forest works for you. Something about deep learning is that it tends to be very computationally expensive to train - it can take a lot of time. There are some models that may take on the order of multiple weeks to train. This doesn't necessarily happen with other methods. If you're going to use deep learning, you really want to make sure it's correct for your use case. You definitely want to try other machine learning methods first.

If your data is structured and you possess the proper domain knowledge to do feature engineering, maybe deep learning isn't for you, just because you're able to find those features yourself that deep learning would otherwise find for you.

TensorFlow

Now you know about deep learning; maybe we should talk about how you can get started doing it. I happen to know a framework that's perfectly suited for deep learning: TensorFlow.

I'm just going to tell you a little bit about what TensorFlow is. It's an open-source deep-learning library developed by Google. We work on it both internally and in very close collaboration with the engineering community; it's up on GitHub. TensorFlow contains a lot of utilities to help you write neural networks.

The thing about deep learning is that at a low level, there's a lot of mathematics that goes into it - a lot of statistics, a lot of calculus, a lot of linear algebra - and between that and having the computer actually optimize for those things at an even lower level, that's a lot to learn just to get started building this stuff. What TensorFlow does is build a lot of those libraries and lower-level utilities into the framework itself so that you can start to leverage them.

TensorFlow also has support for GPUs and TPUs, standing for graphics processing units and tensor processing units respectively. These are pieces of hardware that are perfectly suited for doing computations on matrices, which is effectively what TensorFlow, and deep learning in general, is doing - and this support is built right in.

TensorFlow today has over 2,000 contributors on GitHub, which is super exciting - so many people are interested in building on this, growing with it, and using it in their projects.

The Beta version of 2.0 actually was just released this month. The Alpha version was released in March and it's awesome to see that the project is just continuing to grow.

TensorFlow has been around externally since 2015. It actually existed internally at Google before that, since it's what we use to power all of the artificial intelligence we do in-house. As you can see, there's been so much growth on the project; so many different libraries and connectors have been developed for it as the project has continued to grow. It's used all over the world. I love looking at this map and seeing so many orange dots all over the place - here in the northeast, on the West Coast, Europe, Japan, just a lot of really awesome places to see TensorFlow being used. Some more numbers: over 41 million downloads, 55,000 commits. A lot of growth on this project, and it's just super exciting to see the adoption of the product.

TensorFlow @ Google

I want to tell you a bit about how we use TensorFlow internally. This is machine learning at Google: we use TensorFlow. Just to give you some examples - we have a lot of data centers, and given the scale we're operating at, those data centers naturally use a lot of power. What we're able to do is use artificial intelligence, in tandem with TensorFlow, to help optimize them. The idea is that we can optimize power consumption, which ends up being better for the environment and, of course, costs less money. It's really good that we're able to leverage these sorts of things to improve our data centers.

In a more consumer-facing realm, we use TensorFlow to power Google Maps. For those of you who may have used this feature before, what you can actually do with your phone is add augmented reality to your experience: if you're just walking down the street, as you can see in the example here, it can point you in the right direction of where you're trying to go, and TensorFlow is what's powering the augmented reality in this experience.

Also, in the realm of mobile, TensorFlow is used to power portrait mode on Google Pixel, which I personally think is really awesome.

Then a bit more on the research side. Here's an example of music being generated. The idea is that you can move the slider around what's effectively a synth pad, and then, given the X and Y coordinates, it can predict what the synthesized sound should be. This is running in the browser using TensorFlow.js, TensorFlow for JavaScript, which I'm going to tell you more about later.

We're using TensorFlow for medical research. On the left here you'll see an image of what we would consider a retinal image for a healthy eye, and then on the right, we have one for a patient that's been diagnosed or may potentially have what we call diabetic retinopathy. We're actually able to use computer vision trained with TensorFlow to be able to detect which one of these might be a healthy eye and which one of these might possibly be an unhealthy eye. There's a lot of really awesome work going on in the medical community especially in the realm of computer vision and radiology, so, super exciting.

Then this one is my personal favorite. I love astronomy and I'm a huge astronomy buff. We're using TensorFlow to help detect exoplanets - planets that exist elsewhere in the galaxy. Here's a bit about how this works. Imagine that I'm holding a flashlight and then I move my finger in front of it; the light that you'd be able to see would diminish a little bit. This concept works similarly with distant stars. As objects pass in front of one, its brightness decreases, and we're able to monitor this and produce graphs like you can see on the right here. We can then predict whether these are actually representative of planets or not, and we use TensorFlow and these sorts of models to help predict this.

TensorFlow 2.0

Let's talk a bit specifically about 2.0. We just introduced TensorFlow generally but now I want to share why I'm up here talking to you about 2.0 and why I'm personally really excited about it. For those of you who have used TensorFlow before, and I know I'll speak to this personally as well, there are sometimes parts of it that can tend to be frustrating. There are several different ways to do certain things. Running your models using session.run can sometimes feel a bit unnatural if you're used to using Python more traditionally, and so we've taken feedback both from engineers internally and also engineers in the community to help improve the developer experience.

We want to make TensorFlow as easy to use as possible, so one of the first things we've done is simplify the APIs. There were multiple ways of doing the same thing; we've removed a lot of those so that there are just one or two ways to do something, to make your lives easier. We've also incorporated Keras as the default high-level API for TensorFlow. For those of you who have used Keras before, you might agree that the API is just super easy to use. It allows you to build deep neural networks very easily - each layer ends up being a single line of code - and we'll show some examples later.

The reason for this is that Keras is a really nice developer experience. Keras is an API spec; what that means is that Keras itself does not have an engine powering it - it relies on something such as TensorFlow or Theano to power it. We've adopted Keras into TensorFlow, and we'll talk a bit more about that later.

We've also adopted eager execution by default, which means that your TensorFlow code runs effectively like NumPy, just line by line. We also want TensorFlow to be powerful. In 2.0 it's perfectly suited for research, production, any use case, any workload. Given that we use it at Google, it runs at Google scale, so it's scalable to your use case in a distributed manner, both in the cloud and locally. Depending on your use case, there should be a solution there for you.

TensorFlow is extremely flexible and you can deploy it, as we like to say, effectively anywhere. You can use TensorFlow Extended to deploy it on your servers, you can use TensorFlow Lite to deploy it on edge devices - your phone or a Raspberry Pi - and we also have TensorFlow.js to deploy it in the browser. You can actually build models and then run them natively in browser applications.

How does this work? The idea is that any model you build with TensorFlow can be exported to what we call the SavedModel format. On the left here, you see what might be a traditional training pipeline: you load the data in, you use something like Keras or the TensorFlow Estimators, which are effectively black-box models, you distribute it over your hardware needs - CPU, GPU, TPU - and then you can export it as a SavedModel and load it anywhere. In this case, you could use TensorFlow Serving as part of TensorFlow Extended, you could use TensorFlow Lite for edge devices, and you could also use TensorFlow.js, as well as any of the other language bindings that we have available - for instance, C, Java, Go, C#, Rust, just to name a few. A lot of these are community-driven, but they're super interesting projects and I definitely recommend checking them out if any one of these languages is your forte.
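
As a rough sketch of that pipeline in TF 2.0, a Keras model can be exported to the SavedModel format and loaded back; the layer sizes and the path here are placeholders:

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(10, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Export to the SavedModel format; the directory can then be served by
# TensorFlow Serving, converted for TensorFlow Lite, or loaded elsewhere.
tf.saved_model.save(model, "/tmp/my_model")

# Reload the same artifact later (or in another process).
restored = tf.saved_model.load("/tmp/my_model")
```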

We also have a lot of other projects being worked on that use TensorFlow in the back end. To go into a few of these - you'll notice they're all TF-related in their naming to some degree. For instance, one of them is TensorFlow Agents, which contains tools for building reinforcement learning algorithms right on top of TensorFlow. TF Text is a natural language processing library with a lot of tools built on top of TensorFlow - super interesting. These are more niche projects, depending on your use case.

Earlier I was discussing eager execution, and what that means is that you can effectively run TensorFlow 2.0 like NumPy. For those of you who have used TensorFlow 1.X, you'll remember you needed to call session.run and initialize your variables - there were just a lot of TensorFlow-specific things you had to do, which could sometimes be frustrating or confusing when getting started. Now you don't need to do that. This code - just a few lines - runs exactly like NumPy: defining a constant, in this case a 2x2 matrix, multiplying it, and then printing it. It works.
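
The code being described looks roughly like this:

```python
import tensorflow as tf

# With eager execution on by default, this runs line by line like NumPy:
# no session.run, no graph setup.
a = tf.constant([[1.0, 2.0],
                 [3.0, 4.0]])
b = tf.matmul(a, a)  # multiply the 2x2 matrix by itself
print(b)             # tf.Tensor([[ 7. 10.] [15. 22.]], shape=(2, 2), ...)
```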

Here are some specifics in terms of the API - what's gone and what's new. I keep mentioning this: session.run - no more, don't use it, you don't have to worry about it. The stuff runs just like NumPy, so you can use it just like NumPy. Control dependencies and the global variables initializer - gone. tf.cond and tf.while_loop - also gone. You can now just use Python's native reserved words, so you can use if statements and while statements in your TensorFlow code. We'll discuss later why that is, but for now you can take solace in knowing that it's a thing.

Then tf.contrib has been moved out of the core TensorFlow library. It's an awesome package, and it has grown so large - we're all super excited about that - but it became too large to include in the core distribution, so we've moved it out of TensorFlow into a separate repository. That's just something you should know.

As for what's new: eager execution is enabled by default, Keras, which I keep mentioning, is the high-level API, and then there's this new thing we call tf.function. This is how we were able to get rid of the conditionals and the while loops inside the core TensorFlow library: tf.function is just a wrapper on top of any Python code that will then get translated to run on the TensorFlow engine.

tf.keras

I want to talk a little bit about Keras now. I keep mentioning it and I want to give it a little more love. We're using Keras as our main high-level API. As I mentioned earlier, Keras serves as an API spec, and we're using it in TensorFlow: you import TensorFlow how you normally would, and then tf.keras is what you use. The creator of Keras, François Chollet, actually works at Google, so we work very closely with him on the development of the Keras library itself - which still exists separately from tf.keras - because we want the libraries to be very similar and to create as great a user experience as we can.

For those of you who have used Keras: normally you would just import keras, but with tf.keras it's just "from tensorflow import keras". It's exactly the same, it gives you the same experience, and you'll see very familiar code.
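
In other words, the only change is the import line:

```python
# Standalone Keras (requires the separate keras package):
import keras

# Keras inside TensorFlow 2.0: the same API, shipped with TensorFlow.
from tensorflow import keras
```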

When you're actually writing code with Keras, there are two ways I like to describe that you can go about it. One is what we say is for beginners, the other for experts. Those are very loose definitions; I personally love the beginner method more - I think it's easier, and it has suited my use cases more than the other method has. This here is effectively a neural network: a three-layer network with an input layer, a dense layer, and an output layer, in just a few lines of code. You create these as a sequential model object, then you compile the model. What this will do is make sure the layers actually line up.

The idea when you build a deep learning network is that what goes out of each layer has to match the size of what goes into the next. Compiling effectively makes sure this works and that the objects inside the sequential model are valid. Assuming this runs, you're off to the races: you can fit the model on your training data and then evaluate it on your test data. That's what we say is for beginners. For what we say is for experts, you can subclass your model: you use traditional Python classes to create your own model, add a call method, and treat it like an object.
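
Here's a minimal sketch of both styles, with arbitrary layer sizes; the commented fit/evaluate calls assume you have training and test data at hand:

```python
import tensorflow as tf

# "Beginner" style: a sequential stack of layers, one line per layer.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),    # input layer
    tf.keras.layers.Dense(128, activation="relu"),    # hidden layer
    tf.keras.layers.Dense(10, activation="softmax"),  # output layer
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(x_train, y_train, epochs=5)
# model.evaluate(x_test, y_test)

# "Expert" style: subclass tf.keras.Model and define a call method.
class MyModel(tf.keras.Model):
    def __init__(self):
        super().__init__()
        self.flatten = tf.keras.layers.Flatten()
        self.hidden = tf.keras.layers.Dense(128, activation="relu")
        self.out = tf.keras.layers.Dense(10, activation="softmax")

    def call(self, x):
        return self.out(self.hidden(self.flatten(x)))
```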

What’s the Difference?

What's the difference between these two? You obviously saw visually that they look a lot different - the code is different. The idea is that if you're using the symbolic, or what we say is the beginner, method, your model is a graph of layers: if it compiles, it'll run. This will really help you - it'll save you a lot of time by catching errors at compile time, so you don't necessarily have to worry about them once the model makes it to production. It removes a lot of the headaches for you.

With the subclassing method, your model runs as Python bytecode. You do have complete flexibility and control over what you're working on, but that makes it harder to debug, harder to maintain, and it means more work for you. Depending on your use case, you'll end up going with one of these; try both to see what works - they both definitely have their pluses and minuses.

tf.function

I want to talk a little bit about tf.function. Let's say we have some basic code here: we're creating a function that just returns an LSTM cell. This is effectively a type of layer in a neural network. There are different types of layers - I don't want to go too much into this for the sake of staying focused - but the idea is that you can use different layers. Here we're just creating a function that returns one of these.

Let's say we want to speed this up: we add the tf.function wrapper to it. We might have another function such as this, which just sums over an array and then runs the hyperbolic tangent function on it. By using tf.function, we can use this in TensorFlow 2.0. What actually happens when we do this? We've included this nifty function, tf.autograph.to_code: if you pass a function in, it'll show you what the code compiles to. You don't need to memorize this by any means, but it's sometimes interesting to see. This fairly readable Python code ends up looking like this, which is, of course, much messier and something you should never write yourself. That's how it works under the hood, and if you're ever curious you can use that function to see.
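
A small sketch of the pattern being described - not the exact demo code from the talk, just an illustrative function using tf.reduce_sum and tf.tanh with plain Python control flow that AutoGraph converts:

```python
import tensorflow as tf

@tf.function
def shrink(x):
    # Plain Python control flow; AutoGraph rewrites it into graph ops.
    while tf.reduce_sum(x) > 1:
        x = tf.tanh(x)
    return x

print(shrink(tf.constant([2.0, 2.0])))

# Peek at the (much messier) code the function compiles to.
print(tf.autograph.to_code(shrink.python_function))
```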

tf.distribute.Strategy

Distribution strategies. The idea here is that if you're using TensorFlow, let's say on your laptop, you want to start small. Often when you're building machine learning and deep learning models, you want to start locally and start on a subset of your data to make sure that things are actually working and you're seeing some progress.

Let's say you work on that and then you want to move it to your production cluster. Here you may have some code running on your laptop. Then to run it in production over your hardware requirements, all you have to do is put it within the scope of your distribution strategy. In this case, I'm showing you an example of the MirroredStrategy, which is effectively able to take your model and replicate it over multiple GPUs. It's super simple to do - before and after, that's all you have to do.
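
A minimal sketch of what that looks like; the model and data here are placeholders:

```python
import tensorflow as tf

# Replicate the model onto every available GPU; gradients are synced for you.
strategy = tf.distribute.MirroredStrategy()

with strategy.scope():
    # Build and compile the model inside the scope; nothing else changes.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(10, activation="relu", input_shape=(4,)),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")

# model.fit(dataset, epochs=10)  # trains across the mirrored replicas
```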

tensorflow_datasets

Next I want to tell you about TensorFlow Datasets. One of the problems I see facing a lot of organizations is not necessarily the models themselves, but actually being able to acquire the correct data. Models are only as good as their data, so you need to make sure your data is right and that it works for your use case. What we've done is provide a lot of data sets for you via the tensorflow_datasets library. They're super easy to use: you pip install the package, import it, and then load a dataset by name. They come with the training and test sets pre-split for you, and then you can load them in and use them how you would any other dataset.
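
For example, loading a dataset by name with the pre-defined train/test split ("mnist" here is just one of the available names):

```python
# pip install tensorflow-datasets
import tensorflow_datasets as tfds

# Datasets come with the train/test split already defined.
train_ds, test_ds = tfds.load("mnist",
                              split=["train", "test"],
                              as_supervised=True)

for image, label in train_ds.take(1):
    print(image.shape, label)  # (28, 28, 1) and a scalar label
```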

We have tons of examples available for you at tensorflow.org/datasets. Some of these might be familiar to those of you who have worked in machine learning: the Titanic data set, IMDB, the cats and dogs data set that I showed you in the previous slide, ImageNet, CIFAR-10 - a bunch of really awesome examples available for any use case.

Transfer Learning

I mentioned earlier that you only really want to get started building deep learning models from scratch if you have a lot of data. In the event that you don't, there's a technique called transfer learning which can save you time and resources in getting a working model. How does transfer learning work? Remember this slide, where I explained that each layer ends up doing a different thing: the earlier layers look just for edges - straight lines, curves - the next layer might look for shapes - circles, squares - and the last layers may look for higher-level features. In the case of cats and dogs, that might be a paw.

The idea is that these earlier layers actually end up looking fairly similar across different models and use cases. A straight line is a straight line whether you're comparing cats and dogs, boats and trucks, buildings, anything - it ends up being the same. What we're able to do is leverage a pre-trained model for our use case, which saves a lot of time. Transfer learning is built into TensorFlow in a variety of ways. One of them, if you're using Keras, is the tf.keras.applications package: you load a model in, specifying the weights and the data set the model was trained on - in this case, we said imagenet.

We can set this model's trainable attribute to false, which means that as we train our own model, we don't adjust the weights of the pre-trained one. We know it works, we don't want it to change, so we tell Keras to leave it alone, and then we just add it to our model. You can see at the bottom of the slide that the base model ends up becoming just another layer - that's the majority of the model that we then don't have to train. This trains the full model faster and, in a lot of cases, more accurately, because these pre-trained models were trained on very large datasets. It just ends up making your life a lot easier.
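
Putting that together, here's a hedged sketch of the pattern using tf.keras.applications; MobileNetV2 and the input size are just one possible choice:

```python
import tensorflow as tf

# Load a network pre-trained on ImageNet, without its classification head.
base_model = tf.keras.applications.MobileNetV2(input_shape=(160, 160, 3),
                                               include_top=False,
                                               weights="imagenet")
base_model.trainable = False  # keep the pre-trained weights frozen

# The pre-trained network becomes just another layer in our model;
# only the small head on top is actually trained.
model = tf.keras.Sequential([
    base_model,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # e.g., cat vs. dog
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
```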

We also have TensorFlow Hub. This is a project that contains many different pre-trained models for you at tfhub.dev; you can take a look and search for models particular to your use case. We have examples for computer vision, for natural language, for speech detection. There are a lot of awesome ones here, so I definitely encourage you to check it out if you're interested in building models. What I think is honestly really cool is that between the tensorflow_datasets library and TensorFlow Hub, you effectively have everything you need to build a model: you have the data, you have the pre-trained model, and you can mix and match these and really get anything that you want, which I think is really interesting.
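
A sketch of what using a TensorFlow Hub module looks like with hub.KerasLayer; the module URL is illustrative - browse tfhub.dev for current ones:

```python
# pip install tensorflow-hub
import tensorflow as tf
import tensorflow_hub as hub

# Pull a pre-trained feature extractor from tfhub.dev and wrap it as a
# Keras layer, frozen so its weights don't change during training.
feature_extractor = hub.KerasLayer(
    "https://tfhub.dev/google/tf2-preview/mobilenet_v2/feature_vector/4",
    input_shape=(224, 224, 3),
    trainable=False)

model = tf.keras.Sequential([
    feature_extractor,
    tf.keras.layers.Dense(2, activation="softmax"),
])
```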

Upgrading

We've talked about TensorFlow 2.0, but what if you're already on 1.X and want to upgrade? We have a couple of things available to help you do this. We have migration guides: if you go to tensorflow.org, there's a lot of information there that will help you find your specific use case and migrate from 1.X to 2.0. We also have the tf.compat.v1 module, which helps with backwards compatibility if there are differences in the APIs, or things in 1.X that haven't yet been ported over to 2.0. And we have an upgrade script available for you. For anyone who's ever done a Python 2 to 3 conversion, this will be a fairly similar experience: you run the script and it outputs the changes between the APIs - what it was in 1.X and what it became in 2.0.
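
For reference, the upgrade script ships with the TensorFlow 2.0 package as tf_upgrade_v2; a typical invocation looks something like this (the file names are placeholders):

```
# Rewrite a 1.X script to 2.0-compatible code and report the API changes.
tf_upgrade_v2 --infile my_model.py --outfile my_model_v2.py
```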

Getting Started

What if you just want to get started - not necessarily upgrading, you just want to learn more about the project and how you can begin to leverage it? We have tons of resources available for you. The first thing to do is just pip install it. It's super easy; the Beta is available now. Anyone can open up their laptop - I hope to see some of you do it after the talk.
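
The install itself is one line; the exact version tag below is the beta tag current around the time of this talk and may have changed since:

```
pip install tensorflow==2.0.0-beta1
```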

You can also go to tensorflow.org and start learning about what's available. We have codelabs, we have the documentation - there's just a plethora of information there, so that's definitely the first place I would suggest going.

We also have two courses available for you. We have partnerships with Coursera and Udacity that will help you get started. I definitely suggest either one of these; they're really great. We're also working with Andrew Ng of deeplearning.ai, who's very heavily involved in the deep learning community, and you can get started taking courses there as well.

Then I want to talk briefly about two other projects that are going on. One of these is Swift for TensorFlow and the other is TensorFlow.js. We're heavily investing in Swift for TensorFlow right now, Swift being a super exciting language - one I'm definitely hoping to get more experience with in the coming months. For those of you who have used Python before, it's a great language, it's my personal favorite, but there are some shortcomings that we feel Swift can potentially fix, given that it's based on C, and there's a lot there that's super exciting. I encourage you to do some research yourself to see if it's something you're interested in. Then TensorFlow.js - running TensorFlow in the browser - is super exciting as well. I actually learned more about this through the paper about why it exists. It's available on arXiv and I definitely recommend checking it out to see how you can actually do deep learning in the browser. It's pretty exciting stuff.

Come find us on GitHub if you have any questions or if you just want to get involved in the community and start contributing. We have a very active GitHub community, so definitely check us out there - hope to see you. With that as a call to action, I encourage all of you to go build. Again, I hope to see some laptops out and some of you pip installing.

Questions and Answers

Moderator: Thank you, Brad, for the awesome introduction to deep learning and all the wonderful stuff in TensorFlow. If I'm a newbie and I want to get started, should I learn TensorFlow API or should I learn about Keras APIs?

Miro: The answer is yes. Since they're one and the same, I definitely think that learning TensorFlow with Keras is the best way to do it. Keras is super great; it allows you to really get started easily. If I had to list one downside of Keras, it's that since so much of the detail is abstracted away from you, it won't necessarily teach you this stuff at a mathematical level. But if you're interested in just getting started quickly, Keras is definitely the right way to go.

Participant 1: You were referring to reinforcement learning. What is reinforcement learning about?

Miro: With machine learning and deep learning generally, a lot of the traditional approaches involve training on data, either labeled or unlabeled. In the case of building a classifier for cats versus dogs, you're working with data that already has labels. Reinforcement learning is another subset of machine learning algorithms, one that doesn't necessarily rely on labeled or unlabeled data. It generally learns from experience.

One example from a few years ago was an agent playing "Super Mario." The way it worked was that the computer would control the character, and then, based on how far along in the game the character got or how many points Mario ended up with, the computer would learn whether it was doing good things or bad things. It's given some reward function, so it knows what's good to do and what's bad to do, and by taking initially random actions and then learning from them, it's able to improve. A lot of game AI nowadays uses reinforcement learning to help create better agents.
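
As a loose illustration of that loop - not any particular library's API - here is a tiny value-update sketch where every name is made up:

```python
import random

# The agent acts, the environment returns a reward, and the agent's
# value estimates improve over time. All names here are illustrative.
q_values = {}                # (state, action) -> estimated reward
actions = ["left", "right", "jump"]

def choose_action(state, epsilon=0.1):
    # Mostly exploit what we know; sometimes explore a random action.
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: q_values.get((state, a), 0.0))

def update(state, action, reward, lr=0.1):
    # Nudge the stored estimate toward the reward actually observed.
    old = q_values.get((state, action), 0.0)
    q_values[(state, action)] = old + lr * (reward - old)
```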

Participant 2: Can you talk a little bit more about TensorFlow on edge devices, especially with 2.0? Are there any upgrades? For example, if I want to run image classification or something on a Raspberry Pi - with 1.0, I used to get a frame rate of just a few FPS or so, or otherwise go for a low-resolution camera. With 2.0, are there any significant improvements or anything like that for edge devices?

Miro: To be honest with you, I don't spend as much time with TensorFlow Lite, so I can't tell you anything specific. If you want to chat afterward though, I'd be happy to point you to some resources if you want to learn more about some of the differences.

Participant 3: I have two questions. One is, in the financial industry, what are the use cases so far, real use cases from large banks or maybe brokerage houses? The second question is, what's the best Thai restaurant in New York City?

Miro: Let's start with the first question. One of the common use cases I see for this stuff is actually fraud detection. Given the velocity at which credit card and other transactions occur, we can use machine learning to predict whether or not a transaction is fraudulent. That's a consistent use case I see. There's also some work in time series analysis - in the event of prices changing in the stock market, you can sometimes build some models there. Another common method for that is Monte Carlo simulation; there are a lot of toy examples online of people doing that, so that might be another place to explore.

Then the best Thai restaurant - I'm drawing a blank on the name. A good friend of mine, who's also big into Thai, just recommended one that's supposedly the best. One I personally really like is called Lantern, which is near the East Village. Definitely recommend that.

Participant 4: Thank you very much for this. I don't know how familiar you are with your competitors, but we started out working with TensorFlow, then we switched to PyTorch because it had a more pleasant API at the time. Now we're being enticed back to TensorFlow by Swift for TensorFlow. I wonder if you could help to entice me further. Are there further enticements about TensorFlow 2.0 that would make me want to switch back even faster?

Miro: First off, I want to say, I love PyTorch. I think it's an absolutely awesome framework. Something I've noticed with both TensorFlow and PyTorch is that I've seen features exist first in one and then show up in the other, for a bevy of reasons. People will have their own opinions on this, but something I love about TensorFlow is that, since it's been around longer, there are more resources available for specific problems you may have. That's one instance. There's also the fact that you can use TensorFlow across multiple different environments - something like on edge. I'm not sure if PyTorch has edge capabilities, but with TensorFlow you can use Swift, you can use JavaScript. It's more flexible and you have different options, especially if you want the same model everywhere. If you're building an application and you want it to run in the browser or on edge, from my experience TensorFlow may be better suited for that. I think those are the big ones, but, again, I think PyTorch is great, and both of them definitely have their strengths and weaknesses.

Participant 5: As a software engineer, what would you recommend to learn more about these topics?

Miro: It depends on where you want to start. Generally, if you want to really dive into this stuff and learn it from square one, I recommend the Coursera course by Andrew Ng, which is just an introduction to machine learning. I think it's a super great place to start. If you already know machine learning, or you don't care as much about the fundamentals, he also has a specialization on Coursera that's an intro to deep learning. I have taken both - I have taken several courses, and I always end up coming back to these. They're just really good.

 


Recorded at:

Aug 19, 2019
