
Overview of Changes in Tensorflow Version 1.3

by Roland Meertens on Jul 31, 2017. Estimated reading time: 3 minutes


Although it has only been a month since the release of version 1.2.1, there have been many changes in version 1.3. Developers can find an extensive release report on the GitHub page of Tensorflow. This article lists the most important changes developers should know about before upgrading to Tensorflow v1.3.

From cuDNN 5.1 to cuDNN 6

Developers upgrading from 1.2.1 to version 1.3 also need to update the cuDNN version on their machine. The binary distribution is now compiled against NVIDIA's cuDNN 6, whereas 1.2.1 used cuDNN 5.1. Developers who don't want to upgrade can still build their own binaries from the source code. The new cuDNN version brings significant performance improvements for softmax layers. An interesting feature added to cuDNN 6 is dilated convolutions. Tensorflow already supports this operation, but does not yet use the new cuDNN functionality for it. Note that since Tensorflow release 1.1.0, the GPU on Mac is no longer supported. Although the developers still welcome patches, there is no guarantee you can get it to work.

The tf.contrib.data.Dataset class

The tf.contrib.data.Dataset class received several important changes. Using this class, developers can create a uniform input pipeline for their data, whether it comes from tensors in memory or from files on disk in many data formats. It can also be used to apply a function to each individual element using the Dataset.map() function, or to combine consecutive elements into batches with the Dataset.batch() function. Functions in this class that expect nested structures now implicitly convert lists to tf.Tensor objects. Users who don't want this behaviour should use tuples instead. There are also several new functions in the Dataset class:

  • Dataset.list_files(file_pattern): returns a Dataset of strings corresponding to file names that match the file_pattern argument.
  • Dataset.interleave(map_func, cycle_length): gives programmers more control over how a function is mapped to each element. It still applies map_func across the whole dataset, but interleaves the results, which is useful for processing many input files concurrently.
  • ConcatenateDataset: a class that extends the Dataset class. Its init function takes two datasets, which are concatenated using the already existing Dataset.concatenate() function.
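The interleaving behaviour of Dataset.interleave can be sketched in pure Python (a hypothetical helper for illustration, not Tensorflow's own implementation): map_func is applied to up to cycle_length inputs at a time, and one result is drawn from each open iterator in turn.

```python
def interleave(inputs, map_func, cycle_length):
    """Pure-Python sketch of Dataset.interleave semantics (hypothetical helper).

    Applies map_func (which must return an iterable) to up to cycle_length
    inputs at a time and yields one item from each open iterator in turn.
    """
    inputs = iter(inputs)
    active = []
    # Open up to cycle_length iterators at once.
    for _ in range(cycle_length):
        try:
            active.append(iter(map_func(next(inputs))))
        except StopIteration:
            break
    while active:
        survivors = []
        for it in active:
            try:
                yield next(it)
                survivors.append(it)
            except StopIteration:
                # Replace the exhausted iterator with the next input, if any.
                try:
                    survivors.append(iter(map_func(next(inputs))))
                except StopIteration:
                    pass
        active = survivors

# Two "files" whose records are read concurrently and interleaved:
records = list(interleave(["a", "b"], lambda f: [f + "1", f + "2"], cycle_length=2))
print(records)  # ['a1', 'b1', 'a2', 'b2']
```

With cycle_length=1 the helper degenerates to processing the inputs one after another, which mirrors plain Dataset.flat_map-style behaviour.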

For more information, developers should take a look at the guide on using the Dataset class in the programmer's guide on GitHub.

High level API functions and statistical distributions

Although there are already many high level API functions available for Keras and TFLearn users, Tensorflow added the following functions to its library: DNNClassifier, DNNRegressor, LinearClassifier, LinearRegressor, DNNLinearCombinedClassifier, DNNLinearCombinedRegressor. These estimators are part of the tf.contrib.learn package, and the Tensorflow documentation describes how to use them.

Another new addition is the large set of statistical distributions. Each class represents a statistical distribution and is initialized with the parameters that define it. Many univariate and multivariate distributions are already present. Developers can also extend the existing classes, but must keep supporting all functions present in the Distribution base class. In case of invalid properties, developers can either have their program throw an exception or choose to deal with the resulting NaN values. A short example of how developers can get a tensor of random variables from a uniform distribution is listed below:

 

Changes to existing functions

There are also some small changes to existing functions. The tf.gather function, used to index within a particular axis of a tensor, now has an axis parameter allowing for more flexible gathering.  
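The axis-based gathering semantics can be illustrated with NumPy's take function (not Tensorflow's own API; the index values are illustrative):

```python
import numpy as np

x = np.array([[1, 2, 3],
              [4, 5, 6]])

# Gather along axis 0 (rows) -- the only behaviour before the axis parameter:
rows = np.take(x, [1, 0], axis=0)   # [[4, 5, 6], [1, 2, 3]]

# Gather along axis 1 (columns) -- the kind of call the new parameter enables:
cols = np.take(x, [0, 2], axis=1)   # [[1, 3], [4, 6]]
```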

The tf.pad function, used to put values around an existing tensor, now supports a constant_values argument in "CONSTANT" mode. Previously the value 0 was always added as padding to an existing tensor; now users can specify what value they want to pad with. The already available "REFLECT" and "SYMMETRIC" modes are unchanged.
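NumPy's pad function mirrors this behaviour and can illustrate it (again not Tensorflow's own API; the fill value 9 is illustrative):

```python
import numpy as np

x = np.array([[1, 2],
              [3, 4]])

# Constant padding with the default fill value of 0 (the old behaviour):
zeros = np.pad(x, pad_width=1, mode="constant")

# Constant padding with an explicit fill value, analogous to the new option:
nines = np.pad(x, pad_width=1, mode="constant", constant_values=9)
print(nines)
# [[9 9 9 9]
#  [9 1 2 9]
#  [9 3 4 9]
#  [9 9 9 9]]
```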

Leave your feedback

Although this news article covers the most important changes, there are many additional changes and features developers may deem important. I invite developers to add information about version 1.3 of Tensorflow, additional changes I forgot to write about, pitfalls you found in existing code, or anything else in the comments below. Visitors that are not yet registered with InfoQ can do so and help many fellow developers.


Ask for help by Roland Meertens

Hey everyone, I'm the author of this article.
Tensorflow releases go very fast nowadays, and I noticed it's hard to keep up. This weekend I spent some time going over the changelog, searching for changed parts of Tensorflow that might be important for me and others.
If you have any questions about this version update, or in general about Tensorflow, leave a comment under the article!

selu activation by He No

Introduction of the selu activation to TF.contrib.keras is significant in TF 1.3. Not sure if it's been introduced in any other section of the codebase yet. Check out selu, one of the more significant advances in deep learning over the past few years.

"CONSTANT" mode not new by Lasse Borgholt

Just a heads up: the "CONSTANT" mode is not new. What is new, however, is the option to pad with a different value than zero when using tf.pad with "CONSTANT" mode. This argument is called "constant_values", which has probably caused the confusion.

Re: selu activation by Roland Meertens

Mmm, looks like selu was added so recently that it is not yet in the official TF 1.3 (maybe in RC2?). Great point by the way! Thank you for adding it. If you see anything else, let me know!

Re: "CONSTANT" mode not new by Roland Meertens

Thanks Lasse! Will change the article! If you see anything else, let me know!
