TensorFlow 2.0 alpha was released a couple of days ago with a bunch of exciting features. It can be installed with the following command:

pip install -U --pre tensorflow

In this post I explore the 17 key features among them. The aim is to keep this short and crisp while still touching on all the major points; short code sketches for the more hands-on features are collected after the list.

  1. Improved tf.keras high-level API: TensorFlow 2.0 takes the compatibility between imperative Keras and DAG-driven TensorFlow to the next level by adopting tf.keras as its core high-level API. This makes prototyping and productionizing deep learning models faster, and because Keras is more intuitive it should draw more developers into deep learning (see the Keras sketch after the list).
  2. Eager execution by default: No need to create a session to execute the graph; TF 2.0 enables eager execution by default and moves all the session-related boilerplate under the hood (see the eager-execution sketch after the list).
  3. Compact and improved documentation: TF 2.0 offers better organized documentation. Most of it is available here: https://www.tensorflow.org/versions/r2.0/api_docs/python/tf
  4. Clarity: 2.0 takes clarity to the next level by removing duplicated functionality such as the multiple versions of GRU and LSTM cells that used to coexist. 2.0 picks the optimal kernel for the hardware automatically, giving developers a single unified implementation to reach for: one LSTM layer and one GRU layer.
  5. Low-level API: A full low-level API in tf.raw_ops, plus inheritable interfaces for variables, checkpoints and layers so you can define your own components (see the tf.Module sketch after the list).
  6. Easy upgrade: A conversion script, tf_upgrade_v2, converts TF 1.x code into TF 2.0 code automatically. Just run: !tf_upgrade_v2 --infile <input_file> --outfile <output_file>
  7. Backward compatibility: Comes with a separate backward-compatibility module, tf.compat.v1, for getting at the older 1.x components (see the compat sketch after the list).
  8. One optimizers module, one losses module, one layers module: A unified optimizers module under tf.keras.optimizers.*, with single losses and layers modules likewise under tf.keras.losses.* and tf.keras.layers.* (the Keras sketch after the list uses all three).
  9. Better graph visualization: 2.0 gives better graph visualizations in TensorBoard, even for Keras models (see the TensorBoard sketch after the list).
  10. Easy to distribute: Provides more options for scaling and multi-GPU training via the tf.distribute.Strategy module. For example, mirroring a model across the available GPUs:
      strategy = tf.distribute.MirroredStrategy()
      with strategy.scope():
          <define the model here>
  11. Save and load Keras models: Easy to save and load Keras models in the SavedModel format via tf.keras.experimental.* (see the save/load sketch after the list).
  12. Run Keras on TPUs: 2.0 comes with tf.distribute.experimental.TPUStrategy(), which lets Keras code run on TPUs (see the TPU sketch after the list).
  13. New datasets available: 2.0 comes with new datasets in the vision, audio and text domains to test models on (see the datasets sketch after the list).
  14. More pre-trained models on TF Hub: More pre-trained models from the worlds of NLP and vision are available on TensorFlow Hub (see the TF Hub sketch after the list).
  15. Improved error reporting: Error messages now include the exact line number and the full call stack.
  16. TF Federated: TensorFlow Federated supports federated learning on edge devices.
  17. Swift support and fast.ai: 2.0 will come with a Swift library (Swift for TensorFlow), and Jeremy Howard will be delivering a fast.ai course on it.
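
The Keras sketch (items 1 and 8): a minimal model built, compiled and summarized with the unified layers, optimizers and losses modules. The architecture and hyperparameters are arbitrary placeholders, not anything prescribed by the release.

    import tensorflow as tf

    # Build a small classifier with the Keras Sequential API (tf.keras at the core).
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu", input_shape=(32,)),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])

    # One optimizers module, one losses module, one layers module.
    model.compile(
        optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
        loss=tf.keras.losses.SparseCategoricalCrossentropy(),
        metrics=["accuracy"],
    )
    model.summary()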
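
The eager-execution sketch (item 2): ops run immediately and return concrete values, with no session to build or run.

    import tensorflow as tf

    x = tf.constant([[1.0, 2.0], [3.0, 4.0]])
    y = tf.matmul(x, x)   # executes right away, no session.run() needed
    print(y.numpy())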
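
The tf.Module sketch (item 5): one way to define your own component on the inheritable low-level interfaces, with tf.Module tracking the variables and plugging into tf.train.Checkpoint. The Linear class and the checkpoint path are illustrative, not part of the release itself.

    import tensorflow as tf

    class Linear(tf.Module):
        # tf.Module automatically tracks the tf.Variables created here.
        def __init__(self, in_dim, out_dim, name=None):
            super().__init__(name=name)
            self.w = tf.Variable(tf.random.normal([in_dim, out_dim]), name="w")
            self.b = tf.Variable(tf.zeros([out_dim]), name="b")

        def __call__(self, x):
            return tf.matmul(x, self.w) + self.b

    layer = Linear(3, 2)
    print(layer(tf.ones([1, 3])))
    tf.train.Checkpoint(layer=layer).save("/tmp/linear_ckpt")  # illustrative path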
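
The compat sketch (item 7): running 1.x-style graph-and-session code through tf.compat.v1, assuming tf.compat.v1.disable_eager_execution() to switch back to graph mode; the tiny graph is just an example.

    import tensorflow as tf

    tf.compat.v1.disable_eager_execution()   # fall back to 1.x graph mode
    x = tf.compat.v1.placeholder(tf.float32, shape=[None, 3])
    y = tf.reduce_sum(x, axis=1)
    with tf.compat.v1.Session() as sess:
        print(sess.run(y, feed_dict={x: [[1.0, 2.0, 3.0]]}))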
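
The TensorBoard sketch (item 9): the Keras TensorBoard callback writes the graph and the training metrics. The toy model, random data and logs directory are arbitrary.

    import numpy as np
    import tensorflow as tf

    model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(8,))])
    model.compile(optimizer="adam", loss="mse")

    tb = tf.keras.callbacks.TensorBoard(log_dir="logs")   # arbitrary log directory
    x = np.random.rand(100, 8).astype("float32")
    y = np.random.rand(100, 1).astype("float32")
    model.fit(x, y, epochs=2, callbacks=[tb])
    # Then inspect with: tensorboard --logdir logs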
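
The save/load sketch (item 11): exporting and restoring a Keras model with the experimental SavedModel helpers under tf.keras.experimental.* as they appear in the 2.0 alpha (export_saved_model / load_from_saved_model); the model and path are placeholders.

    import tensorflow as tf

    model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
    model.compile(optimizer="adam", loss="mse")

    path = "/tmp/keras_saved_model"                        # placeholder export directory
    tf.keras.experimental.export_saved_model(model, path)  # write the SavedModel
    restored = tf.keras.experimental.load_from_saved_model(path)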
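
The TPU sketch (item 12): a rough outline of pointing Keras at a TPU with the experimental TPUStrategy. The TPU address and any extra connection or initialization steps depend on the environment (Colab, GCP, etc.), so treat this as a sketch rather than copy-paste code.

    import tensorflow as tf

    # The TPU address is environment-specific; the value below is a placeholder.
    resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="grpc://<tpu-address>")
    tf.tpu.experimental.initialize_tpu_system(resolver)
    strategy = tf.distribute.experimental.TPUStrategy(resolver)

    with strategy.scope():
        model = tf.keras.Sequential([tf.keras.layers.Dense(10, input_shape=(32,))])
        model.compile(optimizer="adam", loss="mse")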
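
The datasets sketch (item 13): my reading is that the new datasets ship through the separate tensorflow_datasets (tfds) pip package; here is a sketch of loading MNIST as a tf.data pipeline.

    import tensorflow as tf
    import tensorflow_datasets as tfds   # pip install tensorflow-datasets

    datasets, info = tfds.load("mnist", with_info=True, as_supervised=True)
    train = (datasets["train"]
             .map(lambda img, label: (tf.cast(img, tf.float32) / 255.0, label))
             .shuffle(1024)
             .batch(32))
    print(info.splits["train"].num_examples)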
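
The TF Hub sketch (item 14): pre-trained modules drop into Keras via hub.KerasLayer from the separate tensorflow_hub package. The module handle below is only an example of a TF2-format image feature extractor; swap in whichever model fits your task.

    import tensorflow as tf
    import tensorflow_hub as hub   # pip install tensorflow-hub

    # Example handle: a TF2-preview MobileNetV2 feature extractor.
    handle = "https://tfhub.dev/google/tf2-preview/mobilenet_v2/feature_vector/4"
    model = tf.keras.Sequential([
        hub.KerasLayer(handle, input_shape=(224, 224, 3), trainable=False),
        tf.keras.layers.Dense(5, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")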

Source: TensorFlow Dev Summit 2019