Aryan Singh's World

Data Science, Statistics, ML, Deep Learning

Tensorflow 2.0 in 2 minutes

Tensorflow 2.0-alpha was released a couple of days ago with a bunch of exciting features. It can be installed with the following command:

pip install -U --pre tensorflow

In this post I explore 17 key features. The aim is to keep this short and crisp while still touching on all the major points.

  1. Improvement in the tf.keras high-level API: Tensorflow 2.0 takes the compatibility between imperative Keras and graph-driven Tensorflow to the next level by adopting tf.keras as its core high-level API. This makes prototyping and productionizing deep learning models faster, and, Keras being more intuitive, should draw more developers towards deep learning (a short sketch follows this list).
  2. Eager execution by default: No need to create an interactive session to execute the graph. TF 2.0 enables eager execution by default, moving all the session-related boilerplate under the hood.
  3. Compact and improved documentation: TF 2.0 offers better organized documentation. Most of it is available here: https://www.tensorflow.org/versions/r2.0/api_docs/python/tf
  4. Clarity: 2.0 takes clarity to the next level by removing duplicate functionality, such as the multiple versions of GRU and LSTM cells that used to be available. 2.0 chooses the optimal implementation for the hardware automatically, giving the developer a single unified option to pick from: one LSTM and one GRU.
  5. Low-level API: A full low-level API is available in tf.raw_ops, with inheritable interfaces for variables, checkpoints and layers so you can define your own components.
  6. Easy upgrading: A conversion script, tf_upgrade_v2, converts TF 1.x code into TF 2.0 code automatically. Just run: !tf_upgrade_v2 --infile <input_file> --outfile <output_file>
  7. Backward compatibility: Comes with a separate backward-compatibility module, tf.compat.v1, for accessing the older components.
  8. One optimizers module, one losses module, one layers module: Unified optimizers under tf.keras.optimizers.*, and similarly a single losses module and a single layers module under tf.keras.losses.* and tf.keras.layers.*.
  9. Better graph visualization: 2.0 gives better graph visualizations in Tensorboard, even for Keras models.
  10. Easy to distribute: Provides more options for scaling and multi-GPU training via the tf.distribute.Strategy module:
    strategy = tf.distribute.MirroredStrategy()
    with strategy.scope():
        model = tf.keras.Sequential([tf.keras.layers.Dense(1)])  # define your model here
  11. Save and load Keras models: Easy to save and load Keras models using tf.keras.experimental.*.
  12. Run Keras on TPUs: 2.0 comes with tf.distribute.experimental.TPUStrategy(), which allows Keras code to run on TPUs.
  13. New datasets available: 2.0 comes with new datasets in the vision, audio and text domains to test models on.
  14. More pre-trained models at TF Hub: More pre-trained models from the worlds of NLP and vision are available at Tensorflow Hub.
  15. Improved error reporting: Improved error reporting with exact line number and full call stack.
  16. TF Federated: Tensorflow Federated to support federated learning on edge devices.
  17. Swift support and fast.ai: 2.0 is to come with a Swift library, and Jeremy Howard will be delivering a course on it.
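
A minimal sketch pulling points 1, 2 and 8 together (assuming Tensorflow 2.0-alpha is installed; the layer sizes and input shape here are made up purely for illustration):

    import tensorflow as tf

    # Eager execution is on by default: ops run immediately, no Session needed.
    x = tf.constant([[1.0, 2.0]])
    print(tf.square(x))  # evaluates right away

    # tf.keras is the core high-level API; optimizers, losses and layers
    # each live in one unified module.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu", input_shape=(10,)),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(),
                  loss=tf.keras.losses.MeanSquaredError())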

Source: Tensorflow Dev Summit 2019

What do NaMo’s speeches convey?

This weekend, while wandering the labyrinths of the internet, I stumbled upon a corpus of Indian Prime Minister Narendra Modi's speeches. I thought it would be interesting to analyse the speeches to see which issues he speaks about most and what the overall connotation of the speeches is. In this blog, I present my analysis along with visualizations in the form of graphs and plots.

Unigram and Bigram Frequency

I used CountVectorizer from sklearn's feature_extraction module to turn the text into frequency vectors and then summed over the rows to find the frequency of each word in the overall corpus. I then plotted the top 30 words by frequency on a bar plot. Following is the result I got:
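
Here is a minimal sketch of that step, assuming the speeches are already loaded into a list of strings called speeches (the variable name and the exact CountVectorizer settings are my assumptions, not the original notebook's):

    import matplotlib.pyplot as plt
    import pandas as pd
    from sklearn.feature_extraction.text import CountVectorizer

    # speeches: list of raw speech texts (assumed already loaded)
    vectorizer = CountVectorizer(stop_words="english", ngram_range=(1, 2))
    counts = vectorizer.fit_transform(speeches)

    # Sum over the rows (speeches) to get a corpus-wide frequency per term,
    # then plot the 30 most frequent terms.
    # Note: on older sklearn versions use get_feature_names() instead.
    freqs = pd.Series(counts.sum(axis=0).A1,
                      index=vectorizer.get_feature_names_out())
    freqs.sort_values(ascending=False).head(30).plot(kind="bar")
    plt.show()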

[Figure: top 30 words by frequency]
Since Mann Ki Baat is a programme aimed at listening to and addressing the problems of the people, the PM's main focus is on issues relating to poverty and water. He also talks about taking action, using words like time, make and great.

Most Frequent Nouns, Adjectives and Verbs

Next, I thought it would be interesting to POS-tag each speech to see which major issues the PM lays stress upon and how positive/willing he is to solve them. I POS-tagged the whole corpus using NLTK and then found the most common nouns, verbs and adjectives. I plotted the 16 most common of each in the form of a word cloud to visualise them and draw inferences. Here is what I got:
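
A rough sketch of the tagging step, assuming the whole corpus is a single string called corpus (again, the names are mine):

    from collections import Counter
    import nltk

    nltk.download("punkt")
    nltk.download("averaged_perceptron_tagger")

    # corpus: full text of all the speeches (assumed already loaded)
    tagged = nltk.pos_tag(nltk.word_tokenize(corpus.lower()))

    # Penn Treebank tags: NN* = nouns, VB* = verbs, JJ* = adjectives
    nouns = Counter(w for w, t in tagged if t.startswith("NN"))
    verbs = Counter(w for w, t in tagged if t.startswith("VB"))
    adjs = Counter(w for w, t in tagged if t.startswith("JJ"))

    print(nouns.most_common(16))  # the 16 most common nouns go into the cloud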

Nouns Cloud:

[Figure: noun word cloud]

It is clear from the word cloud that the main issues being highlighted relate to basic amenities: water, villages, farmers. Interestingly enough, yoga is a recurring part of the conversation. The PM has also addressed black money, but its frequency is on the lower side.

Verbs Cloud:

[Figure: verb word cloud]

The verbs mostly have a positive connotation. Words like think, make, started and done indicate an action-oriented approach.

Adjectives Cloud:

[Figure: adjective word cloud]

The adjectives reveal the basic essence of the major fields/issues the PM is targeting. Youth, the poor and Digital India initiatives are among the most frequent areas touched upon.

Sentiment Analysis Of The Speeches

Next, I analysed each speech for its sentiment score to understand whether the connotation is positive or negative and how it has changed over time. I used the TextBlob library to score each speech. Here is what the time-series sentiment score looks like:
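
The scoring itself is essentially a one-liner per speech with TextBlob; a minimal sketch, assuming a dict speeches_by_date mapping dates to speech text (my assumed structure, not the original notebook's):

    from textblob import TextBlob

    # polarity lies in [-1, 1]: negative to positive connotation
    scores = {date: TextBlob(text).sentiment.polarity
              for date, text in speeches_by_date.items()}
    # plotting scores against date gives the time series below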

[Figure: sentiment score over time]

Looking at the overall analysis, the speeches don't seem to be all that positive. This might be because they are aimed at addressing the issues people face on a daily basis. Overall, the years 2015 and 2016 are more positive than the other years.


Code: https://github.com/aryancodify/NaturalLanguageProcessing/blob/master/modi_speeches.ipynb

Dataset: https://www.kaggle.com/shankarpandala/mann-ki-baat-speech-corpus

Gloria: An IoT-Enabled Chatbot Ecosystem

Over the years, the idea of inanimate things talking has been an intriguing subject. Movies like Beauty and the Beast have added an uncanny whimsy to it. Five years ago, if somebody had told me that my fridge would one day talk to me and tell me when to buy milk and vegetables, I would have shrugged my shoulders and laughed it off too. But advances in artificial intelligence and natural language processing have brought this dream within the grasp of reality. Simultaneously, developments in the Internet of Things have given all the devices surrounding us a medium of communication. Gloria is an ecosystem that lies at the crossroads of these three phenomena: it combines AI, NLP and IoT into a compact system that lets you control everything around you at the command of your voice.


[Figure: Gloria flow diagram]

Gloria Architecture and Flow

An Android application listens for voice commands from the user and converts them to text using the android.speech library. A server hosts a Mosquitto message broker and a Java application with an Artificial Intelligence Markup Language (AIML) component. The text command is transferred to the Java application via the MQTT message broker, where it is processed by the AIML component, and the corresponding reply is sent back to the smartphone over the same broker; the conversation is also saved to a Cassandra database for future reference. Depending on the reply received, the smartphone either calls Google APIs to fetch information the user requested, such as weather and news, or controls one of the appliances via Bluetooth Low Energy and an Arduino circuit. Simultaneously, the Android application converts the textual reply to speech and speaks it back to the user. Thus Gloria helps users get the information they need and empowers them to control their environment just by giving voice commands to a mobile application.
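
The real bridge between the phone and the AIML engine is a Java application, but a minimal sketch of the same subscribe/reply loop in Python with paho-mqtt would look like this (the broker address, topic names and the process_with_aiml handler are illustrative assumptions, not Gloria's actual configuration):

    import paho.mqtt.client as mqtt

    COMMAND_TOPIC = "gloria/command"  # assumed topic names
    REPLY_TOPIC = "gloria/reply"

    def on_message(client, userdata, msg):
        # Text command arrives from the phone over the broker.
        command = msg.payload.decode("utf-8")
        reply = process_with_aiml(command)  # hypothetical AIML handler
        client.publish(REPLY_TOPIC, reply)  # send the reply back

    client = mqtt.Client()
    client.on_message = on_message
    client.connect("localhost", 1883)  # Mosquitto broker
    client.subscribe(COMMAND_TOPIC)
    client.loop_forever()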


Gloria, being an ecosystem, has various applications and use cases:

  1. Helping differently abled and elderly people control appliances without any hindrance.
  2. Helping retail business owners by providing a touch of personalization for their customers. A customer could request information like store location, merchandise availability and various other FAQs, and would get the details in a human voice.
  3. Moreover, such devices could be placed in stores, where a user, instead of looking for an employee, asks about the product he/she wants on one of the Android/iOS devices placed at a kiosk. The device not only points the customer to the exact location of the product but also tells the customer about offers on it.
  4. Smart homes and offices are two of the major applications.
  5. Voice-controlled security solutions with theft monitoring.
  6. Setting reminders for meetings and alarms to wake up.
  7. Retrieving any information from the internet.
  8. Learning what the user tells it and then giving suggestions on that basis.
  9. Auto-replying to calls with customised messages.

Awards:

  1. Winner of Global POC Challenge 2016.

[Photo: Global POC Challenge 2016 win]

  2. Fourth most powerful smart city idea at Nasscom's TechNgage Smart Cities Hackathon.

[Photos: Nasscom TechNgage Smart Cities Hackathon]

Source Code:

Gloria Source Code.