Recurrent Neural Networks: Text Generation

One of the most active areas of research in Artificial Intelligence is Language Modeling. From AI assistants like Google Assistant, Siri, and Alexa to machine translation and text generation, this field is filled with innovation and opportunity. Language Modeling is powered by Recurrent Neural Networks (RNNs), and this course will not only teach you what they are and why they are so effective, but also introduce you to two of the most popular types of models – Generative and Discriminative.

You’ll learn all about:
  • Embeddings (including bag of words, one-hot encoding, and tuning your vocabulary size)
  • Using sampling to generate text (see the sketch just after this list)
  • Training your RNNs with Backpropagation Through Time (BPTT) to build a network that generates code
  • Problems with vanilla RNNs (the vanishing and exploding gradient problems)
  • Solving the vanishing gradient problem with Long Short-Term Memory (LSTM) cells
  • Gated Recurrent Units (GRUs) – their pros and cons versus the LSTM cell
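
As a taste of the sampling topic above, here is a minimal sketch of temperature-based sampling, assuming a model that outputs a probability distribution over a character vocabulary (the function name and the small smoothing constant are illustrative choices, not the course's exact code):

    import numpy as np

    def sample_next_char(probs, temperature=1.0):
        # probs: 1-D array of probabilities over the vocabulary (assumed input)
        # temperature < 1.0 sharpens the distribution; > 1.0 flattens it
        logits = np.log(probs + 1e-8) / temperature
        scaled = np.exp(logits)
        scaled /= np.sum(scaled)          # renormalize to a valid distribution
        return np.random.choice(len(scaled), p=scaled)

Lower temperatures make the generated text more conservative and repetitive; higher temperatures make it more surprising but less coherent.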

Once you’ve mastered these concepts, you’ll go on to build two RNNs: you’ll begin with one that classifies movie reviews, before creating your own text-generator RNN which – if you train it with enough data – will even write code for you!
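
To give a flavor of the first project, here is a minimal sketch of a movie-review classifier in Keras; the vocabulary size, sequence length, and layer sizes are illustrative assumptions, not the course's exact architecture:

    from keras.models import Sequential
    from keras.layers import Embedding, LSTM, Dense

    vocab_size = 10000   # assumed vocabulary size
    max_len = 200        # assumed review length after padding

    model = Sequential()
    # Map word indices to dense vectors (the embeddings topic above)
    model.add(Embedding(vocab_size, 32, input_length=max_len))
    # LSTM cells mitigate the vanishing gradient problem of vanilla RNNs
    model.add(LSTM(64))
    # A single sigmoid unit outputs the probability that a review is positive
    model.add(Dense(1, activation='sigmoid'))
    model.compile(loss='binary_crossentropy', optimizer='adam',
                  metrics=['accuracy'])
    model.summary()

You would then train it with model.fit on padded sequences of word indices and their positive/negative labels.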

Prerequisites

Intermediate Python programming skills and familiarity with Artificial Neural Networks

Tools and Frameworks

Python 3.5, Anaconda 5.0, NumPy 1.13, TensorFlow 1.4, Keras 2.1

Buy Now For $50

OR access ALL Zenva courses with our subscription.

  • Access all 250+ courses
  • New courses added monthly
  • Cancel anytime
  • Certificates of completion

Subscribe

New members: get 7 days of full access for free.