One of the most active areas of research in Artificial Intelligence is Language Modeling. From AI assistants like Google Assistant, Siri, and Alexa, to machine translation and text generation, this field is filled with innovation and opportunity. Language Modeling is often powered by Recurrent Neural Networks (RNNs), and this course will not only teach you what they are and why they are so effective, but also introduce you to two of the most popular types of models: Generative and Discriminative.
You’ll learn all about:
- Embeddings, including bag-of-words, one-hot encoding, and tuning your vocabulary size (see the first sketch after this list)
- Using sampling to generate text (second sketch)
- Training your RNNs using backpropagation through time (BPTT) to write code that generates code (third sketch)
- Problems with vanilla RNNs: vanishing and exploding gradients
- Solving the vanishing gradient problem with Long Short-Term Memory (LSTM) cells
- Gated Recurrent Units (GRUs) and their pros and cons versus the LSTM cell (compared in the final sketch below)
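To give you a taste of the first topic, here is a minimal sketch of one-hot and bag-of-words encodings in plain Python with NumPy; the toy vocabulary and helper names are illustrative, not taken from the course materials.

```python
import numpy as np

vocab = ["i", "loved", "hated", "this", "movie"]
word_to_idx = {word: i for i, word in enumerate(vocab)}

def one_hot(word):
    """A vector that is all zeros except a 1 at the word's vocabulary index."""
    vec = np.zeros(len(vocab))
    vec[word_to_idx[word]] = 1.0
    return vec

def bag_of_words(sentence):
    """Sum of one-hot vectors: word order is lost, but word counts are kept."""
    return sum(one_hot(w) for w in sentence.lower().split() if w in word_to_idx)

print(one_hot("movie"))                    # [0. 0. 0. 0. 1.]
print(bag_of_words("I loved this movie"))  # [1. 1. 0. 1. 1.]
```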
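Sampling is how a trained language model turns probabilities into text: instead of always picking the most likely next word, it draws from the model’s output distribution. Here is a minimal sketch, assuming the model emits one logit per vocabulary word; the temperature parameter is an illustrative extra, not something specific to this course.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def sample_next(logits, temperature=1.0):
    """Draw the next word's index from the softmax of the model's logits."""
    logits = np.asarray(logits, dtype=float) / temperature
    probs = np.exp(logits - logits.max())  # subtract the max for numerical stability
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

# Greedy decoding (argmax) would always pick index 0 here; sampling adds variety.
logits = [2.0, 1.0, 0.2, -1.0, 0.5]
print([sample_next(logits) for _ in range(5)])
```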
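Backpropagation through time sounds exotic, but in modern frameworks it falls out of unrolling the RNN over the sequence and running the usual backward pass. A minimal sketch of one training step, assuming PyTorch (the course’s framework isn’t stated here) and dummy data:

```python
import torch
import torch.nn as nn

rnn = nn.RNN(input_size=8, hidden_size=16, batch_first=True)
head = nn.Linear(16, 8)  # map hidden states back to the feature space
optimizer = torch.optim.Adam(list(rnn.parameters()) + list(head.parameters()))

x = torch.randn(4, 12, 8)        # (batch, time steps, features) of dummy data
targets = torch.randn(4, 12, 8)

outputs, _ = rnn(x)              # the RNN is unrolled across all 12 time steps
loss = nn.functional.mse_loss(head(outputs), targets)

optimizer.zero_grad()
loss.backward()                  # gradients flow back through every time step: BPTT
optimizer.step()
print(loss.item())
```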
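As for the LSTM-versus-GRU trade-off: the GRU folds the LSTM’s separate cell state into its hidden state and uses three gates instead of four, so it has fewer parameters and trains faster, at the cost of some modeling capacity. A quick comparison, again assuming PyTorch:

```python
import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=32, hidden_size=64, batch_first=True)
gru = nn.GRU(input_size=32, hidden_size=64, batch_first=True)

n_params = lambda m: sum(p.numel() for p in m.parameters())
print("LSTM parameters:", n_params(lstm))  # 4 gates
print("GRU parameters:", n_params(gru))    # 3 gates, roughly 25% fewer

x = torch.randn(1, 10, 32)   # (batch, sequence length, features)
out, (h, c) = lstm(x)        # the LSTM carries a hidden state AND a cell state
out, h = gru(x)              # the GRU carries only a hidden state
```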
Once you’ve mastered these concepts, you will go on to build two RNNs: you’ll begin with one that classifies movie reviews for you, before creating your own text-generator RNN which, if you train it with enough data, will even write code for you! A skeleton of the review classifier is sketched below.
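For a flavor of that first project, here is a minimal skeleton of an LSTM-based review classifier, assuming PyTorch and integer-encoded reviews; every layer size and name here is illustrative rather than the course’s actual architecture.

```python
import torch
import torch.nn as nn

class ReviewClassifier(nn.Module):
    def __init__(self, vocab_size=10_000, embed_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)  # learned word embeddings
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 1)              # one score: positive vs. negative

    def forward(self, token_ids):
        embedded = self.embed(token_ids)       # (batch, sequence length, embed_dim)
        _, (hidden, _) = self.lstm(embedded)   # keep only the final hidden state
        return torch.sigmoid(self.head(hidden[-1]))

model = ReviewClassifier()
fake_review = torch.randint(0, 10_000, (1, 20))  # one 20-token integer-encoded review
print(model(fake_review))                        # probability the review is positive
```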