Introduction to Bilinear Interpolation
Meta-Learning
This post introduces a new area of machine learning, meta-learning, whose aim is to learn how to learn. For example, if we train a model on multiple learning tasks, such as speech recognition and image classification, then when it faces a new task like text classification it will learn faster and better, since it has already seen many similar tasks. In a nutshell, meta-learning algorithms produce a base network that is specifically designed for quick adaptation to new tasks from few-shot data.
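To make the idea concrete, here is a minimal Reptile-style sketch of one such meta-update (one flavor of meta-learning, not necessarily the method in the post). The helper `sample_task_batches`, which yields few-shot (x, y) batches for one randomly drawn task, and all hyperparameters are illustrative assumptions.

```python
import copy
import torch
from torch import nn

def reptile_step(base_model, sample_task_batches, inner_lr=0.02, meta_lr=0.1, inner_steps=5):
    """One meta-update: adapt a copy of the base network to a sampled task,
    then move the base weights toward the task-adapted weights."""
    task_model = copy.deepcopy(base_model)                     # clone the base network
    opt = torch.optim.SGD(task_model.parameters(), lr=inner_lr)
    loss_fn = nn.MSELoss()
    for x, y in sample_task_batches(inner_steps):              # few-shot adaptation on one task
        opt.zero_grad()
        loss_fn(task_model(x), y).backward()
        opt.step()
    with torch.no_grad():                                      # the meta-update itself
        for p_base, p_task in zip(base_model.parameters(), task_model.parameters()):
            p_base += meta_lr * (p_task - p_base)
```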
DP-Graph Convolutional Networks
Graph Convolutional Networks (GCNs) are a type of neural network designed to work directly on graphs and leverage their structural information. In this post, I will introduce how information is propagated through the hidden layers of a GCN.
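As a rough sketch of that propagation step, a single layer following the common Kipf & Welling rule H' = ReLU(D^{-1/2}(A + I)D^{-1/2} H W) could look like the following; the class and variable names are mine, not from the post.

```python
import torch
from torch import nn

class GCNLayer(nn.Module):
    """One propagation step: H' = ReLU(D^-1/2 (A + I) D^-1/2 H W)."""
    def __init__(self, in_features, out_features):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features, bias=False)

    def forward(self, adj, h):
        # adj: (N, N) adjacency matrix; h: (N, in_features) node features
        a_hat = adj + torch.eye(adj.size(0))          # add self-loops
        d_inv_sqrt = a_hat.sum(dim=1).pow(-0.5)       # D^-1/2 as a vector
        a_norm = d_inv_sqrt[:, None] * a_hat * d_inv_sqrt[None, :]
        return torch.relu(self.linear(a_norm @ h))    # aggregate neighbours, then transform

adj = torch.tensor([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])  # a 3-node path graph
h = torch.randn(3, 8)
print(GCNLayer(8, 4)(adj, h).shape)                   # torch.Size([3, 4])
```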
NLP-TorchText
If you’ve ever worked on a deep learning project for NLP, you’ll know how painful and tedious all the preprocessing is. Before you start training your model, you have to:
- Read the data from disk
- Tokenize the text
- Create a mapping from each word to a unique integer
- Convert the text into lists of integers
- Load the data in whatever format your deep learning framework requires
- Pad the text so that all the sequences are the same length and can be processed in batches
Torchtext is a library that makes all the above processing much easier.
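As a rough idea of how these steps collapse into a few lines, here is a minimal sketch using the classic torchtext Field / TabularDataset / BucketIterator API (these classes moved under torchtext.legacy in newer releases). The file name, column layout, and numeric labels are assumptions for illustration.

```python
from torchtext.data import Field, TabularDataset, BucketIterator
# (in torchtext >= 0.9 the same classes live under torchtext.legacy.data)

# Declare how each column is processed: tokenization, lowercasing, numericalization.
TEXT = Field(sequential=True, tokenize=lambda s: s.split(), lower=True, batch_first=True)
LABEL = Field(sequential=False, use_vocab=False, is_target=True)  # assumes integer labels

# "train.csv" with columns text,label is an assumed file layout.
train_data = TabularDataset(path="train.csv", format="csv", skip_header=True,
                            fields=[("text", TEXT), ("label", LABEL)])

TEXT.build_vocab(train_data, min_freq=2)     # word -> unique integer mapping

# BucketIterator groups examples of similar length and pads each batch.
train_iter = BucketIterator(train_data, batch_size=32,
                            sort_key=lambda ex: len(ex.text), shuffle=True)

for batch in train_iter:
    print(batch.text.shape)   # (batch_size, padded_sequence_length)
    break
```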
DP-Classical CNNs
In this post, we’ll summarize many of the new and important developments in the field of computer vision and convolutional neural networks.
The 9 Deep Learning Papers You Need To Know About (Understanding CNNs Part 3)
NLP-Seq2Seq
There’s a whole class of NLP tasks that rely on sequential output, or outputs that are sequences of potentially varying length. For example,
- Translation: taking a sentence in one language as input and outputting the same sentence in another language.
- Conversation: taking a statement or question as input and responding to it.
- Summarization: taking a large body of text as input and outputting a summary of it.
This post therefore introduces sequence-to-sequence models, a deep learning-based framework for handling these kinds of problems.
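For a sense of the architecture, here is a minimal GRU encoder-decoder sketch trained with teacher forcing; the vocabulary sizes, dimensions, and random inputs are illustrative assumptions, not taken from the post.

```python
import torch
from torch import nn

class Seq2Seq(nn.Module):
    """Minimal GRU encoder-decoder with teacher forcing."""
    def __init__(self, src_vocab, tgt_vocab, emb_dim=64, hid_dim=128):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, emb_dim)
        self.tgt_emb = nn.Embedding(tgt_vocab, emb_dim)
        self.encoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.decoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, tgt_vocab)

    def forward(self, src, tgt):
        _, hidden = self.encoder(self.src_emb(src))           # compress the source sequence
        dec_out, _ = self.decoder(self.tgt_emb(tgt), hidden)  # decode conditioned on it
        return self.out(dec_out)                              # (batch, tgt_len, tgt_vocab) logits

src = torch.randint(0, 1000, (8, 12))   # a batch of 8 source sentences, length 12
tgt = torch.randint(0, 1500, (8, 10))   # shifted target sentences, length 10
print(Seq2Seq(1000, 1500)(src, tgt).shape)   # torch.Size([8, 10, 1500])
```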
NLP-Dependency Parser
The dependency structure of a sentence shows which words depend on (modify or are arguments of) which other words. These binary asymmetric relations between words are called dependencies and are depicted as arrows going from the head (or governor, superior, regent) to the dependent (or modifier, inferior, subordinate).
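A toy example of how such a parse can be represented as head-to-dependent arcs; the sentence and the Universal Dependencies-style relation labels are made up for illustration.

```python
# Toy sentence; index 0 is the artificial ROOT node.
sentence = ["She", "reads", "long", "books"]
tokens = ["ROOT"] + sentence

# Each arc: (head index, dependent index, relation label)
arcs = [
    (0, 2, "root"),    # ROOT  -> reads
    (2, 1, "nsubj"),   # reads -> She
    (2, 4, "obj"),     # reads -> books
    (4, 3, "amod"),    # books -> long
]

for head, dep, rel in arcs:
    print(f"{tokens[head]} --{rel}--> {tokens[dep]}")
```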